linux
|
applied instead of white.
|
Per discussion with Richard, accept the type key for insert() and find() as a
template parameter rather than as std::type_info*. This permits (e.g.) some
sort of compile-time prehashing for common types, without changing the API.
Eliminate iterators from the API altogether, thus avoiding costs associated
with transform_iterator.
Fix existing references in llinitparam.h.
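
For illustration, a minimal sketch of what a template-parameter key API could look like (the class and member names here are hypothetical, not the actual LLTypeInfoLookup interface); the KEY type travels as a template argument, so a specialization could prehash common types at compile time without touching the signatures:

    #include <map>
    #include <string>
    #include <typeinfo>
    #include <utility>

    template <typename VALUE>
    class TypeKeyedLookup
    {
    public:
        // The key type is a template parameter, not a std::type_info*.
        template <typename KEY>
        bool insert(const VALUE& value)
        {
            return mMap.insert(std::make_pair(std::string(typeid(KEY).name()),
                                              value)).second;
        }

        template <typename KEY>
        const VALUE* find() const
        {
            typename MapType::const_iterator found = mMap.find(typeid(KEY).name());
            return (found == mMap.end()) ? NULL : &found->second;
        }

    private:
        typedef std::map<std::string, VALUE> MapType;
        MapType mMap;
    };

Usage would look like lookup.insert<MyType>(value) and lookup.find<MyType>(), with no iterators exposed.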
|
Back out code that selects LLTypeInfoLookup for the underlying map
implementation when KEY = [const] std::type_info*, because LLTypeInfoLookup's
API is changing to become incompatible with std::map. Instead, fail with
STATIC_ASSERT when LLRegistry's KEY is [const] std::type_info*.
Fix all existing uses to use std::type_info::name() string instead.
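
A rough sketch of the compile-time guard (using BOOST_STATIC_ASSERT as a stand-in for the viewer's STATIC_ASSERT macro; the surrounding class is illustrative, not the real LLRegistry):

    #include <string>
    #include <typeinfo>
    #include <boost/static_assert.hpp>
    #include <boost/type_traits/is_same.hpp>

    template <typename KEY, typename VALUE>
    class Registry
    {
        // Refuse to instantiate with a [const] std::type_info* key; callers
        // should key on the std::type_info::name() string instead.
        BOOST_STATIC_ASSERT((!(boost::is_same<KEY, std::type_info*>::value
                               || boost::is_same<KEY, const std::type_info*>::value)));
        // ... map-based implementation elided ...
    };

    Registry<std::string, int> ok;                 // fine
    // Registry<const std::type_info*, int> bad;   // would fail at compile time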
|
enable Havok Hybrid (fulldebug) libs to link in Windows RelWithDebInfo. On other platforms, that flag will cause RelWithDebInfo to link against Havok fulldebug libs. The rest of the time, RelWithDebInfo will link to Havok Debug and Debug will link to Havok fulldebug.
|
context-sensitive menu option of "Show in linksets...".
|
platforms. This is incomplete and requires additional changes to the 3p-havok-source repo and the llphysicsextensions-src repo.
|
Well, achieved that by doing work in bulk when needed, but it
turned into some additional things. Change the timebase from
milliseconds to microseconds as, well, things are headed that way.
Implement an HttpReplyQueue::fetchAll method (it was advertised
but hadn't been implemented).
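
As a sketch of the bulk-drain idea (the names ReplyQueue/Response and the mutex choice are assumptions, not the actual HttpReplyQueue code):

    #include <pthread.h>
    #include <vector>

    struct Response;                        // stand-in for a reply object

    class ReplyQueue
    {
    public:
        ReplyQueue()  { pthread_mutex_init(&mLock, NULL); }
        ~ReplyQueue() { pthread_mutex_destroy(&mLock); }

        // Move every queued reply to 'out' under one lock acquisition,
        // rather than paying a lock/unlock per item as a fetch() loop would.
        void fetchAll(std::vector<Response*>& out)
        {
            pthread_mutex_lock(&mLock);
            out.insert(out.end(), mQueue.begin(), mQueue.end());
            mQueue.clear();
            pthread_mutex_unlock(&mLock);
        }

    private:
        pthread_mutex_t mLock;
        std::vector<Response*> mQueue;
    };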
|
Maybe it's failing to correctly handle overloaded transform() methods?
|
A 416 will just mean there's no more data and whatever we have
is complete.
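
In other words, in a loop over ranged requests, 416 is a termination condition rather than an error. A small hedged sketch of that decision (the status codes are standard HTTP; the function itself is hypothetical):

    // Classify the status of a ranged GET.  Returns false only for genuine
    // errors; 'done' says whether further range requests are pointless.
    bool classifyRangedGetStatus(int http_status, bool& done)
    {
        switch (http_status)
        {
        case 206: done = false; return true;  // partial content; more may follow
        case 200: done = true;  return true;  // server ignored the range, sent it all
        case 416: done = true;  return true;  // range past EOF: data on hand is complete
        default:  done = true;  return false; // real failure
        }
    }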
|
It seems MSVC doesn't like boost::make_transform_iterator() in the context I
was using it. Try directly invoking the iterator's constructor.
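
A toy example of the workaround, constructing boost::transform_iterator directly rather than through boost::make_transform_iterator() (the functor and container here are made up for illustration):

    #include <boost/iterator/transform_iterator.hpp>
    #include <map>
    #include <string>

    struct GetKey
    {
        typedef const std::string& result_type;
        result_type operator()(const std::map<std::string, int>::value_type& pair) const
        {
            return pair.first;
        }
    };

    typedef boost::transform_iterator<GetKey,
                                      std::map<std::string, int>::const_iterator> KeyIter;

    KeyIter keys_begin(const std::map<std::string, int>& m)
    {
        // Direct constructor call: nothing for MSVC's overload resolution
        // to get confused about.
        return KeyIter(m.begin(), GetKey());
    }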
|
The original LLTypeInfoLookup implementation was based on two assumptions:
small overall container size, and infrequent normal-case lookup failures.
Those assumptions led to binary-searching a sorted vector, with linear search
as a fallback to cover the problem case of two different type_info* values for
the same type. As documented in the Jira, this turned out to be a problem. The
container size was larger than expected, and failed lookups turned out to be
far more common than expected.
The new implementation is based on a hash map of std::type_info::name()
strings, which should perform equally well in the success and failure cases:
no special-case fallback logic.
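
Something along these lines (illustrative only; the real LLTypeInfoLookup differs in detail), keyed on the name() string so that two distinct type_info objects for the same type still land on the same entry, and a miss costs one hash probe rather than a scan:

    #include <boost/unordered_map.hpp>
    #include <string>
    #include <typeinfo>

    template <typename VALUE>
    class TypeNameLookup
    {
    public:
        void set(const std::type_info& type, const VALUE& value)
        {
            mMap[type.name()] = value;
        }

        // Returns NULL on a miss; misses are as cheap as hits.
        const VALUE* get(const std::type_info& type) const
        {
            typename MapType::const_iterator found = mMap.find(type.name());
            return (found == mMap.end()) ? NULL : &found->second;
        }

    private:
        typedef boost::unordered_map<std::string, VALUE> MapType;
        MapType mMap;
    };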
|
Tweaked the boost source as per Boost issue #6185 using the 1.49.0 sources,
and this picks up the new build. Debug viewer builds and runs.
|
Doesn't use sets or maps and so there's no ordering assumption to
be violated when priorities are changed. Should also be faster.
Still want to get rid of the ancillary list, however...
|
committed in changeset cf029fb1d6ee.
|
callback handler unexpectedly changing the navmesh state.
|
30-second hang doesn't break subsequent tests. Did this by
introducing threads into the HTTP server as I can't find the magic
to detect that my client has gone away.
|
First, try to issue ranged GETs that are always at least partially
satisfiable. This will keep Varnish-type caches from simply sending
back 200/full asset responses to unsatisfiable requests. Implement
awareness of Content-Range headers as well. Currently they're not
coming back but they will be someday.
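
A hedged sketch of Content-Range parsing (not the viewer's actual parser; the header format is "Content-Range: bytes first-last/length" per RFC 2616):

    #include <cstdio>

    // Fills first/last/length from a Content-Range header value.  length is
    // set to -1 when the server reports an unknown total ("bytes 0-1023/*").
    bool parseContentRange(const char* value,
                           long long& first, long long& last, long long& length)
    {
        if (std::sscanf(value, "bytes %lld-%lld/%lld", &first, &last, &length) == 3)
        {
            return true;
        }
        if (std::sscanf(value, "bytes %lld-%lld/*", &first, &last) == 2)
        {
            length = -1;
            return true;
        }
        return false;
    }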
|
Also added some comments and changed the callback userdata argument
to be an HttpOpRequest rather than a libcurl handle. Less code,
less clutter.
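
The pattern, roughly (HttpOpRequest is named in the commit; its member and the helper below are assumed for this sketch):

    #include <curl/curl.h>
    #include <string>

    struct HttpOpRequest
    {
        std::string mReplyBody;             // assumed member for this sketch
    };

    // userdata is the HttpOpRequest registered below, not a CURL* handle,
    // so the callback needs no lookup from handle back to operation.
    static size_t writeCallback(void* data, size_t size, size_t nmemb, void* userdata)
    {
        HttpOpRequest* op = static_cast<HttpOpRequest*>(userdata);
        op->mReplyBody.append(static_cast<char*>(data), size * nmemb);
        return size * nmemb;
    }

    void setupHandle(CURL* handle, HttpOpRequest* op)
    {
        curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, writeCallback);
        curl_easy_setopt(handle, CURLOPT_WRITEDATA, op);
    }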
|
sort things out or use policy classes (eventually) to arrange low
and high priority traffic. Subjectively, I think this works better
in practice (as I haven't implemented a dynamic priority setter yet).
|
Think I have found the major factor that causes the Linksys WRT54G V5 to
fall over in testing scenarios: DNS. For some historical reason, we're
trying to use libcurl without any DNS caching. My implementation echoed
that and implemented it correctly, and I was seeing a DNS request per request
on the wire. The existing implementation tries to do the same but has bugs:
it ends up caching DNS data, querying only once every few seconds. Once I
started emulating the bug, comms through the WRT became
much, much more reliable.
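
The relevant libcurl knob is CURLOPT_DNS_CACHE_TIMEOUT; a sketch of both behaviours (the timeout value here is an assumption, not necessarily the one the viewer uses):

    #include <curl/curl.h>

    void configureDnsCaching(CURL* handle, bool emulate_old_behaviour)
    {
        // 0 disables libcurl's DNS cache entirely (one lookup per request,
        // which is what overwhelmed the WRT54G); a small positive timeout
        // reproduces the query-only-every-few-seconds behaviour.
        long timeout_seconds = emulate_old_behaviour ? 15L : 0L;
        curl_easy_setopt(handle, CURLOPT_DNS_CACHE_TIMEOUT, timeout_seconds);
    }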