of used handles and a fast handle factory that's thread-correct.
|
|
some problems disabling pipelining on a multi handle with outstanding
requests, so built a more conservative system that allows requests
to drain before setting curl multi options. Would rather not have
this but it is significantly safer. "HttpPipelining" debug setting
is now fully dynamic. Connection limits can also be made dynamic
in the near future. Upped the default connection count back to 8 for
now but will revisit this in the tuning phase. It might be time to
combine mesh and textures into a single asset class. For normal
server operations that would be a clear path, but for a server under
load, the current scheme may be better. Minor cleanup in logging
to eliminate some redundant strings. Might add some more tracing to the
stall logic 'just in case'.
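Roughly, the drain-before-set idea looks like the following sketch (MultiOptionGate and its members are hypothetical names, not the library's actual API):

    // Hypothetical sketch: defer curl multi option changes until
    // outstanding requests have drained from the handle.
    #include <curl/curl.h>

    class MultiOptionGate
    {
    public:
        // A dynamic setting arrives; remember it, apply later.
        void requestPipelining(bool enabled)
        {
            mWantPipelining = enabled;
            mDirty = true;
        }

        // Called once per service-loop pass.
        void apply(CURLM * multi, int outstanding)
        {
            if (! mDirty || outstanding > 0)    // wait for drain
                return;
            curl_multi_setopt(multi, CURLMOPT_PIPELINING,
                              mWantPipelining ? 1L : 0L);
            mDirty = false;
        }

    private:
        bool mWantPipelining = false;
        bool mDirty = false;
    };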
|
|
Do some runtime code avoidance and skip unnecessary libcurl and
syscall invocations.
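For illustration, a minimal sketch of the skip-when-unchanged idea (EasyOptCache and its field are hypothetical, not llcorehttp's actual structures):

    // Hypothetical sketch: remember the last value handed to libcurl
    // and skip the setopt (and any underlying syscalls) when unchanged.
    #include <curl/curl.h>

    struct EasyOptCache
    {
        long mTimeout = -1;     // last CURLOPT_TIMEOUT pushed down

        void setTimeout(CURL * handle, long seconds)
        {
            if (seconds == mTimeout)    // unchanged, avoid the call
                return;
            curl_easy_setopt(handle, CURLOPT_TIMEOUT, seconds);
            mTimeout = seconds;
        }
    };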
|
|
Much improved. Unified the global and class options into a single
option list. Implemented static and dynamic setting paths as much
as possible. The dynamic path does require a packet/RPC but otherwise
there's near unification. Dynamic modes can't get values back yet
due to the response/notifier scheme but this doesn't bother me.
Flattened global and class options into simpler struct-like entities.
Setter/getter available on these when needed (external APIs) but code
can otherwise fiddle directly when it knows what to do. Much duplicated
options/state removed from HttpPolicy. Comments cleaned up. Threads
better described and consistently mentioned in API docs. Integration
test extended for 503 responses with Retry-After headers.
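The struct-like option entities might look roughly like this sketch (GlobalOptions/ClassOptions names and fields are hypothetical, chosen only to illustrate the two access paths):

    // Hypothetical sketch: flattened, struct-like option entities.
    // External (dynamic) callers go through setters that can ride a
    // packet/RPC; internal code writes fields directly.
    #include <string>

    struct GlobalOptions
    {
        long        mConnectionLimit = 8;   // total connections
        std::string mCAFile;                // CA bundle path
    };

    struct ClassOptions
    {
        long mClassConnectionLimit = 2;     // per-class connections
        bool mPipelining = false;
    };

    inline void setConnectionLimit(GlobalOptions & opts, long limit)
    {
        opts.mConnectionLimit = limit;      // external/dynamic path
    }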
|
|
|
|
Add to-do list to _httpinternal.h to guide anyone who
wants to pitch in and help.
|
|
The fetch state machine gained a new timeout for the WAIT_HTTP_REQ
state. For the integration, rather than jumping the state to done, we issue
a request cancel and let the notification plumbing do the rest without
any race conditions or special-case logic.
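A sketch of the cancel-on-timeout approach, with hypothetical state and method names standing in for the lltexturefetch internals:

    // Hypothetical sketch: on timeout, cancel the outstanding request
    // and let the completion notification advance the state machine.
    enum EFetchState { WAIT_HTTP_REQ, DONE };

    struct FetchRequest
    {
        EFetchState mState = WAIT_HTTP_REQ;
        double      mDeadline = 0.0;        // absolute timeout

        void checkTimeout(double now)
        {
            if (mState == WAIT_HTTP_REQ && now > mDeadline)
            {
                // Don't jump mState to DONE here; the cancel's
                // completion notification will do it race-free.
                cancelHttpRequest();
            }
        }

        void cancelHttpRequest() { /* issue cancel on the HttpHandle */ }
    };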
|
|
in library.
With this commit, the cleanup paths should be production quality. Unit tests have been
expanded to include cases requiring thread termination and cleanup by the worker thread.
Special operation/request added to support the unit tests. Thread interface expanded
to include a very aggressive cancel() method that does not do cleanup but prevents the
thread from accessing objects that will be destroyed.
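A rough sketch of such a two-tier shutdown interface, assuming modern C++ and hypothetical names; the actual llcorehttp thread class differs:

    // Hypothetical sketch: stop() requests orderly shutdown with
    // cleanup; cancel() skips cleanup and only detaches, so the
    // thread never touches objects about to be destroyed.
    #include <atomic>
    #include <memory>
    #include <thread>

    class WorkerThread
    {
    public:
        WorkerThread() : mFlags(std::make_shared<Flags>()) {}

        void start()
        {
            std::shared_ptr<Flags> flags(mFlags);  // thread keeps flags alive
            mThread = std::thread([flags] { loop(*flags); });
        }

        void stop()                 // orderly: join, thread runs cleanup
        {
            mFlags->mStop = true;
            if (mThread.joinable()) mThread.join();
        }

        void cancel()               // aggressive: no cleanup, just detach
        {
            mFlags->mStop = true;
            mFlags->mCancelled = true;
            if (mThread.joinable()) mThread.detach();
        }

    private:
        struct Flags
        {
            std::atomic<bool> mStop{false};
            std::atomic<bool> mCancelled{false};
        };

        static void loop(Flags & flags)
        {
            while (! flags.mStop)
            { /* service requests */ }
            if (! flags.mCancelled)
            { /* orderly cleanup: release handles, drain queues */ }
        }

        std::shared_ptr<Flags> mFlags;
        std::thread            mThread;
    };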
|
|
Groundwork is used for the default class, which currently represents
texture fetching. Class options implemented from the API user down into
HttpLibcurl. Policy layer is going to start doing some traffic-shaping-like
work to solve problems with consumer-grade gear.
Need to have dynamic aspects to policies and that starts now...
|
|
|
|
206/content-range hack in xport.
Retry/response handling is decided in policy so moved that there. Removed the special-case
206-without-content-range response handling in transport. Made this situation recognizable in the
API and let callers deal with it as needed.
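Recognizing the case in the API might look like this sketch (the HttpResponse fields here are hypothetical, not the library's actual response object):

    // Hypothetical sketch: surface the odd 206-without-Content-Range
    // case to the caller instead of patching it in the transport layer.
    struct HttpResponse
    {
        int  mStatus = 0;               // HTTP status code
        bool mHaveContentRange = false; // Content-Range header seen?
    };

    inline bool isPartialWithoutRange(const HttpResponse & r)
    {
        return r.mStatus == 206 && ! r.mHaveContentRange;
    }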
|
|
surprised me. Added a retry queue, similar to the ready queue, to the
policy object, sorted by retry time. Currently do five
retries (after the initial try) delayed by 0.25, 0.5, 1, 2 and 5
seconds. Removed the retry logic from the lltexturefetch module.
Upped the waiting time in the unit test for the retries. People
won't like this but tough, need tests.
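A sketch of the retry queue using the stated 0.25/0.5/1/2/5 second schedule; RetryEntry and scheduleRetry are illustrative names, not the library's API:

    // Hypothetical sketch: retry queue ordered by earliest retry time,
    // using the five-step backoff schedule described above.
    #include <functional>
    #include <queue>
    #include <vector>

    static const double RETRY_DELAYS[] = { 0.25, 0.5, 1.0, 2.0, 5.0 };

    struct RetryEntry
    {
        double   mRetryAt;      // absolute time retry becomes eligible
        unsigned mAttempt;      // retries already performed

        bool operator>(const RetryEntry & rhs) const
        { return mRetryAt > rhs.mRetryAt; }
    };

    // Min-heap: earliest retry time at the top.
    using RetryQueue = std::priority_queue<RetryEntry,
                                           std::vector<RetryEntry>,
                                           std::greater<RetryEntry> >;

    // Requeue a failed request if any of the five retries remain.
    inline void scheduleRetry(RetryQueue & q, double now, unsigned attempt)
    {
        if (attempt < 5)
            q.push(RetryEntry{ now + RETRY_DELAYS[attempt], attempt + 1 });
    }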
|
|
Identified and reacted to the priority inversion problem we
have in texturefetch. Includes the introduction of a priority_queue
for the requests that are ready. Started some parameterization in
anticipation of having policy_class everywhere. Removed _assert.h,
which isn't really needed in the indra codebase. Implemented async
setPriority request (which I hope I can get rid of eventually, along
with all priorities in this library). Converted to using unsigned
int for priority rather than float. Implemented POST and did
groundwork for PUT.
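The ready queue keyed on the new unsigned priority could be as simple as this sketch (ReadyEntry is a hypothetical name):

    // Hypothetical sketch: ready queue keyed on the unsigned priority;
    // the default max-heap pops the highest-priority request first.
    #include <queue>
    #include <vector>

    struct ReadyEntry
    {
        unsigned mPriority = 0;     // larger value serviced first

        bool operator<(const ReadyEntry & rhs) const
        { return mPriority < rhs.mPriority; }
    };

    using ReadyQueue = std::priority_queue<ReadyEntry>;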
|
|
This is the first functional viewer pass with the HTTP work of the texture fetch
code performed by the llcorehttp library. Not exactly a 'drop-in' replacement
but a work-alike with some changes (e.g. handler notification in consumer
thread versus responder notification in worker thread).
This also includes some temporary changes in the priority scheme to prevent
the kind of priority inversion found in VWR-28996. Scheme used here does
provide liveness if not optimal responsiveness or order-of-operation.
The llcorehttp library at this point is far from optimally performing.
Its worker thread is making relatively poor use of cycles it gets and
it doesn't idle or sleep intelligently yet. This early integration step
helps shake out the interfaces; implementation niceties will be covered
soon.
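The handler-notification-in-consumer-thread arrangement amounts to a cross-thread reply pump, sketched here with hypothetical names (the library's actual mechanism differs in detail):

    // Hypothetical sketch: worker thread queues completed replies;
    // the consumer drains them in its own update() call so handlers
    // never run on the worker thread.
    #include <functional>
    #include <mutex>
    #include <queue>

    class ReplyQueue
    {
    public:
        void post(std::function<void()> reply)      // worker thread
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mReplies.push(std::move(reply));
        }

        void update()                               // consumer thread
        {
            std::queue<std::function<void()>> work;
            {
                std::lock_guard<std::mutex> lock(mMutex);
                work.swap(mReplies);
            }
            while (! work.empty())
            {
                work.front()();                     // invoke handler
                work.pop();
            }
        }

    private:
        std::mutex                        mMutex;
        std::queue<std::function<void()>> mReplies;
    };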
|
|
hard stall
while allocating the first easy handle in a descent of the global initialization
code, but that doesn't seem to be a problem on TC machines. Perhaps the static
linking is creating multiple data copies. More work needed.
|
|
The unit/integration tests don't work yet as I'm still battling cmake/autobuild
as usual, but the first milestone is passed.
|