Age | Commit message | Author |
|
First, try to issue ranged GETs that are always at least partially
satisfiable. This will keep Varnish-type caches from simply sending
back 200/full asset responses to unsatisfiable requests. Implement
awareness of Content-Range headers as well. Currently they're not
coming back but they will be someday.
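As a reference, here is a minimal, stand-alone sketch of a ranged GET of the sort described above, using raw libcurl rather than the viewer's wrappers; the URL and byte range are placeholders. The range starts at byte 0, so it is always at least partially satisfiable, and the header callback watches for a Content-Range reply:

    // Sketch only: plain libcurl, POSIX strncasecmp for header matching.
    #include <curl/curl.h>
    #include <cstdio>
    #include <strings.h>

    static size_t header_cb(char * buffer, size_t size, size_t nitems, void * /*userdata*/)
    {
        const size_t len = size * nitems;
        static const char prefix[] = "Content-Range:";
        if (len > sizeof(prefix) - 1 && 0 == strncasecmp(buffer, prefix, sizeof(prefix) - 1))
        {
            // Expected form:  Content-Range: bytes <first>-<last>/<full length>
            printf("Got %.*s", int(len), buffer);
        }
        return len;                    // tell libcurl the header line was consumed
    }

    int main()
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURL * handle = curl_easy_init();
        curl_easy_setopt(handle, CURLOPT_URL, "http://example.com/asset");   // placeholder URL
        curl_easy_setopt(handle, CURLOPT_RANGE, "0-599999");                 // starts at 0: always partially satisfiable
        curl_easy_setopt(handle, CURLOPT_HEADERFUNCTION, header_cb);
        CURLcode rc = curl_easy_perform(handle);

        long status = 0;
        curl_easy_getinfo(handle, CURLINFO_RESPONSE_CODE, &status);
        printf("curl rc %d, HTTP status %ld\n", int(rc), status);            // 206 if the range was honored, 200 if not

        curl_easy_cleanup(handle);
        curl_global_cleanup();
        return 0;
    }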
|
|
The fetch state machine gets a new timeout for the WAIT_HTTP_REQ
state. For the integration, rather than jumping the state straight to done
on timeout, we issue a request cancel and let the notification plumbing do
the rest without any race conditions or special-case logic.
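A toy, self-contained sketch of that pattern; the type and member names are hypothetical, not the viewer's actual classes:

    // Timeout path only requests cancellation; the single completion path owns
    // the state transition, so there is nothing for a racing completion to clobber.
    #include <cstdio>
    #include <functional>

    enum State { SEND_HTTP_REQ, WAIT_HTTP_REQ, DONE };

    struct Worker
    {
        State state = WAIT_HTTP_REQ;
        double issued = 0.0;
        std::function<void()> cancel;            // stands in for a request-cancel call

        // Completion notification: the only place that leaves WAIT_HTTP_REQ.
        void onCompleted(bool cancelled)
        {
            if (state == WAIT_HTTP_REQ)
            {
                state = DONE;
                printf("completed (%s)\n", cancelled ? "cancelled/timeout" : "success");
            }
        }

        // Timeout check never touches 'state' directly.
        void checkTimeout(double now, double limit)
        {
            if (state == WAIT_HTTP_REQ && now - issued > limit)
            {
                cancel();                        // cancellation flows back through onCompleted()
            }
        }
    };

    int main()
    {
        Worker w;
        w.cancel = [&w]() { w.onCompleted(true); };   // plumbing stub: cancel -> completion notice
        w.checkTimeout(/*now=*/31.0, /*limit=*/30.0);
        return 0;
    }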
|
|
Big delta was converting the new texture debugger support code
to the new library. Viewer manifest should probably get an eyeball
before release.
|
|
Seems to be working correctly. Not certain this is the fastest possible way
to provide a std::streambuf interface, but by eyeball it's acceptable.
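For reference, a minimal sketch of one common way to put a std::streambuf face on an existing in-memory buffer (not the viewer's actual class): point the get area at the data with setg() and let std::istream do the rest.

    #include <streambuf>
    #include <istream>
    #include <iostream>
    #include <string>
    #include <cstddef>

    // Minimal read-only streambuf over a caller-owned byte range.  setg() sets
    // the get-area pointers; std::istream then reads through them with no copy.
    class BufferReadStreamBuf : public std::streambuf
    {
    public:
        BufferReadStreamBuf(const char * data, size_t len)
        {
            char * p = const_cast<char *>(data);    // streambuf API wants non-const
            setg(p, p, p + len);                    // eback, gptr, egptr
        }
    };

    int main()
    {
        static const char body[] = "HTTP body bytes\nsecond line\n";
        BufferReadStreamBuf sb(body, sizeof(body) - 1);
        std::istream in(&sb);                        // std::istream over the buffer

        std::string line;
        while (std::getline(in, line))
            std::cout << "read: " << line << '\n';
        return 0;
    }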
|
|
Only thing interesting in this changeset is the discovery that a sleep
in the fake HTTP server ties up tests. Need to thread that or fail on
client disconnect or something to speed that up and make it usable for
bigger test scenarios. But good enough for now...
|
|
<offset, length, fulllength>.
|
|
log on exit.
With much trial-and-error, cleaned up the banner on the texture console and made everything
mostly fit. Added global cache read, cache write and resource wait count events to the
console display to show whether the cache is working. On clean exit, emit a log line
reporting stats to the log file (intended for automated tests, maybe):
LLTextureFetch::endThread: CacheReads: 2618, CacheWrites: 117, ResWaits: 0, TotalHTTPReq: 117
|
|
LLProxy support, HttpOptions starting to work, HTTP resource waiting fixed.
Non-LLThread-based threads need to do some registration, or LLMutex locks taken out in
those threads will not work as expected (SH-3154). We'll get a better solution later;
this fixes some things for now. Tracing of operations is now supported, with global and
per-request (via HttpOptions) tracing levels of [0..3]. Levels 2 and 3 use libcurl's
VERBOSE mode combined with CURLOPT_DEBUGFUNCTION to stream high levels of detail into
the log. *Very* laggy but useful. Simple GET requests are supported (no Range: header);
really just a degenerate case of a ranged GET, but supplied an API anyway. Added a
global option to use the LLProxy interface to set up CURL handles for either socks5 or
http proxy usage. This isn't really the most encapsulated way to do it, but a better
solution will have to come later. The wantHeaders and tracing options are now supported
in HttpOptions, giving per-request controls. Big refactoring of the HTTP resource waiter
in lltexturefetch: what I was doing before wasn't correct. Instead, the resource wait is
now implemented after the semaphore model (though not using system semaphores). So
instead of a sequence like SEND_HTTP_REQ -> WAIT_HTTP_RESOURCE -> SEND_HTTP_REQ, we now
do WAIT_HTTP_RESOURCE -> WAIT_HTTP_RESOURCE2 (the actual wait) -> SEND_HTTP_REQ. Works
well, but the prioritized filling of the corehttp library needs some performance
work later.
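The level-2/3 tracing mentioned above is the standard libcurl pairing of CURLOPT_VERBOSE with CURLOPT_DEBUGFUNCTION; a stand-alone sketch, not the viewer's wrapper code, with a placeholder URL:

    #include <curl/curl.h>
    #include <cstdio>

    // Debug callback: libcurl calls this with header/data/info traffic when
    // CURLOPT_VERBOSE is enabled and a CURLOPT_DEBUGFUNCTION is installed.
    static int debug_cb(CURL * /*handle*/, curl_infotype type,
                        char * data, size_t size, void * /*userptr*/)
    {
        const char * tag = "data";
        switch (type)
        {
        case CURLINFO_TEXT:        tag = "info"; break;
        case CURLINFO_HEADER_IN:   tag = "hdr<"; break;
        case CURLINFO_HEADER_OUT:  tag = "hdr>"; break;
        default:                   break;         // body/SSL data left as "data"
        }
        fprintf(stderr, "TRACE %s: %.*s", tag, int(size), data);   // data is not NUL-terminated
        return 0;                                  // must return 0
    }

    int main()
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURL * handle = curl_easy_init();
        curl_easy_setopt(handle, CURLOPT_URL, "http://example.com/");   // placeholder
        curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);
        curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION, debug_cb);
        curl_easy_perform(handle);
        curl_easy_cleanup(handle);
        curl_global_cleanup();
        return 0;
    }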
|
|
The NORMAL range doesn't do any sleeping at all and so we'll
spin the core harder than we already are. Bring all idlers
into the same range.
|
|
206/content-range hack in xport.
Retry/response handling is decided in policy, so moved that there. Removed the special-case
206-without-content-range response handling in transport. Made this situation recognizable
in the API and let callers deal with it as needed.
|
|
surprised me. Added a retry queue, similar to the ready queue, to the
policy object; it is sorted by retry time. Currently we do five
retries (after the initial try) delayed by 0.25, 0.5, 1, 2 and 5
seconds. Removed the retry logic from the lltexturefetch module.
Upped the waiting time in the unit test to cover the retries. People
won't like this but tough, need tests.
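A self-contained sketch of the retry-queue idea under the numbers given above (five retries at 0.25/0.5/1/2/5 seconds, queue ordered by retry time); the types and names are illustrative, not the library's actual ones:

    #include <queue>
    #include <vector>
    #include <cstdio>

    // Retry schedule after the initial attempt, in seconds.
    static const double RETRY_DELAYS[] = { 0.25, 0.5, 1.0, 2.0, 5.0 };
    static const int    MAX_RETRIES    = 5;

    struct PendingRetry
    {
        int    op_id;        // whatever identifies the failed operation
        int    attempt;      // how many retries have already been issued
        double retry_time;   // absolute time at which it becomes eligible again
    };

    // Order by retry time so the earliest-eligible retry is always on top.
    struct EarlierRetryFirst
    {
        bool operator()(const PendingRetry & a, const PendingRetry & b) const
        { return a.retry_time > b.retry_time; }   // std::priority_queue is a max-heap
    };

    typedef std::priority_queue<PendingRetry, std::vector<PendingRetry>, EarlierRetryFirst> RetryQueue;

    // Called when an operation fails; returns false once retries are exhausted.
    bool scheduleRetry(RetryQueue & q, int op_id, int attempt, double now)
    {
        if (attempt >= MAX_RETRIES)
            return false;
        q.push(PendingRetry{ op_id, attempt + 1, now + RETRY_DELAYS[attempt] });
        return true;
    }

    // Called from the policy loop: pop everything whose retry time has arrived.
    void releaseDueRetries(RetryQueue & q, double now)
    {
        while (!q.empty() && q.top().retry_time <= now)
        {
            PendingRetry r = q.top();
            q.pop();
            printf("re-issuing op %d (retry %d)\n", r.op_id, r.attempt);
        }
    }

    int main()
    {
        RetryQueue q;
        scheduleRetry(q, 42, 0, 0.0);   // first failure at t=0 -> eligible at t=0.25
        releaseDueRetries(q, 1.0);      // t=1.0: the retry is due, gets re-issued
        return 0;
    }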
|
|
now avoiding doing HTTP fetches for read data. Not certain it's
completely correct but the difference is already significant.
|
|
Went through all the code and tried to document lock and thread usage
in the module. There's a huge comment block introducing all of this
at the beginning and I believe it's correct (though not quite complete).
Keep it updated, people. Added a new state, WAIT_HTTP_RESOURCE, that's
sort of a side-state of SEND_HTTP_REQ. If we hit a high-water mark
for HTTP requests, the extras are shunted to the new state. Once
levels fall to a low-water mark, we run through a wait list of UUIDs,
sort the valid ones by priority and release them for service. This
keeps the HTTP layer busy while leaving the active queue shallow enough
that requests can still be re-prioritized cheaply. The priority model
changed: the new state uses the PRIORITY_LOW mask, the old users
of _LOW are now at PRIORITY_NORMAL, and sleepers woken up after an
external event are kicked off at PRIORITY_HIGH. This combination,
along with the new state, should avoid priority inversion and keep
things running without resorting to an infinite pipeline. The new
state displays as "HTW" with green text in the texture console.
Request cancellation and worker run-down should now be more
correct, but this edge case may need more attention.
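A toy sketch of the high-water/low-water release described above; the names and limits are illustrative, and the real code works with texture UUID wait lists and viewer priorities:

    #include <algorithm>
    #include <vector>
    #include <cstdio>

    // Tunables for the sketch; the real limits live in the fetch policy.
    static const size_t HIGH_WATER = 40;   // stop issuing new HTTP requests above this
    static const size_t LOW_WATER  = 20;   // start releasing waiters again below this

    struct Waiter
    {
        int      id;         // stands in for the texture UUID
        unsigned priority;   // higher value == more urgent
        bool     valid;      // request may have been cancelled while waiting
    };

    // Called when the active HTTP count drops: move the best waiters toward SEND_HTTP_REQ.
    void releaseWaiters(std::vector<Waiter> & wait_list, size_t & active_count)
    {
        if (active_count >= LOW_WATER)
            return;                                      // not low enough yet

        // Drop cancelled entries, then sort so the highest priority goes first.
        wait_list.erase(std::remove_if(wait_list.begin(), wait_list.end(),
                                       [](const Waiter & w) { return !w.valid; }),
                        wait_list.end());
        std::sort(wait_list.begin(), wait_list.end(),
                  [](const Waiter & a, const Waiter & b) { return a.priority > b.priority; });

        // Refill up to the high-water mark so the HTTP layer stays busy while the
        // active queue stays shallow enough to re-prioritize cheaply.
        while (!wait_list.empty() && active_count < HIGH_WATER)
        {
            printf("releasing waiter %d (prio %u)\n", wait_list.front().id, wait_list.front().priority);
            wait_list.erase(wait_list.begin());
            ++active_count;
        }
    }

    int main()
    {
        std::vector<Waiter> waiting = { {1, 10u, true}, {2, 80u, true}, {3, 50u, false} };
        size_t active = 5;                               // below LOW_WATER, so we release
        releaseWaiters(waiting, active);
        return 0;
    }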
|
|
Implemented the first global policy definitions, supporting SSL CA certificate configuration
for https: operations. Fixed HTTP 206 status handling to match what is currently being done
by grid services and to lay a foundation for fixes that will come in response to ER-1824.
Set more libcurl CURLOPT options on easy handles to do peer verification in the traditional
way. HTTP POST is working and now reports asset metrics back to the grid for the viewer's
asset system. This uses LLSD, so that is also shown to be compatible with the new library.
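The "traditional" peer verification mentioned here comes down to a handful of libcurl easy-handle options; a minimal sketch, with the URL and CA bundle path as placeholders:

    #include <curl/curl.h>
    #include <cstdio>

    int main()
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURL * handle = curl_easy_init();

        curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");        // placeholder
        curl_easy_setopt(handle, CURLOPT_SSL_VERIFYPEER, 1L);                 // check cert against CA bundle
        curl_easy_setopt(handle, CURLOPT_SSL_VERIFYHOST, 2L);                 // check cert name matches host
        curl_easy_setopt(handle, CURLOPT_CAINFO, "/path/to/ca-bundle.crt");   // placeholder CA bundle path

        CURLcode rc = curl_easy_perform(handle);
        if (rc != CURLE_OK)
            fprintf(stderr, "verify/transfer failed: %s\n", curl_easy_strerror(rc));

        curl_easy_cleanup(handle);
        curl_global_cleanup();
        return 0;
    }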
|
|
fixing priorities.
|
|
chunking data. Remove the stateful use of a seek pointer so
that shared read is possible (though maybe not interesting).
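A tiny sketch of the difference: a read that takes an explicit offset carries no per-object seek pointer, so multiple readers can share the same object safely (names are illustrative):

    #include <algorithm>
    #include <cstring>
    #include <cstdio>
    #include <vector>

    // Stateless read: the caller supplies the offset, so there is no seek pointer
    // to fight over when several readers share the same buffer.
    struct SharedData
    {
        std::vector<char> bytes;

        size_t read(size_t offset, void * dst, size_t len) const
        {
            if (offset >= bytes.size())
                return 0;
            size_t n = std::min(len, bytes.size() - offset);
            memcpy(dst, &bytes[offset], n);
            return n;
        }
    };

    int main()
    {
        SharedData d;
        const char src[] = "chunk-one chunk-two";
        d.bytes.assign(src, src + sizeof(src) - 1);

        char buf[10] = {0};
        size_t got = d.read(10, buf, 9);          // second reader's offset, no seek needed
        printf("read %zu bytes: %s\n", got, buf);
        return 0;
    }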
|
|
what normal requests do...
|
|
Identified and reacted to the priority inversion problem we
have in texturefetch. This includes the introduction of a priority_queue
for the requests that are ready. Started some parameterization in
anticipation of having policy_class everywhere. Removed _assert.h,
which isn't really needed in the indra codebase. Implemented an async
setPriority request (which I hope I can get rid of eventually, along
with all priorities in this library). Converted to using unsigned
int for priority rather than float. Implemented POST and did
groundwork for PUT.
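A stand-alone sketch of the ready-queue shape implied above: a std::priority_queue keyed on an unsigned int priority, with one queue per (hypothetical) policy class; the names are illustrative:

    #include <queue>
    #include <vector>
    #include <cstdio>

    typedef int policy_t;                 // stands in for an eventual policy_class value

    struct ReadyOp
    {
        unsigned priority;                // unsigned int priority, not float
        int      op_id;
    };

    struct HigherPriorityFirst
    {
        bool operator()(const ReadyOp & a, const ReadyOp & b) const
        { return a.priority < b.priority; }     // max-heap on priority
    };

    typedef std::priority_queue<ReadyOp, std::vector<ReadyOp>, HigherPriorityFirst> ReadyQueue;

    int main()
    {
        // One ready queue per policy class, so classes can be serviced independently.
        ReadyQueue ready[2];
        ready[0].push(ReadyOp{ 100u, 1 });
        ready[0].push(ReadyOp{ 4000u, 2 });

        policy_t pc = 0;
        while (!ready[pc].empty())
        {
            printf("servicing op %d at priority %u\n", ready[pc].top().op_id, ready[pc].top().priority);
            ready[pc].pop();
        }
        return 0;
    }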
|
|
Presumed to be a complete failure of the texture pipeline to decode anything.
Got a fix from bao; a flag was not initialized properly in the texture pipeline.
|
|
connection failure happens
|
|
This is the first functional viewer pass with the HTTP work of the texture fetch
code performed by the llcorehttp library. It's not exactly a 'drop-in' replacement
but a work-alike with some changes (e.g. handler notification in the consumer
thread versus responder notification in the worker thread).
This also includes some temporary changes in the priority scheme to prevent
the kind of priority inversion found in VWR-28996. The scheme used here does
provide liveness, if not optimal responsiveness or order-of-operation.
The llcorehttp library at this point is far from performing optimally.
Its worker thread makes relatively poor use of the cycles it gets and
it doesn't idle or sleep intelligently yet. This early integration step
helps shake out the interfaces; implementation niceties will be covered
soon.
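A bare-bones sketch of the "handler notification in consumer thread" shape: the worker thread only queues completed responses, and the consumer drains them and runs handlers on its own thread. Everything here is illustrative, not llcorehttp's actual API.

    #include <mutex>
    #include <queue>
    #include <thread>
    #include <utility>
    #include <cstdio>

    struct Response { int request_id; int status; };

    // Completion queue shared between the worker and the consumer thread.
    static std::mutex            reply_mutex;
    static std::queue<Response>  reply_queue;

    // Worker thread: performs the "HTTP" work, then only queues the result.
    void workerThread()
    {
        Response r = { 7, 200 };                 // pretend a transfer just finished
        std::lock_guard<std::mutex> lock(reply_mutex);
        reply_queue.push(r);
    }

    // Consumer-side update(): drain completions and invoke handlers on *this* thread.
    void update()
    {
        std::queue<Response> drained;
        {
            std::lock_guard<std::mutex> lock(reply_mutex);
            std::swap(drained, reply_queue);     // take everything under one short lock
        }
        while (!drained.empty())
        {
            const Response & r = drained.front();
            printf("handler for request %d ran on consumer thread, status %d\n",
                   r.request_id, r.status);      // handler runs here, not in the worker
            drained.pop();
        }
    }

    int main()
    {
        std::thread worker(workerThread);
        worker.join();                           // for the sketch, wait for the completion
        update();                                // consumer thread notifies its handlers
        return 0;
    }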
|
|
will look at the local cache first
|
|
as in release
|
|