Commit messages:
- ... llmessage or CoreHttp libraries. Updated tests.
- ... compatible with old and new http libraries
- ... retries to 5xx errors
- ... for testing
- ... host bake, map tile, etc. - down the chain so LLTextureFetchWorker can
  adjust behavior as needed
- ... LLTrace clearer: Count becomes CountStatHandle, Count.sum becomes
  sum(Count, value), etc.
- ... http phase 1. Some missing counter initialization kept the debugger
  from entering the startup state, giving the appearance of a do-nothing
  floater. Also found some unbounded recursion that might need looking at in
  the future. (There's a comment.)
- fixes to merge
- final removal of remaining LLStat code
- This was yet another refresh from v-d because of significant changes to
  lltexturefetch that would not have been resolvable by casual application of
  any merge tool. There are still a few questions outstanding, but this is
  the initial, optimistic merge.
- This doesn't really address 3325 directly, but it is the result of research
  done while hunting it down. First, this is a thread-safety improvement for
  canceled requests that have gone into the resource wait state; I don't
  think we've seen a failure there, but there was a window. It also cleans
  the resource wait queue earlier, which lets us do less work and get
  requests into llcorehttp more quickly by bypassing the resource wait state.
  With this, I finally feel comfortable about rundown of requests.
- An A/B comparison with the original code showed the newer code issuing
  lower-priority requests to the cache reader, plus some other minor
  differences. Brought them into agreement (this is cargo-cult programming).
  Made the HTTP resource semaphore an atomic int for rigorous correctness
  across threads. I swear I'm going to tear down this code someday.
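The atomic-int resource counter mentioned above boils down to a small counting gate. A minimal sketch of the idea in portable C++ (illustrative only, not the viewer's code; the class and method names are invented):

```cpp
#include <atomic>

// Illustrative counting "semaphore" built on an atomic int, in the spirit of
// the HTTP resource counter described above.  Names are invented.
class HttpSemaphore
{
public:
    explicit HttpSemaphore(int limit) : mAvailable(limit) {}

    // Try to claim one HTTP resource slot; returns false if none are free.
    bool tryAcquire()
    {
        int current = mAvailable.load(std::memory_order_acquire);
        while (current > 0)
        {
            if (mAvailable.compare_exchange_weak(current, current - 1,
                                                 std::memory_order_acq_rel))
            {
                return true;
            }
            // compare_exchange_weak reloads 'current' on failure; loop retries.
        }
        return false;
    }

    // Return the slot when the request completes or is canceled.
    void release()
    {
        mAvailable.fetch_add(1, std::memory_order_acq_rel);
    }

private:
    std::atomic<int> mAvailable;
};
```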
- Dropped an argument during integration, which made the total byte count
  read lower than expected. Everything else is fine, however.
- cleaned up LLStat and removed unnecessary includes
- Isolate llcorehttp initialization into a utility class (LLAppCoreHttp)
  that provides glue between app and library (sets up policies, handles
  notifications). Introduce a 'TextureFetchConcurrency' debug setting to
  provide some field control when absolutely necessary.
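As a rough illustration of the glue-and-clamp idea (not the actual LLAppCoreHttp code; the class and field names below are placeholders), initialization might read the concurrency setting, clamp it to a sane range, and feed it into the library's policy configuration:

```cpp
#include <algorithm>

// Hypothetical sketch of app/library glue: take a user-tunable concurrency
// value (in the spirit of the 'TextureFetchConcurrency' debug setting),
// clamp it, and apply it to the HTTP policy configuration.
struct HttpPolicyConfig            // stand-in for the library's policy options
{
    int mConnectionLimit = 8;
};

class AppCoreHttpGlue              // stand-in for an LLAppCoreHttp-style helper
{
public:
    void init(int requestedConcurrency)
    {
        // Field control should never be able to starve or flood the service.
        const int kMinConcurrency = 1;
        const int kMaxConcurrency = 12;
        mTexturePolicy.mConnectionLimit =
            std::max(kMinConcurrency,
                     std::min(kMaxConcurrency, requestedConcurrency));
        // Real glue would also register policy classes and completion
        // notification handlers with the HTTP library here.
    }

private:
    HttpPolicyConfig mTexturePolicy;
};
```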
- The big delta was converting the new texture debugger support code to the
  new library. The viewer manifest should probably get an eyeball before
  release.
- Seems to be working correctly. Not certain this is the fastest possible
  way to provide a std::streambuf interface, but it's visually acceptable.
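For context, a read-only std::streambuf over an existing in-memory buffer can be as small as pointing the get area at that buffer. The sketch below is illustrative standard C++, not the class used in the library:

```cpp
#include <cstddef>
#include <iostream>
#include <istream>
#include <streambuf>
#include <string>

// Minimal read-only streambuf over an existing buffer (no copying).
// Illustrative sketch only; the names are not from llcorehttp.
class BufferReadStreamBuf : public std::streambuf
{
public:
    BufferReadStreamBuf(const char* data, std::size_t size)
    {
        char* p = const_cast<char*>(data);   // get area is never written to
        setg(p, p, p + size);                // begin, current, end
    }
};

int main()
{
    std::string body = "alpha beta gamma";
    BufferReadStreamBuf buf(body.data(), body.size());
    std::istream in(&buf);                   // std::istream view of the buffer

    std::string word;
    while (in >> word)
        std::cout << word << '\n';
    return 0;
}
```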
- ... log on exit. With much trial and error, cleaned up the banner on the
  texture console and made everything mostly fit. Added global cache read,
  cache write, and resource wait count events to the console display to show
  whether the cache is working. On clean exit, emit a log line to report
  stats to the log file (intended for automated tests, maybe):
  LLTextureFetch::endThread: CacheReads: 2618, CacheWrites: 117, ResWaits: 0, TotalHTTPReq: 117
- LLProxy support, HttpOptions starting to work, HTTP resource waiting fixed.
  Non-LLThread-based threads need to do some registration, or LLMutex locks
  taken out in those threads will not work as expected (SH-3154). We'll get a
  better solution later; this fixes some things for now. Tracing of
  operations is now supported, with global and per-request (via HttpOptions)
  tracing levels of [0..3]. Levels 2 and 3 use libcurl's VERBOSE mode
  combined with CURLOPT_DEBUGFUNCTION to stream a high level of detail into
  the log. *Very* laggy but useful. Simple GET requests are supported (no
  Range: header); really just a degenerate case of a ranged GET, but an API
  is supplied anyway. A global option uses the LLProxy interface to set up
  CURL handles for either SOCKS5 or HTTP proxy usage. This isn't really the
  most encapsulated way to do this, but a better solution will have to come
  later. The wantHeaders and tracing options are now supported in
  HttpOptions, giving per-request controls. Big refactoring of the HTTP
  resource waiter in lltexturefetch: what I was doing before wasn't correct.
  Instead, I'm implementing the resource wait on the semaphore model (though
  not using system semaphores). So instead of a sequence like SEND_HTTP_REQ
  -> WAIT_HTTP_RESOURCE -> SEND_HTTP_REQ, we now do WAIT_HTTP_RESOURCE ->
  WAIT_HTTP_RESOURCE2 (the actual wait) -> SEND_HTTP_REQ. Works well, but the
  prioritized filling of the corehttp library needs some performance work
  later.
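The libcurl mechanism behind trace levels 2 and 3 (VERBOSE plus CURLOPT_DEBUGFUNCTION) looks roughly like the standalone sketch below; it uses only public libcurl calls and is not the llcorehttp wrapper itself:

```cpp
#include <curl/curl.h>
#include <cstdio>

// Debug callback invoked by libcurl when CURLOPT_VERBOSE is enabled.
// 'type' distinguishes informational text, headers, and body data.
static int debug_callback(CURL* /*handle*/, curl_infotype type,
                          char* data, size_t size, void* /*userptr*/)
{
    const char* label = "other";
    switch (type)
    {
    case CURLINFO_TEXT:       label = "text";     break;
    case CURLINFO_HEADER_IN:  label = "hdr-in";   break;
    case CURLINFO_HEADER_OUT: label = "hdr-out";  break;
    case CURLINFO_DATA_IN:    label = "data-in";  break;
    case CURLINFO_DATA_OUT:   label = "data-out"; break;
    default: break;
    }
    // Stream the detail into the log (stderr here); very chatty, as noted above.
    std::fprintf(stderr, "CURL [%s] %.*s", label, static_cast<int>(size), data);
    return 0;   // the callback must return 0
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (curl)
    {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);                    // tracing on
        curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debug_callback);  // route to log
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```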
- ... surprised me. Added a retry queue, similar to the ready queue, to the
  policy object; it is sorted by retry time. Currently we do five retries
  (after the initial try), delayed by 0.25, 0.5, 1, 2, and 5 seconds. Removed
  the retry logic from the lltexturefetch module. Upped the waiting time in
  the unit test for the retries. People won't like this, but tough; we need
  tests.
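A compact sketch of that retry schedule, with a queue ordered by the time each request becomes eligible again (illustrative C++; the delays mirror the description above, but none of the names come from the library):

```cpp
#include <chrono>
#include <queue>
#include <vector>

// Illustrative retry scheduling after the policy-object retry queue described
// above: entries are ordered by the time at which they may be retried, so the
// top of the queue is always the next retry due.
using Clock = std::chrono::steady_clock;

struct RetryEntry
{
    Clock::time_point mRetryTime;   // when this request may be retried
    unsigned int      mAttempt;     // how many retries have been issued
    int               mRequestId;   // stand-in for the real request handle
};

struct LaterRetryFirst
{
    bool operator()(const RetryEntry& a, const RetryEntry& b) const
    {
        return a.mRetryTime > b.mRetryTime;   // min-heap on retry time
    }
};

using RetryQueue =
    std::priority_queue<RetryEntry, std::vector<RetryEntry>, LaterRetryFirst>;

// Backoff delays (seconds) for retries 1..5, matching the schedule above.
static const double kRetryDelays[] = { 0.25, 0.5, 1.0, 2.0, 5.0 };
static const unsigned int kMaxRetries = 5;

// Queue the next retry, or return false if the request has used all retries.
bool scheduleRetry(RetryQueue& queue, int requestId, unsigned int attempt)
{
    if (attempt >= kMaxRetries)
        return false;
    auto delay = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(kRetryDelays[attempt]));
    queue.push(RetryEntry{ Clock::now() + delay, attempt + 1, requestId });
    return true;
}
```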
- Went through all the code and tried to document lock and thread usage in
  the module. There's a huge comment block introducing all of this at the
  beginning, and I believe it's correct (though not quite complete). Keep it
  updated, people. Added a new state, WAIT_HTTP_RESOURCE, that's sort of a
  side-state of SEND_HTTP_REQ. If we hit a high-water mark for HTTP requests,
  the extras are shunted to the new state. Once levels fall to a low-water
  mark, we run through a wait list of UUIDs, sort the valid ones by priority,
  and release them for service. This keeps the HTTP layer busy while leaving
  the active queue shallow enough that requests can still be re-prioritized
  cheaply. The priority model changed: the new state uses the PRIORITY_LOW
  mask, the old users of _LOW are now at PRIORITY_NORMAL, and sleepers woken
  up after an external event are kicked off at PRIORITY_HIGH. This
  combination, along with the new state, should avoid priority inversion and
  keep things running without resorting to an infinite pipeline. The new
  state displays as "HTW" with green text in the texture console. Request
  cancellation and worker run-down should now be more correct, but this edge
  case may need more attention.
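The high-water/low-water release described above amounts to a simple throttle plus a priority-sorted drain of the wait list. A condensed sketch (illustrative only; the real lltexturefetch state machine involves worker states and UUID lookups not shown here):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative high-water/low-water throttle for HTTP requests, in the spirit
// of the WAIT_HTTP_RESOURCE state described above.  Names are invented.
struct WaitingRequest
{
    std::uint64_t mId;          // stand-in for the asset UUID
    unsigned int  mPriority;    // larger value == more urgent
};

class HttpThrottle
{
public:
    HttpThrottle(unsigned int lowWater, unsigned int highWater)
        : mLowWater(lowWater), mHighWater(highWater), mActive(0) {}

    // Returns true if the request may be sent now; otherwise it is parked
    // on the wait list (the WAIT_HTTP_RESOURCE analogue).
    bool trySend(const WaitingRequest& req)
    {
        if (mActive >= mHighWater)
        {
            mWaitList.push_back(req);
            return false;
        }
        ++mActive;
        return true;
    }

    // Called when a request completes.  Once the active count falls to the
    // low-water mark, release waiters in priority order to refill the pipe.
    void onComplete(std::vector<WaitingRequest>& released)
    {
        if (mActive > 0) --mActive;
        if (mActive > mLowWater || mWaitList.empty())
            return;
        std::sort(mWaitList.begin(), mWaitList.end(),
                  [](const WaitingRequest& a, const WaitingRequest& b)
                  { return a.mPriority > b.mPriority; });
        std::size_t i = 0;
        while (mActive < mHighWater && i < mWaitList.size())
        {
            released.push_back(mWaitList[i]);
            ++mActive;
            ++i;
        }
        mWaitList.erase(mWaitList.begin(),
                        mWaitList.begin() + static_cast<std::ptrdiff_t>(i));
    }

private:
    unsigned int mLowWater;
    unsigned int mHighWater;
    unsigned int mActive;
    std::vector<WaitingRequest> mWaitList;
};
```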
- Identified and reacted to the priority inversion problem we have in
  texturefetch. Includes the introduction of a priority_queue for the
  requests that are ready. Started some parameterization in anticipation of
  having policy_class everywhere. Removed _assert.h, which isn't really
  needed in the indra codebase. Implemented an async setPriority request
  (which I hope to get rid of eventually, along with all priorities in this
  library). Converted to using an unsigned int for priority rather than a
  float. Implemented POST and did groundwork for PUT.
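The ready queue keyed on an unsigned integer priority can be expressed directly with std::priority_queue; a minimal sketch (the type names are placeholders, not the library's):

```cpp
#include <queue>
#include <vector>

// Illustrative ready queue keyed on an unsigned integer priority (the
// float -> unsigned int conversion mentioned above).  Names are placeholders.
struct ReadyRequest
{
    unsigned int mPriority;     // higher value is serviced first
    int          mRequestId;    // stand-in for the real request object
};

struct LowerPriorityLast
{
    bool operator()(const ReadyRequest& a, const ReadyRequest& b) const
    {
        return a.mPriority < b.mPriority;   // max-heap: highest priority on top
    }
};

using ReadyQueue =
    std::priority_queue<ReadyRequest, std::vector<ReadyRequest>, LowerPriorityLast>;
```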
- This is the first functional viewer pass with the HTTP work of the texture
  fetch code performed by the llcorehttp library. Not exactly a 'drop-in'
  replacement, but a work-alike with some changes (e.g. handler notification
  in the consumer thread versus responder notification in the worker thread).
  This also includes some temporary changes in the priority scheme to prevent
  the kind of priority inversion found in VWR-28996. The scheme used here
  does provide liveness, if not optimal responsiveness or ordering of
  operations. The llcorehttp library at this point is far from optimally
  performing: its worker thread makes relatively poor use of the cycles it
  gets, and it doesn't idle or sleep intelligently yet. This early
  integration step helps shake out the interfaces; implementation niceties
  will be covered soon.
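"Handler notification in the consumer thread" generally means the worker thread only queues completions, and the callbacks run when the consumer thread polls. A small generic sketch of that pattern (not llcorehttp's actual API):

```cpp
#include <deque>
#include <functional>
#include <mutex>

// Illustrative completion delivery: the worker thread records finished
// requests, and handlers execute only when the consumer thread drains the
// queue from its own loop.  Names are invented.
class CompletionQueue
{
public:
    using Handler = std::function<void()>;

    // Worker thread: record that a request finished; do not run the handler.
    void post(Handler handler)
    {
        std::lock_guard<std::mutex> lock(mMutex);
        mPending.push_back(std::move(handler));
    }

    // Consumer thread: called from its main loop; runs handlers on this thread.
    void deliver()
    {
        std::deque<Handler> ready;
        {
            std::lock_guard<std::mutex> lock(mMutex);
            ready.swap(mPending);
        }
        for (Handler& h : ready)
            h();
    }

private:
    std::mutex mMutex;
    std::deque<Handler> mPending;
};
```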
- ... from cache
- ... on and off.
- ... HTTP when all objects loading are done.
- As a viewer architect, I would like to understand how fast each of the
  components of the texture pipeline can run in isolation.