The big delta was converting the new texture debugger support code
to the new library. The viewer manifest should probably get an eyeball
before release.

aggressive shutdown of a thread.
Some additional work let me enable a memory check for the clean shutdown case and
generally do a better job on other interfaces. Request queue waiters now wake
on shutdown and don't sleep once the queue is turned off. Much better semantically
for how this will be used.
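
For illustration only, a minimal sketch of that wake-on-shutdown behavior using standard C++ primitives; the class and method names below are hypothetical, not the actual llcorehttp request queue:

    #include <condition_variable>
    #include <deque>
    #include <mutex>

    // Hypothetical: waiters block until work arrives or the queue is shut
    // down; after shutdown they are woken once and never sleep again.
    class ShutdownQueue
    {
    public:
        void push(int op)
        {
            {
                std::lock_guard<std::mutex> lock(mMutex);
                if (mShutdown)
                    return;                 // refuse new work once turned off
                mQueue.push_back(op);
            }
            mCond.notify_one();
        }

        bool waitPop(int & op)
        {
            std::unique_lock<std::mutex> lock(mMutex);
            mCond.wait(lock, [this] { return mShutdown || !mQueue.empty(); });
            if (mQueue.empty())
                return false;               // shut down and drained; caller exits
            op = mQueue.front();
            mQueue.pop_front();
            return true;
        }

        void shutdown()
        {
            {
                std::lock_guard<std::mutex> lock(mMutex);
                mShutdown = true;
            }
            mCond.notify_all();             // wake every waiter immediately
        }

    private:
        std::mutex mMutex;
        std::condition_variable mCond;
        std::deque<int> mQueue;
        bool mShutdown = false;
    };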

in library.
With this commit, the cleanup paths should be production quality. Unit tests have been
expanded to include cases requiring thread termination and cleanup by the worker thread.
Special operation/request added to support the unit tests. Thread interface expanded
to include a very aggressive cancel() method that does not do cleanup but prevents the
thread from accessing objects that will be destroyed.

Groundwork is used for the default class which currently represents
texture fetching. Class options implemented from API user into
HttpLibcurl. Policy layer is going to start doing some traffic-shaping-like
work to solve problems with consumer-grade gear.
Need to have dynamic aspects to policies and that starts now...

Seems to be working correctly. Not certain this is the fastest possible way
to provide a std::streambuf interface but it's visually acceptable.
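
Not the actual adapter, but a minimal read-only std::streambuf sketch over a single in-memory buffer to show the kind of interface being provided (names here are made up):

    #include <iostream>
    #include <istream>
    #include <streambuf>
    #include <string>

    // Hypothetical: exposes an existing byte buffer through std::streambuf
    // so it can be read with ordinary std::istream operations.
    class BufferStreamBuf : public std::streambuf
    {
    public:
        BufferStreamBuf(const char * data, size_t size)
        {
            char * p = const_cast<char *>(data);
            setg(p, p, p + size);       // beginning, current, end of get area
        }
    };

    int main()
    {
        static const char body[] = "HTTP response body bytes";
        BufferStreamBuf buf(body, sizeof(body) - 1);
        std::istream in(&buf);

        std::string word;
        while (in >> word)
            std::cout << word << '\n';
        return 0;
    }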

Initial version that should have enough of the plumbing to produce
a working adapter. Memory test is showing 8 bytes held after one
of the tests so I'm going to revisit that later. But basic
functionality is there going by the unit tests.

The only interesting thing in this changeset is the discovery that a sleep
in the fake HTTP server ties up tests. Need to thread that or fail on
client disconnect or something to speed that up and make it usable for
bigger test scenarios. But good enough for now...

<offset, length, fulllength>.

Pretty straightforward. Still don't like how I'm managing
the options block. Struct? Accessors? Can't decide. But
the options now speed up the unit test runs even as I add
tests.
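
The two styles being weighed might look like this; purely illustrative, not the real options type:

    // Plain aggregate: cheap to use, no invariants enforced.
    struct TestOptions
    {
        int  mPort = 0;
        bool mVerbose = false;
    };

    // Accessor style: room for validation and later change, more boilerplate.
    class TestOptions2
    {
    public:
        void setPort(int port)        { mPort = port; }
        int  getPort() const          { return mPort; }
        void setVerbose(bool verbose) { mVerbose = verbose; }
        bool getVerbose() const       { return mVerbose; }

    private:
        int  mPort = 0;
        bool mVerbose = false;
    };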

log on exit.
With much trial-and-error, cleaned up the banner on the texture console and made everything
mostly fit. Added global cache read, cache write and resource wait count events to the
console display to show if cache is working. On clean exit, emit a log line to report
stats to log file (intended for automated tests, maybe):
LLTextureFetch::endThread: CacheReads: 2618, CacheWrites: 117, ResWaits: 0, TotalHTTPReq: 117

now.

Beefed up the metrics gathering in http_texture_load to get memory sizes and
CPU consumption on Windows (still need to implement that on Mac & Linux).
Did runs with various idle-loop sleeps from 20 ms down to pure spinning and
varied block allocation size from 1504 to 2^20 bytes. 2ms/2ms/65540 appears
to be a good spot under the test conditions (Win7, danu grid, client in Boston).

numbers.
This is a command-line utility to pull content down from a service through
the llcorehttp library to produce timings and resource footprints.

LLProxy support, HttpOptions starting to work, HTTP resource waiting fixed.
Non-LLThread-based threads need to do some registration or LLMutex locks taken out in these
threads will not work as expected (SH-3154). We'll get a better solution later, this fixes
some things for now. Tracing of operations now supported. Global and per-request (via
HttpOptions) tracing levels of [0..3]. The 2 and 3 levels use libcurl's VERBOSE mode
combined with CURLOPT_DEBUGFUNCTION to stream high levels of detail into the log. *Very*
laggy but useful. Simple GET request supported (no Range: header). Really just a
degenerate case of a ranged get but supplied an API anyway. Global option to use the
LLProxy interface to set up CURL handles for either socks5 or http proxy usage. This
isn't really the most encapsulated way to do this but a better solution will have to
come later. The wantHeaders and tracing options are now supported in HttpOptions giving
per-request controls. Big refactoring of the HTTP resource waiter in lltexturefetch.
What I was doing before wasn't correct. Instead, I'm implementing the resource wait
after the Semaphore model (though not using system semaphores). So instead of having
a sequence like: SEND_HTTP_REQ -> WAIT_HTTP_RESOURCE -> SEND_HTTP_REQ, we now
do WAIT_HTTP_RESOURCE -> WAIT_HTTP_RESOURCE2 (actual wait) -> SEND_HTTP_REQ. Works
well but the prioritized filling of the corehttp library needs some performance
work later.
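
The level 2/3 tracing mentioned above rests on a standard libcurl pattern; a stripped-down sketch of that pattern (the callback body and logging here are simplified stand-ins, not the library's actual trace code):

    #include <curl/curl.h>
    #include <cstdio>

    // Debug callback invoked by libcurl when CURLOPT_VERBOSE is enabled.
    // 'type' distinguishes headers, body data and informational text.
    static int trace_callback(CURL * handle, curl_infotype type,
                              char * data, size_t size, void * userptr)
    {
        (void) handle;
        (void) userptr;
        const char * tag = (type == CURLINFO_HEADER_IN)  ? "HEADER_IN"
                         : (type == CURLINFO_HEADER_OUT) ? "HEADER_OUT"
                         : (type == CURLINFO_TEXT)       ? "TEXT"
                         : "DATA";
        std::fprintf(stderr, "TRACE [%s] %.*s", tag, (int) size, data);
        return 0;
    }

    void enable_tracing(CURL * handle)
    {
        curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION, trace_callback);
        curl_easy_setopt(handle, CURLOPT_DEBUGDATA, nullptr);
        curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);   // required for the callback
    }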

The NORMAL range doesn't do any sleeping at all and so we'll
spin the core harder than we already are. Bring all idlers
into the same range.

For now, workaround...

Implemented/modified PUT & POST to not use chunked encoding for the request.
Made the unit test much happier and probably a better thing for the pipeline.
Have a cheesy static & dynamic proxy capability using both local options and
a way to wire into LLProxy in llmessages. Not a clean thing but it will get
the proxy path working with both socks5 & http proxies. Refactored to get
rid of an unneeded library handler and unified the HttpStatus return for all
requests. Big batch of code removed as a result of that and more is possible
as well as some syscall avoidance with a bit more work. Boosted the unit
tests for simple PUT & POST test which revealed the test harness does *not*
like chunked encoding so we'll avoid it for now (and don't really need it
in any of our schemes).
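
The chunked-encoding avoidance comes down to giving libcurl a known body size up front; a rough sketch of the relevant options (request setup details omitted, function names are illustrative):

    #include <curl/curl.h>
    #include <string>

    // When libcurl knows the body length it sends a Content-Length header;
    // without it, an HTTP/1.1 upload falls back to Transfer-Encoding: chunked.
    void setup_post(CURL * handle, const std::string & body)
    {
        curl_easy_setopt(handle, CURLOPT_POST, 1L);
        curl_easy_setopt(handle, CURLOPT_POSTFIELDS, body.c_str());
        curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE, (long) body.size());
    }

    void setup_put(CURL * handle, curl_off_t body_size)
    {
        curl_easy_setopt(handle, CURLOPT_UPLOAD, 1L);
        // A read callback supplies the data; the explicit size keeps the
        // request un-chunked.
        curl_easy_setopt(handle, CURLOPT_INFILESIZE_LARGE, body_size);
    }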

Beware of bad documentation. operator--(int) does not return what
the header claimed it did.
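
For reference, the canonical post-decrement shape in C++ (a generic example, not the class from this changeset): the int-tagged overload returns a copy of the value held before the decrement.

    // Canonical pre/post decrement pair for a counter-like type.
    struct Counter
    {
        int value = 0;

        Counter & operator--()        // pre-decrement: returns the new value
        {
            --value;
            return *this;
        }

        Counter operator--(int)       // post-decrement: returns the old value
        {
            Counter old(*this);
            --(*this);
            return old;
        }
    };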

This brings in a copy of llmessage's llsdmessage testing server. We run
a mocked HTTP service to handle requests and the integration tests run
against it by picking up the LL_TEST_PORT environment variable when running.
Add some checks and output to produce useful info when run in the wrong
environment and when bad status is received. Later will add a dead port
as well so we can test that rather than use 'localhost:2'.
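
The port pickup is just an environment-variable read; a minimal sketch of the idea (the helper name and error text below are made up):

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical helper: read LL_TEST_PORT or fail with a useful message
    // instead of mysteriously refusing connections.
    static int test_port_or_die()
    {
        const char * port = std::getenv("LL_TEST_PORT");
        if (!port)
        {
            std::fprintf(stderr,
                         "LL_TEST_PORT is not set; integration tests expect the "
                         "mock HTTP server's port in this variable.\n");
            std::exit(1);
        }
        return std::atoi(port);
    }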

HttpRequest::update() honor time limit.
Generally, opaque data operations are expected to be over 'void *' and have
now converted interfaces to do that. Update() method honors millisecond limit to dwell
time. Might want to homologate the millis/uSecs mix later....
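
The time-limited update amounts to draining queued work until a millisecond budget is spent; roughly this shape (not the real signature, just the idea):

    #include <chrono>

    // Process queued items until the queue is empty or 'millis' have elapsed.
    // Returns the number of items handled.  Illustrative only.
    template <typename Queue>
    int update_with_limit(Queue & queue, long millis)
    {
        using clock = std::chrono::steady_clock;
        const auto deadline = clock::now() + std::chrono::milliseconds(millis);

        int handled = 0;
        while (!queue.empty() && clock::now() < deadline)
        {
            queue.front()();        // hypothetical: each queued item is callable
            queue.pop();
            ++handled;
        }
        return handled;
    }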

206/content-range hack in xport.
Retry/response handling is decided in policy so moved that there. Removed the special-case
206-without-content-range response in transport. Made this situation recognizable in the
API and let callers deal with it as needed.

cleanup.
Our logging holds on to a changing bit of memory between operations and the memory
leak detection I'm using senses this and complains. So, for now, disable the
final memory check on Mac & Linux, leave it active on Windows. Solve this for
real some other day. Add try/catch blocks to do cleanup in unit tests that go
wrong so that we don't get a cascade of assertion failures when subsequent tests
find singletons still alive.

surprised me. Added a retry queue to the policy object, similar to the
ready queue but sorted by retry time. Currently do five
retries (after the initial try) delayed by .25, .5, 1, 2 and 5
seconds. Removed the retry logic from the lltexturefetch module.
Upped the waiting time in the unit test for the retries. People
won't like this but tough, need tests.
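
A sketch of that retry scheduling, assuming a min-heap ordered by earliest retry time; the names are illustrative, only the delay values come from the message above:

    #include <chrono>
    #include <queue>

    using Clock = std::chrono::steady_clock;

    // Delays applied after the initial attempt.
    static const std::chrono::milliseconds RETRY_DELAYS[] =
    {
        std::chrono::milliseconds(250),
        std::chrono::milliseconds(500),
        std::chrono::milliseconds(1000),
        std::chrono::milliseconds(2000),
        std::chrono::milliseconds(5000)
    };

    struct RetryEntry
    {
        Clock::time_point mRetryAt;
        int               mRequestId;   // stand-in for the real request handle
        int               mTryCount;

        // std::priority_queue is a max-heap, so invert the comparison to pop
        // the soonest retry first.
        bool operator<(const RetryEntry & rhs) const
        {
            return mRetryAt > rhs.mRetryAt;
        }
    };

    using RetryQueue = std::priority_queue<RetryEntry>;

    // Schedule the next retry, or give up after five of them.
    bool schedule_retry(RetryQueue & queue, int request_id, int try_count)
    {
        if (try_count >= 5)
            return false;               // out of retries; fail the request
        RetryEntry entry;
        entry.mRetryAt = Clock::now() + RETRY_DELAYS[try_count];
        entry.mRequestId = request_id;
        entry.mTryCount = try_count + 1;
        queue.push(entry);
        return true;
    }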

who want to try IPv6 can still override at will using CURLOPT_IPRESOLVE.
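
CURLOPT_IPRESOLVE is a per-handle libcurl option; a minimal illustration of pinning resolution to IPv4 and of the override values available for IPv6 experiments (the default chosen by this changeset isn't shown in the excerpt, so the IPv4 choice below is an assumption):

    #include <curl/curl.h>

    // Restrict name resolution for this handle to IPv4 addresses.
    void pin_ipv4(CURL * handle)
    {
        curl_easy_setopt(handle, CURLOPT_IPRESOLVE, (long) CURL_IPRESOLVE_V4);
    }

    // Available overrides:
    //   CURL_IPRESOLVE_WHATEVER  - use whatever the resolver returns
    //   CURL_IPRESOLVE_V4        - IPv4 only
    //   CURL_IPRESOLVE_V6        - IPv6 only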

now avoiding doing HTTP fetches for read data. Not certain it's
completely correct but the difference is already significant.

Went through all the code and tried to document lock and thread usage
in the module. There's a huge comment block introducing all of this
at the beginning and I believe it's correct (though not quite complete).
Keep it updated, people. Added a new state, WAIT_HTTP_RESOURCE, that's
sort of a side-state of SEND_HTTP_REQ. If we hit a high-water mark
for HTTP requests, the extra requests are shunted to the new state. Once
levels fall to a low-water mark, we run through a wait list of UUIDs,
sort the valid ones by priority and release them for service. This
keeps the HTTP layer busy while leaving the active queue shallow enough
that requests can still be re-prioritized cheaply. Priority model
changed. The new state uses the PRIORITY_LOW mask, the old users
of _LOW are now at PRIORITY_NORMAL and sleepers woken up after an
external event are kicked off at PRIORITY_HIGH. This combination
along with the new state should avoid priority inversion and keep
things running without resorting to an infinite pipeline. New
state displays as "HTW" with green text in the texture console.
Request cancelation and worker run-down should now be more
correct but this edge case may need more attention.
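
A rough sketch of the high/low-water release pass described above, with types simplified to a UUID-to-priority map and made-up water-mark values; the real state machine obviously carries much more:

    #include <algorithm>
    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    // Illustrative water marks; the real values live in lltexturefetch.
    static const size_t HIGH_WATER = 40;
    static const size_t LOW_WATER  = 20;

    // When the active HTTP count has fallen to the low-water mark, pull
    // entries off the wait list, sort them by priority, and release enough
    // to refill toward the high-water mark.
    size_t release_waiters(std::map<std::string, float> & wait_list,   // UUID -> priority
                           size_t active_count,
                           std::vector<std::string> & released)
    {
        if (active_count > LOW_WATER || wait_list.empty())
            return 0;

        std::vector<std::pair<float, std::string> > by_priority;
        for (const auto & entry : wait_list)
            by_priority.emplace_back(entry.second, entry.first);

        // Highest priority first.
        std::sort(by_priority.rbegin(), by_priority.rend());

        const size_t slots = HIGH_WATER - active_count;
        for (size_t i = 0; i < by_priority.size() && i < slots; ++i)
        {
            released.push_back(by_priority[i].second);
            wait_list.erase(by_priority[i].second);
        }
        return released.size();
    }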