|
|
|
|
|
point to duplicated code. Replaced the hard-coded tcmalloc link option with a variable that is created in GooglePerfTools.cmake.
|
|
|
|
will run with higher connection concurrencies. I'm using this to
test the listener queue length reporting on the apaches, and everything
is consistent and as expected with this change (it was stuck at eight before).
|
|
Bumped the default retry limit up from 5 to 8, which gives up to
15 seconds more dwell time should the viewer get a 503 or other
recoverable error on access.
|
|
Given that third-party libraries (such as Boost) can and do use those names,
properly namespace-scoped, it's unpardonable to break any such innocent usage
with a macro. Given the pervasiveness of the need, introduce a header file
with the requisite #undef directives.
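
For illustration, the header boils down to a set of guarded #undef directives
like the sketch below; the specific macro names and the header's actual name
aren't shown in this log, so min/max here are just stand-ins for the offenders.

    // Hypothetical sketch of the #undef header described above.  The macro
    // names (min/max are common offenders from <windows.h>) are assumptions,
    // not the actual contents.  No include guard, so the header can be
    // re-included after any header that redefines the macros.
    #ifdef min
    #undef min      // let namespace-scoped uses like boost::... compile again
    #endif
    #ifdef max
    #undef max
    #endif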
|
|
|
|
CMake files were not merged correctly and had to be fixed by hand. The new
memory allocation made some memory usage tests in the llcorehttp integration
tests no longer valid. Would like to work on LLLog sometime and get
it to be consistent. Special flags are needed for the Windows build of the
example program.
|
|
Reformatted messages around request retry. Successfully retried requests
now also log a message so you can see the cycle closed. Added additional
retryable error codes (timeout, other libcurl failures). Added commenting
and removed some unnecessary std::min logic.
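
Roughly, the retryability check is a switch over libcurl result codes; a
minimal sketch follows, where the codes beyond the timeout case are
illustrative assumptions rather than the exact set added here.

    #include <curl/curl.h>

    // Sketch of a retryable-error test over libcurl result codes.  Timeout
    // is named in the commit; the other codes are illustrative assumptions.
    static bool is_retryable(CURLcode code)
    {
        switch (code)
        {
        case CURLE_OPERATION_TIMEDOUT:      // request or connect timed out
        case CURLE_COULDNT_CONNECT:         // transient connect failure
        case CURLE_SEND_ERROR:              // failed sending network data
        case CURLE_RECV_ERROR:              // failed receiving network data
            return true;
        default:
            return false;
        }
    }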
|
|
|
|
Add to-do list to _httpinternal.h to guide anyone who
wants to pitch in and help.
|
|
|
|
Define expectations for headers for GET, POST, and PUT requests.
Document those in the interface, test those with integration tests.
Verify that header overrides work as expected.
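
At the libcurl level, a per-request header override comes down to supplying a
replacement value (or a bare name to suppress a default); the sketch below
uses assumed header values and a hypothetical helper, not the llcorehttp API.

    #include <curl/curl.h>

    // Override mechanics: a header with the same name replaces libcurl's
    // default, and "Name:" with no value removes it entirely.
    static struct curl_slist * apply_header_overrides(CURL * handle)
    {
        struct curl_slist * headers = NULL;
        headers = curl_slist_append(headers, "Accept: image/x-j2c"); // override the default Accept
        headers = curl_slist_append(headers, "Expect:");             // suppress "Expect: 100-continue"
        curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
        return headers;   // caller frees with curl_slist_free_all() after the request
    }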
|
|
When releasing HTTP waiters, avoid unnecessary sort activity.
For Content-Type in responses, let libcurl do the work; removed
my parsing of headers. Drop Content-Encoding as libcurl will deal
with that. If anyone is interested, they can parse it themselves.
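
"Let libcurl do the work" here means asking it for the parsed value after the
transfer rather than scanning raw header lines; a sketch with a hypothetical
helper, not the library's interface:

    #include <curl/curl.h>
    #include <string>

    // After a completed transfer libcurl reports Content-Type directly,
    // so no hand-rolled header parsing is needed.
    static std::string response_content_type(CURL * handle)
    {
        char * content_type = NULL;
        if (curl_easy_getinfo(handle, CURLINFO_CONTENT_TYPE, &content_type) == CURLE_OK
            && content_type)
        {
            return std::string(content_type);   // e.g. "image/x-j2c"
        }
        return std::string();                   // header absent or type unknown
    }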
|
|
First round of integration tests. Added a request header 'reflector'
to the web server to send the client's headers back with an 'X-Reflect-'
prefix. Use boost::regex to check various headers. Run a test on
a simple GET and a byte-ranged GET a la texture fetch.
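
The checks amount to regex searches over the reflected headers; a sketch of
one such check, with an assumed header name and pattern rather than the
actual test code:

    #include <boost/regex.hpp>
    #include <string>

    // The test server echoes request headers back as 'X-Reflect-<name>',
    // so the test can assert that e.g. the Range header went out intact.
    // The pattern below is an illustrative assumption.
    static bool saw_reflected_range_header(const std::string & reflected_headers)
    {
        boost::regex pattern("X-Reflect-range:\\s*bytes=\\d+-\\d+",
                             boost::regex::icase);
        return boost::regex_search(reflected_headers, pattern);
    }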
|
|
Using http_texture_load as the test subject, the library looks clean. Did
some better shutdown in the program itself and it looks better. Libcurl
itself is making a lot of noise. Adapted testrunner to run valgrind as
well, but the memory allocation tester in the tools grossly interferes
with Valgrind operation.
|
|
HttpResponse object now has two strings for these content headers.
Either or both may be empty. Tidied up the cross-platform string
code and got more defensive about the length of a header line.
Integration test for the new response object.
|
|
Well, achieved that by doing work in bulk when needed. But it
turned into some additional things. Changed the timebase from
mS to uS as, well, things are headed that way. Implemented
an HttpReplyQueue::fetchAll method (it was advertised but hadn't
been implemented).
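
The bulk-work idea behind fetchAll is to take the lock once and hand back
everything queued; a minimal sketch with assumed types and member names, not
the library's internals:

    #include <mutex>
    #include <vector>

    // Sketch of a fetch-everything method: swap the whole backlog out to
    // the caller under one lock acquisition instead of popping per item.
    template <typename OP>
    class ReplyQueueSketch
    {
    public:
        void post(const OP & op)
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mQueue.push_back(op);
        }

        void fetchAll(std::vector<OP> & ops)
        {
            std::lock_guard<std::mutex> lock(mMutex);
            ops.swap(mQueue);       // O(1) hand-off of the entire backlog
            mQueue.clear();         // internal queue left empty and reusable
        }

    private:
        std::mutex      mMutex;
        std::vector<OP> mQueue;
    };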
|
|
The 30-second hang no longer breaks subsequent tests. Did this by
introducing threads into the HTTP server, as I can't find the magic
to detect that my client has gone away.
|
|
First, try to issue ranged GETs that are always at least partially
satisfiable. This will keep Varnish-type caches from simply sending
back 200/full asset responses to unsatisfiable requests. Implement
awareness of Content-Range headers as well. Currently they're not
coming back but they will be someday.
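
At the libcurl level a ranged GET is just CURLOPT_RANGE plus readiness for
the Content-Range header on a 206; a sketch with illustrative values, where
the clamping-to-satisfiable logic itself isn't shown:

    #include <curl/curl.h>
    #include <string>

    // libcurl takes the byte range as an inclusive "first-last" string;
    // a 206 reply may then carry e.g.
    //   Content-Range: bytes 0-16383/2097152   (<offset>-<last>/<full length>)
    static void set_byte_range(CURL * handle, size_t offset, size_t length)
    {
        std::string range = std::to_string(offset) + "-"
                          + std::to_string(offset + length - 1);
        curl_easy_setopt(handle, CURLOPT_RANGE, range.c_str());
    }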
|
|
Also added some comments and changed the callback userdata argument
to be an HttpOpRequest rather than a libcurl handle. Less code,
less clutter.
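
The userdata change means the write callback receives the operation object
directly; a sketch with a stand-in type (OpRequestSketch, not the real
HttpOpRequest):

    #include <curl/curl.h>

    // Register the request object itself as the callback's userdata so the
    // callback has the request state at hand, with no handle lookup.
    struct OpRequestSketch
    {
        OpRequestSketch() : mBytesReceived(0) {}
        size_t mBytesReceived;
    };

    static size_t write_cb(char * data, size_t size, size_t nmemb, void * userdata)
    {
        (void) data;
        OpRequestSketch * op = static_cast<OpRequestSketch *>(userdata);
        op->mBytesReceived += size * nmemb;     // accumulate into the request object
        return size * nmemb;                    // report all bytes consumed
    }

    static void wire_up(CURL * handle, OpRequestSketch * op)
    {
        curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(handle, CURLOPT_WRITEDATA, op);    // op, not the CURL handle
    }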
|
|
|
|
sort things out or use policy classes (eventually) to arrange low
and high priority traffic. Subjectively, I think this works better
in practice (as I haven't implemented a dynamic priority setter yet).
|
|
Think I have found the major factor that causes the Linksys WRT54G V5 to
fall over in testing scenarios: DNS. For some historical reason, we're
trying to use libcurl without any DNS caching. My implementation echoed
that, implemented it correctly, and I was seeing a DNS request per request
on the wire. The existing implementation tries to do that but has bugs:
it is clearly caching DNS data, querying only once every few
seconds. Once I started emulating the bug, comms through the WRT became
much, much more reliable.
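
The relevant knob in libcurl is CURLOPT_DNS_CACHE_TIMEOUT; a sketch of the
two behaviors, where the 15-second figure is an assumption rather than the
value actually chosen:

    #include <curl/curl.h>

    // 0 disables the per-handle DNS cache (one lookup per request, which
    // hammered the WRT54G); a positive value reuses lookups for that many
    // seconds, emulating the "bug" that keeps the router happy.
    static void set_dns_caching(CURL * handle, bool emulate_old_behavior)
    {
        long seconds = emulate_old_behavior ? 15L : 0L;     // 15s is illustrative
        curl_easy_setopt(handle, CURLOPT_DNS_CACHE_TIMEOUT, seconds);
    }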
|
|
|
|
Data problems after connections are established should be retried as well.
Extend to appropriate libcurl codes. Also allow our connectivity to drop
to as low as a single connection when trying to recover.
|
|
The fetch state machine received a new timeout during the WAIT_HTTP_REQ
state. For the integration, rather than jump the state to done, we issue
a request cancel and let the notification plumbing do the rest without
any race conditions or special-case logic.
|
|
Big delta was converting the new texture debugger support code
to the new library. Viewer manifest should probably get an eyeball
before release.
|
|
aggressive shutdown of a thread.
Some additional work let me enable a memory check for the clean shutdown case and
generally do a better job on other interfaces. Request queue waiters now wake
on shutdown and don't sleep once the queue is turned off. Much better semantically
for how this will be used.
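
In standard-library terms (the library uses its own thread primitives, so
this is only a sketch of the semantics), the wake-on-shutdown behavior looks
like this:

    #include <condition_variable>
    #include <mutex>
    #include <deque>

    // Waiters block while the queue is empty and running, and are all woken
    // when the queue is turned off so they return instead of sleeping forever.
    template <typename OP>
    class WaitableQueueSketch
    {
    public:
        void post(const OP & op)
        {
            {
                std::lock_guard<std::mutex> lock(mMutex);
                mQueue.push_back(op);
            }
            mCond.notify_one();
        }

        void stop()
        {
            {
                std::lock_guard<std::mutex> lock(mMutex);
                mRunning = false;
            }
            mCond.notify_all();                 // wake every waiter on shutdown
        }

        bool fetch(OP & op)                     // returns false once turned off
        {
            std::unique_lock<std::mutex> lock(mMutex);
            mCond.wait(lock, [this] { return !mQueue.empty() || !mRunning; });
            if (!mRunning)
                return false;
            op = mQueue.front();
            mQueue.pop_front();
            return true;
        }

    private:
        std::mutex              mMutex;
        std::condition_variable mCond;
        std::deque<OP>          mQueue;
        bool                    mRunning = true;
    };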
|
|
in library.
With this commit, the cleanup paths should be production quality. Unit tests have been
expanded to include cases requiring thread termination and cleanup by the worker thread.
Special operation/request added to support the unit tests. Thread interface expanded
to include a very aggressive cancel() method that does not do cleanup but prevents the
thread from accessing objects that will be destroyed.
|
|
Groundwork is used for the default class, which currently represents
texture fetching. Class options are implemented from the API user into
HttpLibcurl. The policy layer is going to start doing some traffic-shaping-like
work to solve problems with consumer-grade gear.
Need to have dynamic aspects to policies and that starts now...
|
|
Seems to be working correctly. Not certain this is the fastest possible way
to provide a std::streambuf interface but it's visually acceptable.
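
The shape of such an adapter is small; a minimal read-only sketch over a
contiguous buffer (the real adapter presumably sits over the library's own
buffer type, which isn't reproduced here):

    #include <streambuf>
    #include <istream>
    #include <vector>

    // Read-only streambuf over an in-memory byte buffer: hand the whole
    // buffer to the get area once, and std::istream does the rest.
    class VectorReadBuf : public std::streambuf
    {
    public:
        explicit VectorReadBuf(std::vector<char> & data)
        {
            setg(data.data(), data.data(), data.data() + data.size());
        }
    };

    // Usage sketch:
    //   std::vector<char> bytes = ...;
    //   VectorReadBuf buf(bytes);
    //   std::istream in(&buf);      // parse via the familiar istream API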
|
|
|
|
|
|
|
|
Initial version that should have enough of the plumbing to produce
a working adapter. Memory test is showing 8 bytes held after one
of the tests so I'm going to revisit that later. But basic
functionality is there going by the unit tests.
|
|
Only thing interesting in this changeset is the discovery that a sleep
in the fake HTTP server ties up tests. Need to thread that or fail on
client disconnect or something to speed that up and make it usable for
bigger test scenarios. But good enough for now...
|
|
<offset, length, fulllength>.
|
|
|
|
Pretty straightforward. Still don't like how I'm managing
the options block. Struct? Accessors? Can't decide. But
the options now speed up the unit test runs even as I add
tests.
|
|
|
|
now.
|
|
|
|
Beefed up the metrics gathering in http_texture_load to get memory sizes and
CPU consumption on Windows (still need to implement that on Mac & Linux).
Did runs with various idle loops with sleeps from 20 ms down to pure spinning,
and varied Block allocation size from 1504 to 2^20 bytes. 2ms/2ms/65540 appears
to be a good spot under the test conditions (Win7, danu grid, client in Boston).
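
On Windows the memory and CPU numbers are available from psapi and the
process-times API; a sketch of that sampling (the exact counters
http_texture_load reports aren't shown in this log):

    #include <windows.h>
    #include <psapi.h>      // link with psapi.lib

    // Sample working-set size and accumulated kernel+user CPU time for the
    // current process.  Illustrative only.
    static void sample_process_metrics(SIZE_T & working_set_bytes,
                                       ULONGLONG & cpu_100ns)
    {
        working_set_bytes = 0;
        cpu_100ns = 0;

        PROCESS_MEMORY_COUNTERS mem = {};
        mem.cb = sizeof(mem);
        if (GetProcessMemoryInfo(GetCurrentProcess(), &mem, sizeof(mem)))
        {
            working_set_bytes = mem.WorkingSetSize;
        }

        FILETIME create, exit_t, kernel, user;
        if (GetProcessTimes(GetCurrentProcess(), &create, &exit_t, &kernel, &user))
        {
            ULARGE_INTEGER k, u;
            k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
            u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;
            cpu_100ns = k.QuadPart + u.QuadPart;   // kernel + user time, 100 ns units
        }
    }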
|
|
|
|
numbers.
This is a command-line utility to pull content down from a service through
the llcorehttp library to produce timings and resource footprints.
|
|
LLProxy support, HttpOptions starting to work, HTTP resource waiting fixed.

Non-LLThread-based threads need to do some registration or LLMutex locks taken out in these
threads will not work as expected (SH-3154). We'll get a better solution later; this fixes
some things for now.

Tracing of operations is now supported, with global and per-request (via
HttpOptions) tracing levels of [0..3]. The 2 and 3 levels use libcurl's VERBOSE mode
combined with CURLOPT_DEBUGFUNCTION to stream high levels of detail into the log. *Very*
laggy but useful.

Simple GET request supported (no Range: header). Really just a
degenerate case of a ranged GET but supplied an API anyway.

Global option to use the LLProxy interface to set up CURL handles for either socks5
or http proxy usage. This isn't really the most encapsulated way to do this but a
better solution will have to come later. The wantHeaders and tracing options are now
supported in HttpOptions, giving per-request controls.

Big refactoring of the HTTP resource waiter in lltexturefetch.
What I was doing before wasn't correct. Instead, I'm implementing the resource wait
after the semaphore model (though not using system semaphores). So instead of having
a sequence like SEND_HTTP_REQ -> WAIT_HTTP_RESOURCE -> SEND_HTTP_REQ, we now
do WAIT_HTTP_RESOURCE -> WAIT_HTTP_RESOURCE2 (actual wait) -> SEND_HTTP_REQ. Works
well, but the prioritized filling of the corehttp library needs some performance
work later.
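
The level 2/3 tracing rides on libcurl's verbose/debug callback machinery; a
sketch that just tags and prints, where the callback name and its filtering
are assumptions (the real code routes this into the viewer log):

    #include <curl/curl.h>
    #include <cstdio>

    // libcurl hands informational text and raw headers to the debug callback
    // when verbose mode is on; this sketch prints them with a tag.
    static int trace_cb(CURL * handle, curl_infotype type, char * data,
                        size_t size, void * userptr)
    {
        (void) handle;
        (void) userptr;
        if (type == CURLINFO_TEXT || type == CURLINFO_HEADER_IN || type == CURLINFO_HEADER_OUT)
        {
            std::fprintf(stderr, "CURL TRACE: %.*s", static_cast<int>(size), data);
        }
        return 0;       // libcurl requires the callback to return 0
    }

    static void enable_tracing(CURL * handle)
    {
        curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION, trace_cb);
        curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);  // debug callback only fires in verbose mode
    }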
|
|
For now, workaround...
|
|
Implemented/modified PUT & POST to not use chunked encoding for the request.
Made the unit test much happier and probably a better thing for the pipeline.
Have a cheesy static & dynamic proxy capability using both local options and
a way to wire into LLProxy in llmessages. Not a clean thing, but it will get
the proxy path working with both socks5 & http proxies. Refactored to get
rid of an unneeded library handler and unified on an HttpStatus return for all
requests. A big batch of code was removed as a result of that, and more is possible,
as well as some syscall avoidance with a bit more work. Boosted the unit
tests with simple PUT & POST tests, which revealed the test harness does *not*
like chunked encoding so we'll avoid it for now (and don't really need it
in any of our schemes).
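
Avoiding chunked encoding at the libcurl level comes down to declaring the
body size up front so a Content-Length header is sent instead; a sketch with
the read callback and data source omitted:

    #include <curl/curl.h>

    // Giving libcurl an explicit body size makes it send Content-Length
    // rather than "Transfer-Encoding: chunked".
    static void setup_post_without_chunking(CURL * handle,
                                            const char * body, long body_len)
    {
        curl_easy_setopt(handle, CURLOPT_POST, 1L);
        curl_easy_setopt(handle, CURLOPT_POSTFIELDS, body);          // body kept alive by caller
        curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE, body_len);   // known size => Content-Length
    }

    static void setup_put_without_chunking(CURL * handle, curl_off_t body_len)
    {
        curl_easy_setopt(handle, CURLOPT_UPLOAD, 1L);                 // PUT via upload + read callback
        curl_easy_setopt(handle, CURLOPT_INFILESIZE_LARGE, body_len); // known size => Content-Length
    }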
|