|
and llunittype.h for now
|
LF, and trim trailing whitespace as needed
|
GetTexture and GetMesh2 at a pipeline depth of 5. Create
a global debug option, HttpPipelining, to enable and disable
HTTP pipelining (defaults to true). Tweak texture and
mesh low- and high-water request levels based on pipelining
status and depth. Fix up the texture console, which was damaged
in a recent release. Split logging of the no-request
HTTP error case into two cases: one for a missing URL in the
HTTP request, one for an HTTP request that was never created.
A refactor in llcorehttp is coming: I will be moving all
libcurl-using code into libcurl-specific modules.
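The water-level tweak is easiest to see in miniature. Below is a minimal
sketch of scaling request levels by pipeline depth; the HttpPipelining
option name and the depth of 5 come from this commit, but the base
levels, function name, and scaling rule are purely illustrative:

    // Hypothetical sketch of depth-scaled water levels. Only the
    // HttpPipelining option and depth of 5 come from the commit;
    // the base numbers and scaling rule are assumptions.
    #include <algorithm>
    #include <iostream>

    struct RequestLevels
    {
        int low_water;   // refill the request queue below this
        int high_water;  // stop issuing new requests above this
    };

    // Scale the non-pipelined levels by the pipeline depth so each
    // connection keeps its pipeline full without overcommitting.
    RequestLevels computeLevels(bool pipelining, int pipeline_depth)
    {
        const RequestLevels base{32, 64};   // assumed non-pipelined defaults
        if (!pipelining || pipeline_depth <= 1)
            return base;
        const int depth = std::min(pipeline_depth, 5);  // commit's depth of 5
        return RequestLevels{base.low_water * depth, base.high_water * depth};
    }

    int main()
    {
        const bool http_pipelining = true;  // the HttpPipelining debug option
        const RequestLevels levels = computeLevels(http_pipelining, 5);
        std::cout << "low: " << levels.low_water
                  << "  high: " << levels.high_water << '\n';
        return 0;
    }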
|
the change made for MAINT-2347. Large transfers still get
10 minutes. Add/update the to-do list and add some more info to
the FAQ in the Readme.
|
This really extended into client-side request throttling.
Moved it from llmeshrepository (which doesn't really want
to do connection management) into llcorehttp. It's now a
class option with a configurable rate. This still isn't the
right thing to do, as it creates coupling between the viewer
and services. Once we get to pipelining, the notion becomes
invalid.
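For illustration, a token bucket is one way a "class option with
configurable rate" could look; the class name, fields, and the
token-bucket choice itself are assumptions, not the viewer's code:

    // Minimal token-bucket sketch of a per-class request throttle
    // with a configurable rate. Everything here is illustrative.
    #include <algorithm>
    #include <chrono>

    class RequestThrottle
    {
        using Clock = std::chrono::steady_clock;

    public:
        explicit RequestThrottle(double requests_per_second)
            : mRate(requests_per_second),
              mTokens(requests_per_second),
              mLast(Clock::now())
        {}

        // Returns true if a request may be issued now; otherwise the
        // caller leaves the request queued and tries again later.
        bool tryAcquire()
        {
            const auto now = Clock::now();
            const double elapsed =
                std::chrono::duration<double>(now - mLast).count();
            mLast = now;
            // Accrue tokens at the configured rate, capped at one
            // second's worth so idle time can't build a huge burst.
            mTokens = std::min(mRate, mTokens + elapsed * mRate);
            if (mTokens >= 1.0)
            {
                mTokens -= 1.0;
                return true;
            }
            return false;
        }

    private:
        double mRate;
        double mTokens;
        Clock::time_point mLast;
    };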
|
Much improved. Unified the global and class options into a single
option list. Implemented static and dynamic setting paths as far
as possible. The dynamic path does require a packet/RPC, but otherwise
there's near unification. Dynamic modes can't get values back yet
due to the response/notifier scheme, but this doesn't bother me.
Flattened global and class options into simpler struct-like entities.
Setters/getters are available on these where needed (external APIs), but
code can otherwise fiddle directly when it knows what to do. Much
duplicated options/state removed from HttpPolicy. Comments cleaned up.
Threads are better described and consistently mentioned in the API docs.
Integration test extended for 503 responses with Retry-After headers.
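A rough sketch of the flattened, struct-like option shape with both
setting paths; every name here is illustrative, and only the
static/dynamic split and the packet/RPC requirement come from the commit:

    // Illustrative sketch of struct-like options with a direct
    // (static) path and a queued (dynamic) path. Not the real API.
    #include <functional>

    struct HttpPolicyOptions
    {
        long connection_limit = 8;
        long retry_limit      = 8;
        long timeout_seconds  = 30;
    };

    class OptionStore
    {
    public:
        // Static path: used before the worker thread starts, so code
        // can "fiddle directly" with the struct.
        HttpPolicyOptions & staticOptions() { return mOptions; }

        // Dynamic path: after startup, changes must cross threads, so
        // they travel as a request (the packet/RPC the commit mentions).
        void setDynamicOption(std::function<void(HttpPolicyOptions &)> mutator)
        {
            // The real design would enqueue this to the worker thread;
            // applied inline here to keep the sketch self-contained.
            mutator(mOptions);
        }

    private:
        HttpPolicyOptions mOptions;
    };

    int main()
    {
        OptionStore store;
        store.staticOptions().connection_limit = 12;            // static path
        store.setDynamicOption([](HttpPolicyOptions & o)
                               { o.retry_limit = 5; });         // dynamic path
        return 0;
    }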
|
The mesh repo is using three policy classes now: one for
large objects, one for GetMesh2 regions, one for
GetMesh regions. It also detects the presence
of the cap and uses the correct class. Class
initialization was cleaned up significantly in llappcorehttp
using data-directed code. Pulled in the changes to
HttpHeader done for sunshine-internal, then did a
refactoring pass on the header callback, which now
uses a unified approach to clean up and deliver
header information to all interested parties. Added
support for using Retry-After header information on
503 retries.
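The data-directed initialization amounts to replacing per-class setup
code with a table plus one loop. A minimal sketch under that reading;
the three classes mirror this commit's description, while the table
fields and values are invented for illustration:

    // Data-directed policy-class init: one loop over a table instead
    // of repeated per-class setup code. Fields/values are illustrative.
    #include <cstdio>

    struct ClassInitData
    {
        const char * name;
        int          connection_limit;
        long         timeout_seconds;
    };

    static const ClassInitData init_data[] =
    {
        { "mesh-large", 2, 600 },  // large objects: long transport timeout
        { "mesh2",      8,  60 },  // regions with the GetMesh2 cap
        { "mesh1",      8,  60 },  // legacy GetMesh regions
    };

    int main()
    {
        for (const ClassInitData & d : init_data)
        {
            // Real code would create the policy class and apply
            // options here; this just shows the table-driven shape.
            std::printf("class %-10s connections: %d timeout: %lds\n",
                        d.name, d.connection_limit, d.timeout_seconds);
        }
        return 0;
    }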
|
handle duplication code. Reviewed by Kelly
|
Added a second mesh class as well as an asset upload class.
Refactored initialization to use less code and more data to
get http started cleanly. Modified mesh to use the new
http class for large requests (>2MB for now). Added an additional
timeout setting to llcorehttp to distinguish connection timeout
from transport timeout; large asset downloads that may need
more time now use transport timeout values.
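The connection/transport split maps naturally onto libcurl's two
timeout knobs, CURLOPT_CONNECTTIMEOUT and CURLOPT_TIMEOUT, which are
real options; the URL and values below are illustrative, not the
viewer's actual settings:

    // Real libcurl options; illustrative URL and values.
    #include <curl/curl.h>

    int main()
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURL * handle = curl_easy_init();
        if (handle)
        {
            curl_easy_setopt(handle, CURLOPT_URL, "http://example.com/asset");
            // Connection timeout: how long the TCP/TLS connect may take.
            curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, 30L);
            // Transport timeout: how long the whole transfer may take;
            // large assets (>2MB in the commit) get a generous value.
            curl_easy_setopt(handle, CURLOPT_TIMEOUT, 600L);
            curl_easy_perform(handle);
            curl_easy_cleanup(handle);
        }
        curl_global_cleanup();
        return 0;
    }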
|
Initial work completed on Linux; moving over to Windows to do debugging
and refinement. This includes 5 of the 6 handlers based on existing
responders and use of llcorehttp for the mesh header fetch.
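For flavor, the responder-to-handler port roughly swaps a pair of
result/error callbacks for a single completion method. The sketch below
only approximates llcorehttp's handler interface with stand-in types;
treat all of it as illustrative:

    // Stand-in types approximating the llcorehttp handler shape;
    // not the library's real declarations.
    #include <cstdio>

    struct HttpResponse            // stand-in for the real response type
    {
        int getStatus() const { return mStatus; }
        int mStatus = 200;
    };
    using HttpHandle = void *;

    class HttpHandler              // stand-in completion interface
    {
    public:
        virtual ~HttpHandler() = default;
        virtual void onCompleted(HttpHandle handle, HttpResponse * response) = 0;
    };

    // Mesh-header fetch handler: one method replaces the old
    // responder's separate success/error callbacks.
    class MeshHeaderHandler : public HttpHandler
    {
    public:
        void onCompleted(HttpHandle, HttpResponse * response) override
        {
            if (response->getStatus() == 200)
                std::printf("mesh header received; parse and queue LODs\n");
            else
                std::printf("mesh header fetch failed: %d\n",
                            response->getStatus());
        }
    };

    int main()
    {
        MeshHeaderHandler handler;
        HttpResponse ok;
        handler.onCompleted(nullptr, &ok);
        return 0;
    }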
|
Bumped the default retry limit up from 5 to 8, which gives up to
15 seconds more dwell time should the viewer get a 503 or another
recoverable error on access.
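The 15-second figure is consistent with a doubling backoff capped at
5 seconds, under which retries 6 through 8 each wait the full cap. That
schedule is an assumption made to check the arithmetic, not the policy's
actual code:

    // Assumed schedule: delay doubles from 0.25s, capped at 5s.
    // Retries 6-8 then each wait 5s, so 8 vs 5 retries adds 15s.
    #include <algorithm>
    #include <cstdio>

    int main()
    {
        double total5 = 0.0, total8 = 0.0;
        for (int attempt = 1; attempt <= 8; ++attempt)
        {
            const double delay = std::min(0.25 * (1 << (attempt - 1)), 5.0);
            if (attempt <= 5) total5 += delay;
            total8 += delay;
        }
        std::printf("dwell with 5 retries: %.2fs, with 8: %.2fs, extra: %.2fs\n",
                    total5, total8, total8 - total5);   // extra: 15.00s
        return 0;
    }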
|
Add to-do list to _httpinternal.h to guide anyone who
wants to pitch in and help.
|
sort things out or use policy classes (eventually) to arrange low-
and high-priority traffic. Subjectively, I think this works better
in practice (as I haven't implemented a dynamic priority setter yet).
|
Think I have found the major factor that causes the Linksys WRT54G V5 to
fall over in testing scenarios: DNS. For some historical reason, we're
trying to use libcurl without any DNS caching. My implementation echoed
that and implemented it correctly, and I was seeing a DNS request per
HTTP request on the wire. The existing implementation tries to do the
same but has bugs: rather than going uncached, DNS data ends up cached,
with a lookup only once every few seconds. Once I started emulating the
bug, comms through the WRT became much, much more reliable.
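libcurl's DNS cache is controlled per handle with the real
CURLOPT_DNS_CACHE_TIMEOUT option: 0 disables caching entirely (one
lookup per request, which is what overwhelmed the WRT54G), while a
positive value keeps entries for that many seconds (60 is libcurl's
default). A minimal example with an illustrative URL:

    // Real libcurl option controlling the behavior described above.
    #include <curl/curl.h>

    int main()
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURL * handle = curl_easy_init();
        if (handle)
        {
            curl_easy_setopt(handle, CURLOPT_URL, "http://example.com/");
            // Cache DNS results rather than resolving on every request;
            // 0L here would reproduce the lookup-per-request behavior.
            curl_easy_setopt(handle, CURLOPT_DNS_CACHE_TIMEOUT, 60L);
            curl_easy_perform(handle);
            curl_easy_cleanup(handle);
        }
        curl_global_cleanup();
        return 0;
    }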
|