Age | Commit message | Author |
|
The big delta was converting the new texture debugger support code
to the new library. The viewer manifest should probably get an eyeball
before release.
|
|
Seems to be working correctly. Not certain this is the fastest possible way
to provide a std::streambuf interface but it's visually acceptable.
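A minimal sketch of one way to expose an in-memory buffer through a std::streambuf interface, assuming a read-only, contiguous buffer; the class name and usage are illustrative only, not the actual implementation:

    #include <cstddef>
    #include <streambuf>

    // Illustrative only: a read-only std::streambuf over a contiguous buffer.
    // setg() establishes the get area; underflow() reports EOF once exhausted.
    class BufferReadBuf : public std::streambuf
    {
    public:
        BufferReadBuf(const char * data, std::size_t len)
        {
            char * p(const_cast<char *>(data));
            setg(p, p, p + len);
        }

    protected:
        int_type underflow()
        {
            // No refill source in this sketch; the end of the buffer is EOF.
            return traits_type::eof();
        }
    };

    // Usage: wrap the buffer, then hand a std::istream to existing parsing code:
    //   BufferReadBuf sb(data, len);
    //   std::istream is(&sb);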
|
|
The only interesting thing in this changeset is the discovery that a sleep
in the fake HTTP server ties up tests. Need to thread that, or fail on
client disconnect, or something to speed it up and make it usable for
bigger test scenarios. But good enough for now...
|
|
<offset, length, fulllength>.
|
|
log on exit.
With much trial-and-error, cleaned up the banner on the texture console and made everything
mostly fit. Added global cache read, cache write and resource wait count events to the
console display to show whether the cache is working. On clean exit, emit a log line reporting
stats to the log file (intended for automated tests, maybe):
LLTextureFetch::endThread: CacheReads: 2618, CacheWrites: 117, ResWaits: 0, TotalHTTPReq: 117
|
|
LLProxy support, HttpOptions starting to work, HTTP resource waiting fixed.
Non-LLThread-based threads need to do some registration, or LLMutex locks taken out in these
threads will not work as expected (SH-3154). We'll get a better solution later; this fixes
some things for now. Tracing of operations is now supported, with global and per-request (via
HttpOptions) tracing levels of [0..3]. The 2 and 3 levels use libcurl's VERBOSE mode
combined with CURLOPT_DEBUGFUNCTION to stream high levels of detail into the log. *Very*
laggy but useful. Simple GET requests supported (no Range: header). Really just a
degenerate case of a ranged GET, but supplied an API anyway. Global option to use the
LLProxy interface to set up CURL handles for either socks5 or http proxy usage. This
isn't really the most encapsulated way to do this, but a better solution will have to
come later. The wantHeaders and tracing options are now supported in HttpOptions, giving
per-request controls. Big refactoring of the HTTP resource waiter in lltexturefetch.
What I was doing before wasn't correct. Instead, I'm implementing the resource wait
modeled on a semaphore (though not using system semaphores). So instead of having
a sequence like SEND_HTTP_REQ -> WAIT_HTTP_RESOURCE -> SEND_HTTP_REQ, we now
do WAIT_HTTP_RESOURCE -> WAIT_HTTP_RESOURCE2 (actual wait) -> SEND_HTTP_REQ. Works
well, but the prioritized filling of the corehttp library needs some performance
work later.
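A sketch of how the trace levels and proxy selection described above could map onto libcurl easy-handle options; the function, its parameters, and the logging target are assumptions, while CURLOPT_VERBOSE, CURLOPT_DEBUGFUNCTION, CURLOPT_PROXY and CURLOPT_PROXYTYPE are standard libcurl options:

    #include <cstdio>
    #include <curl/curl.h>

    // Illustrative debug callback: streams libcurl's verbose detail to the log.
    // Real code would route this through the viewer's logging; fprintf is a stand-in.
    static int trace_callback(CURL * /* handle */, curl_infotype type,
                              char * data, size_t size, void * /* userptr */)
    {
        std::fprintf(stderr, "[trace %d] %.*s", static_cast<int>(type),
                     static_cast<int>(size), data);
        return 0;
    }

    // Apply a per-request trace level (0..3) and optional proxy to an easy handle.
    // The function and its parameters are stand-ins, not the HttpOptions/LLProxy API.
    void configure_handle(CURL * handle, int trace_level,
                          const char * proxy_url, bool use_socks5)
    {
        if (trace_level >= 2)
        {
            // Levels 2 and 3 use VERBOSE plus a debug function for high detail.
            curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);
            curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION, trace_callback);
        }
        if (proxy_url)
        {
            curl_easy_setopt(handle, CURLOPT_PROXY, proxy_url);
            curl_easy_setopt(handle, CURLOPT_PROXYTYPE,
                             static_cast<long>(use_socks5 ? CURLPROXY_SOCKS5
                                                          : CURLPROXY_HTTP));
        }
    }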
|
|
The NORMAL range doesn't do any sleeping at all and so we'll
spin the core harder than we already are. Bring all idlers
into the same range.
|
|
206/content-range hack in xport.
Retry/response handling is decided in policy, so moved that there. Removed the special-case
206-without-content-range response in transport. Made this situation recognizable in the
API and let callers deal with it as needed.
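A hedged sketch of making the 206-without-Content-Range case visible to callers instead of patching it in the transport; the flag plumbing is an assumption, while CURLOPT_HEADERFUNCTION/CURLOPT_HEADERDATA and CURLINFO_RESPONSE_CODE are standard libcurl:

    #include <curl/curl.h>
    #include <strings.h>   // strncasecmp (POSIX; _strnicmp on Windows)

    // Illustrative header callback, registered with CURLOPT_HEADERFUNCTION and
    // CURLOPT_HEADERDATA: note whether a Content-Range header arrived.
    // The bool flag plumbing is a sketch, not the llcorehttp response object.
    static size_t header_callback(char * buffer, size_t size, size_t nitems, void * userdata)
    {
        const size_t len = size * nitems;
        bool * saw_content_range = static_cast<bool *>(userdata);
        static const char prefix[] = "Content-Range:";
        if (len >= sizeof(prefix) - 1
            && 0 == strncasecmp(buffer, prefix, sizeof(prefix) - 1))
        {
            *saw_content_range = true;
        }
        return len;
    }

    // After curl_easy_perform(), the caller can tell the cases apart:
    //   long status(0);
    //   curl_easy_getinfo(handle, CURLINFO_RESPONSE_CODE, &status);
    //   if (206 == status && ! saw_content_range)
    //   {
    //       // Report "partial content, extent unknown" to the caller
    //       // instead of patching it up inside the transport.
    //   }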
|
|
surprised me. Added a retry queue, similar to the ready queue, to the
policy object, sorted by retry time. Currently do five
retries (after the initial try) delayed by 0.25, 0.5, 1, 2 and 5
seconds. Removed the retry logic from the lltexturefetch module.
Upped the waiting time in the unit test for the retries. People
won't like this but tough, need tests.
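A sketch of the retry-queue idea, ordered by the time each entry becomes eligible again; the backoff table mirrors the delays above, but the entry layout, clock type, and names are illustrative:

    #include <functional>
    #include <queue>
    #include <vector>

    // Illustrative retry entry: a request becomes eligible again at mRetryTime.
    // The names and the absolute-seconds clock are assumptions, not the real types.
    struct RetryEntry
    {
        double   mRetryTime;   // absolute time (seconds) when the retry may run
        unsigned mAttempt;     // 1..5 after the initial try
        void *   mRequest;     // opaque request handle for the sketch

        bool operator>(const RetryEntry & rhs) const
        {
            return mRetryTime > rhs.mRetryTime;
        }
    };

    // Min-heap on retry time: the entry due soonest pops first.
    typedef std::priority_queue<RetryEntry,
                                std::vector<RetryEntry>,
                                std::greater<RetryEntry> > RetryQueue;

    // Backoff schedule from above: five retries after the initial try.
    static const double RETRY_DELAYS[] = { 0.25, 0.5, 1.0, 2.0, 5.0 };

    void schedule_retry(RetryQueue & queue, void * request, unsigned attempt, double now)
    {
        if (attempt >= 1 && attempt <= 5)
        {
            RetryEntry entry = { now + RETRY_DELAYS[attempt - 1], attempt, request };
            queue.push(entry);
        }
    }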
|
|
|
|
now avoiding doing HTTP fetches for read data. Not certain it's
completely correct but the difference is already significant.
|
|
Went through all the code and tried to document lock and thread usage
in the module. There's a huge comment block introducing all of this
at the beginning and I believe it's correct (though not quite complete).
Keep it updated, people. Added a new state, WAIT_HTTP_RESOURCE, that's
sort of a side-state of SEND_HTTP_REQ. If we hit a high-water mark
for HTTP requests, the extras are shunted to the new state. Once
levels fall to a low-water mark, we run through a wait list of UUIDs,
sort the valid ones by priority and release them for service. This
keeps the HTTP layer busy while leaving the active queue shallow enough
that requests can still be re-prioritized cheaply. The priority model
changed. The new state uses the PRIORITY_LOW mask, the old users
of _LOW are now at PRIORITY_NORMAL and sleepers woken up after an
external event are kicked off at PRIORITY_HIGH. This combination
along with the new state should avoid priority inversion and keep
things running without resorting to an infinite pipeline. New
state displays as "HTW" with green text in the texture console.
Request cancellation and worker run-down should now be more
correct, but this edge case may need more attention.
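Roughly the shape of the high/low-water release pass described above, under assumed limits and with placeholder types; the real code works with texture UUIDs and the fetch worker's states rather than these stand-ins:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Placeholder waiter record; mID stands in for the texture UUID.
    struct WaitingRequest
    {
        unsigned mPriority;
        int      mID;
    };

    static bool higher_priority(const WaitingRequest & a, const WaitingRequest & b)
    {
        return a.mPriority > b.mPriority;
    }

    static const std::size_t HTTP_HIGH_WATER = 40;   // assumed example limits
    static const std::size_t HTTP_LOW_WATER  = 20;

    void release_http_waiters(std::vector<WaitingRequest> & wait_list,
                              std::size_t & active_http_count)
    {
        // Only refill once the active count has drained to the low-water mark.
        if (active_http_count > HTTP_LOW_WATER)
            return;

        // Highest priority first, so the most important waiters go out first.
        std::sort(wait_list.begin(), wait_list.end(), higher_priority);

        // Release waiters (WAIT_HTTP_RESOURCE -> SEND_HTTP_REQ) up to the high-water mark.
        while (! wait_list.empty() && active_http_count < HTTP_HIGH_WATER)
        {
            wait_list.erase(wait_list.begin());
            ++active_http_count;
        }
    }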
|
|
Implemented the first global policy definitions, supporting SSL CA certificate configuration
for https: operations. Fixed HTTP 206 status handling to match what is currently
being done by grid services and to lay a foundation for fixes that will come in response
to ER-1824. More libcurl CURLOPT options are set on easy handles to do peer verification
in the traditional way. HTTP POST is working and now reports asset metrics back to the
grid for the viewer's asset system. This uses LLSD, so that is also showing as compatible
with the new library.
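A sketch of what applying the SSL portion of a global policy to an easy handle might look like; the function and the source of the CA bundle path are assumptions, while the CURLOPT settings are libcurl's standard peer-verification options:

    #include <curl/curl.h>

    // Sketch only: apply the SSL part of a global policy to an easy handle.
    void apply_ssl_policy(CURL * handle, const char * ca_bundle_path)
    {
        // Traditional peer verification: validate the certificate chain and
        // check that the certificate matches the requested host name.
        curl_easy_setopt(handle, CURLOPT_SSL_VERIFYPEER, 1L);
        curl_easy_setopt(handle, CURLOPT_SSL_VERIFYHOST, 2L);

        if (ca_bundle_path)
        {
            curl_easy_setopt(handle, CURLOPT_CAINFO, ca_bundle_path);
        }
    }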
|
|
fixing priorities.
|
|
|
|
chunking data. Remove the stateful use of a seek pointer so
that shared read is possible (though maybe not interesting).
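A small sketch contrasting with a seek-pointer design: reads take an explicit offset and touch no shared position, so concurrent readers stay independent; the class and members are placeholders, not the real buffer type:

    #include <cstddef>
    #include <cstring>

    // Placeholder buffer type; read() is stateless, keyed by explicit offset.
    class SharedBuffer
    {
    public:
        SharedBuffer(const char * data, std::size_t size)
            : mData(data), mSize(size)
        {}

        // Read up to 'len' bytes starting at 'offset'; returns bytes copied.
        // No member seek position is consulted or updated.
        std::size_t read(std::size_t offset, void * dst, std::size_t len) const
        {
            if (offset >= mSize)
                return 0;
            const std::size_t avail = mSize - offset;
            const std::size_t count = (len < avail) ? len : avail;
            std::memcpy(dst, mData + offset, count);
            return count;
        }

    private:
        const char * mData;
        std::size_t  mSize;
    };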
|
|
what normal requests do...
|
|
|
|
Identified and reacted to the priority inversion problem we
have in texturefetch. Includes the introduction of a priority_queue
for the requests that are ready. Started some parameterization in
anticipation of having policy_class everywhere. Removed _assert.h,
which isn't really needed in the indra codebase. Implemented an async
setPriority request (which I hope I can get rid of eventually, along
with all priorities in this library). Converted to using unsigned
int for priority rather than float. Implemented POST and did
groundwork for PUT.
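A sketch of a ready queue keyed by the new unsigned integer priority (replacing the old float priorities); the struct and comparator names are illustrative, not the library's types:

    #include <queue>
    #include <vector>

    // Placeholder request record; mOp stands in for the queued operation object.
    struct ReadyRequest
    {
        unsigned     mPriority;
        const void * mOp;
    };

    struct LowerPriority
    {
        bool operator()(const ReadyRequest & a, const ReadyRequest & b) const
        {
            // priority_queue pops the "largest", so this yields highest priority first.
            return a.mPriority < b.mPriority;
        }
    };

    typedef std::priority_queue<ReadyRequest,
                                std::vector<ReadyRequest>,
                                LowerPriority> ReadyQueue;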
|
|
|
|
This is the first functional viewer pass with the HTTP work of the texture fetch
code performed by the llcorehttp library. Not exactly a 'drop-in' replacement
but a work-alike with some changes (e.g. handler notification in consumer
thread versus responder notification in worker thread).
This also includes some temporary changes in the priority scheme to prevent
the kind of priority inversion found in VWR-28996. The scheme used here does
provide liveness, if not optimal responsiveness or order of operations.
The llcorehttp library at this point is far from performing optimally.
Its worker thread is making relatively poor use of the cycles it gets, and
it doesn't idle or sleep intelligently yet. This early integration step
helps shake out the interfaces; implementation niceties will be covered
soon.
|
|
as in release
|
|
on and off.
|
|
|
|
|
|
|
|
|
|
HTTP when all objects loading are done.
|
|
As a viewer architect, I would like to understand how fast each of the components of the texture pipeline can run in isolation
|
|
|
|
|
|
Reviewed by Simon
|
|
|
|
|
|
|
|
read latency
|
|
occlusion queries from previous frame are still pending and perform texture decode work.
|
|
|
|
|
|
quit.
|
|
|
|
|
|
|
|
accessed through the static LLThread::tldata().
Currently this object contains two (public) thread-local
objects: a LLAPRRootPool and a LLVolatileAPRPool.
The first is the general memory pool used by this thread
(and this thread alone), while the second is intended
for short lived memory allocations (needed for APR).
The advantage of not mixing those two is that the latter
is used most frequently and, as a result of its nature,
can be destroyed and reconstructed on a "regular" basis.
This patch adds LLAPRPool (completely replacing the old one),
which is a wrapper around apr_pool_t* and has complete
thread-safety checking.
Whenever an apr call requires memory for some resource,
a memory pool in the form of an LLAPRPool object can
be created with the same lifetime as that resource,
assuring cleanup of the memory no sooner, but also
not much later, than the lifetime of the resource
that needs the memory.
Many, many function calls and constructors had the
pool parameter simply removed (it is no longer the
concern of the developer; if you don't write code
that actually makes a libapr call, then you are no
longer bothered with memory pools at all).
However, I kept the notion of short-lived and
long-lived allocations alive (see my remark in
the jira here: https://jira.secondlife.com/browse/STORM-864?focusedCommentId=235356&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-235356
), which requires that the LLAPRFile API
allow the user to specify how long they
think a file will stay open. By choosing
'short_lived' as the default for the constructor
that immediately opens a file, the number of
instances where this needs to be specified is
drastically reduced, however (obviously, any
automatic LLAPRFile is short-lived).
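A minimal sketch of the wrapper idea, assuming only plain APR calls; the real LLAPRPool also carries the thread-safety checking and pool parenting described above, which are omitted here:

    #include <apr_pools.h>

    // RAII sketch: tie an apr_pool_t's lifetime to C++ scope, so the pool is
    // destroyed no sooner, and not much later, than the resource that needed it.
    class ScopedAprPool
    {
    public:
        explicit ScopedAprPool(apr_pool_t * parent = 0)
            : mPool(0)
        {
            apr_pool_create(&mPool, parent);
        }

        ~ScopedAprPool()
        {
            if (mPool)
                apr_pool_destroy(mPool);
        }

        apr_pool_t * operator()() const { return mPool; }

    private:
        // Non-copyable: the pool has a single owner.
        ScopedAprPool(const ScopedAprPool &);
        ScopedAprPool & operator=(const ScopedAprPool &);

        apr_pool_t * mPool;
    };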
***
Addressed Boroondas' remarks in https://codereview.secondlife.com/r/99/
regarding (doxygen) comments. This patch effectively only changes comments.
Includes some 'merge' stuff that ended up in llvocache.cpp
(which started as a bug fix but now only results in a cleanup).
***
Added the comment 'The use of apr_pool_t is OK here'
on every line where apr_pool_t
is correctly being used.
This should make it easier to spot (future) errors
where someone started to use apr_pool_t; you can
just grep all sources for 'apr_pool_t' and immediately
see where it's being used where LLAPRPool should
have been used.
Note that merging this patch is very easy:
If there are no other uses of apr_pool_t in the code
(one grep) and it compiles, then it will work.
***
Second Merge (needed to remove 'delete mCreationMutex'
from LLImageDecodeThread::~LLImageDecodeThread).
***
Added back #include <apr_pools.h>.
Apparently that is needed on libapr version 1.2.8,
the version used by Linden Lab, for calls to
apr_queue_*. This is a bug in libapr (we also
include <apr_queue.h>) that is fixed in (at least) 1.3.7.
Note that 1.2.8 is VERY old. Even 1.3.x is old.
***
License fixes (GPL -> LGPL) and a typo in comments.
Addresses merov's comments on the review board.
***
Added Merov's compile fixes for windows.
|
|
|
|
Viewer attempting to load precached images in file types that are not being used.)
|
|
|
|
The Linux startup crash appears to be due to static/global C++ init of LLAtomic
types. The initializer with an explicit value makes some runtime calls, and it
looks like these assume, at least on Linux, that apr_initialize() has been
called. So moved the static POST count to a member and provided accessors
and increment/decrement. The command queue was built on a pointer to a class
in an anonymous namespace, and that's not quite valid. Made it a nested
class (really a nested forward declaration) while keeping the derived
classes in the anonymous namespace.
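Roughly the shape of that fix, as a hedged sketch with placeholder names: the counter lives in an object constructed after apr_initialize() has run and is reached through accessors rather than a namespace-scope static:

    // Placeholder sketch: the POST counter as a member with accessors.
    class MetricsReporter
    {
    public:
        MetricsReporter() : mPostCount(0) {}

        int  getPostCount() const { return mPostCount; }
        void incrPostCount()      { ++mPostCount; }
        void decrPostCount()      { --mPostCount; }

    private:
        // The real member is an atomic/LLAtomic type; a plain int keeps
        // this sketch self-contained.
        int mPostCount;
    };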
|
|
A legacy of the LLSD::Map-to-LLSD::Array conversion: this ended up
performing an erase on the array rather than the map, taking out
all the regions. So there *was* a metrics report, it was just empty
of regions. Fixed that, scanned for more array/map problems, and
corrected the data type for duration sorts (should have been Real).
|
|
|