Age | Commit message | Author |
|
mCoroWaitList covers all assets, not just landmarks
|
Looks like pollTick tried to call an already dead process
|
mCoroWaitList was introduced to prevent an assertion-failure crash:
LLCoprocedureManager never expects the LLCoprocedurePool::mPendingCoprocs
queue to fill. The queue limit was arbitrarily set to 4096 some years ago, but
in practice LLViewerAssetStorage can post far more requests than that.
LLViewerAssetStorage checked whether the target LLCoprocedureManager pool's
queue looked close to full and, if so, posted the pending request to its
mCoroWaitList instead. But then it had to override the base LLAssetStorage
method checkForTimeouts() to continually check whether pending tasks could be
moved from mCoroWaitList to LLCoprocedureManager.
A simpler solution is to enlarge LLCoprocedureManager::DEFAULT_QUEUE_SIZE, the
upper limit on mPendingCoprocs. Since mCoroWaitList was an unlimited queue,
making DEFAULT_QUEUE_SIZE "very large" does not increase the risk of runaway
memory consumption.
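The pattern this replaces is roughly the following sketch (hypothetical names,
types and limits, not the viewer's actual LLViewerAssetStorage or
LLCoprocedureManager code): requests overflow into an unbounded wait list when
the bounded queue looks nearly full, and a periodic tick drains the wait list
back into the queue.

    // Hypothetical sketch of the overflow/wait-list pattern described above;
    // names and limits are illustrative, not the viewer's actual API.
    #include <cstddef>
    #include <deque>
    #include <functional>
    #include <queue>

    class RequestPoster
    {
    public:
        using Request = std::function<void()>;

        void post(Request req)
        {
            // Old approach: if the bounded queue looks close to full, park the
            // request on an unbounded wait list instead of enqueueing it.
            if (mPending.size() + MARGIN >= QUEUE_LIMIT)
            {
                mWaitList.push_back(std::move(req));
                return;
            }
            mPending.push(std::move(req));
        }

        // Old approach: called periodically (from something like
        // checkForTimeouts()) to move parked requests back into the queue.
        void drainWaitList()
        {
            while (!mWaitList.empty() && mPending.size() < QUEUE_LIMIT)
            {
                mPending.push(std::move(mWaitList.front()));
                mWaitList.pop_front();
            }
        }

    private:
        // Simpler approach per this commit: make the queue limit "very large"
        // and drop the wait list plus the periodic drain altogether.
        static constexpr std::size_t QUEUE_LIMIT = 4096;
        static constexpr std::size_t MARGIN = 16;
        std::queue<Request> mPending;
        std::deque<Request> mWaitList;
    };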
|
This is a fix for: https://jira.secondlife.com/browse/BUG-230616
|
The unsigned index arithmetic was problematic in that case.
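For context, a generic illustration of the hazard (not the code touched by
this commit): once an index is unsigned, "stepping back past zero" silently
wraps to a huge value.

    // Generic illustration of unsigned index wraparound.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v{1, 2, 3};
        std::size_t idx = 0;

        // Stepping "one back" from 0 wraps to SIZE_MAX instead of -1, so
        // expressions like (idx - 1) % v.size() land on the wrong element, and
        // a reverse loop such as `for (std::size_t i = n - 1; i >= 0; --i)`
        // never terminates because the condition is always true.
        std::cout << idx - 1 << '\n';              // SIZE_MAX
        std::cout << (idx - 1) % v.size() << '\n'; // not the expected 2
    }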
|
Since LLSDSerialize::SIZE_UNLIMITED is negative, passing that through unsigned
size_t parameters could result in peculiar behavior.
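A quick illustration of the hazard in generic C++ (the parse() function below
is a hypothetical stand-in, not the viewer's parser API): a negative sentinel
silently becomes an enormous unsigned value when it crosses a size_t
parameter.

    // A negative "unlimited" sentinel passed through an unsigned size_t
    // parameter becomes a huge positive value instead of meaning "no limit".
    #include <cstddef>
    #include <iostream>

    // Hypothetical stand-in for a parser that takes a byte limit as size_t.
    void parse(std::size_t max_bytes)
    {
        std::cout << "max_bytes = " << max_bytes << '\n';
    }

    int main()
    {
        const int size_unlimited = -1;   // a negative sentinel, as described above
        parse(size_unlimited);           // prints 18446744073709551615 on a
                                         // 64-bit platform, not "unlimited"
    }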
|
and use it to replace dubious loops in asLLSD() and trimEmpty().
|
When sending multiple LEAP packets in the same file (for testing convenience),
use a length prefix instead of delimiting with '\n'. Now that we allow a
serialization format that includes an LLSD format header (e.g.
"<?llsd/binary?>"), '\n' is part of the packet content. In fact, since we are
testing binary LLSD, no delimiter can be guaranteed not to appear in the
packet content.
Using a length prefix also lets us pass a specific max_bytes to the subject
C++ LLSD parser.
Make llleap_test.cpp use the new freestanding Python llsd package when
available.
Update the Python-side LEAP protocol code to work directly with the encoded
bytes stream, avoiding bytes<->str encoding and decoding, which breaks binary
LLSD.
Make LLSDSerialize::deserialize() recognize the LLSD format header case-
insensitively: Python emits and checks for "llsd/binary", while LLSDSerialize
emits and checks for "LLSD/Binary". Once a header is recognized, pass a
corrected max_bytes to the specific parser.
Make deserialize() more careful about the no-header case: preserve '\n' in the
content. Introduce debugging code (disabled) because the case is a little
tricky to recreate.
Revert the LLLeap child process stdout parser from LLSDSerialize::deserialize()
back to the specific LLSDNotationParser(), since at present the generic parser
fails one of LLLeap's integration tests for reasons that remain mysterious.
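A minimal sketch of length-prefixed framing of the kind described above (the
on-the-wire details here, a decimal byte count and ':' separator, are
illustrative assumptions rather than the actual LEAP test format): because the
reader knows the exact byte count up front, the payload may contain any bytes,
including '\n', and the same count can be handed to the parser as max_bytes.

    // Illustrative length-prefixed framing: "<decimal length>:<payload bytes>".
    #include <cstddef>
    #include <iostream>
    #include <sstream>
    #include <string>

    void writePacket(std::ostream& out, const std::string& payload)
    {
        out << payload.size() << ':' << payload;
    }

    bool readPacket(std::istream& in, std::string& payload)
    {
        std::size_t length = 0;
        char sep = 0;
        if (!(in >> length) || !in.get(sep) || sep != ':')
            return false;                      // malformed prefix or EOF
        payload.resize(length);
        in.read(&payload[0], length);          // read exactly 'length' bytes
        return static_cast<std::size_t>(in.gcount()) == length;
    }

    int main()
    {
        // Several packets in one stream, one containing '\n' and binary bytes.
        std::stringstream file;
        writePacket(file, "<?llsd/binary?>\n\x01\x02\x03");
        writePacket(file, "second packet");

        std::string pkt;
        while (readPacket(file, pkt))
            std::cout << "packet of " << pkt.size() << " bytes\n";
    }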
|
before trying to create symlink.
|
Since parsing binary LLSD is faster than parsing notation LLSD, send data from
the viewer to the LEAP plugin child process's stdin in binary instead of
notation.
Similarly, instead of parsing the child process's stdout using specifically a
notation parser, use the generic LLSDSerialize::deserialize() LLSD parser.
Add more LLSDSerialize Python compatibility tests.
|
Cleaner reinit and termination.
|
Absent a header from LLSDSerialize::serialize(), make deserialize()
distinguish between XML and notation by recognizing an initial '<'.
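The heuristic amounts to something like the following sketch (illustrative
only, not the actual LLSDSerialize::deserialize() code): peek at the first
meaningful character and pick a parser, since XML serializations start with
'<' and notation serializations do not.

    // Illustrative header-less format sniffing: '<' means XML, otherwise
    // assume notation.
    #include <cctype>
    #include <istream>

    enum class GuessedFormat { XML, NOTATION };

    GuessedFormat sniffFormat(std::istream& in)
    {
        // Skip leading whitespace without consuming the first real character.
        std::istream::int_type c;
        while ((c = in.peek()) != std::istream::traits_type::eof()
               && std::isspace(static_cast<unsigned char>(c)))
        {
            in.get();
        }
        return (c == '<') ? GuessedFormat::XML : GuessedFormat::NOTATION;
    }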
|
LLSDSerialize::serialize() emits a header string, e.g. "<? llsd/notation ?>"
for notation format. Until now, LLSDSerialize::deserialize() has required that
header to properly decode the input stream.
But none of LLSDBinaryFormatter, LLSDXMLFormatter or LLSDNotationFormatter
emits that header itself. Nor do any of the Python llsd.format_binary(),
format_xml() or format_notation() functions. Until now, you could not use
LLSDSerialize::deserialize() to parse an arbitrary-format LLSD stream
serialized by anything but LLSDSerialize::serialize().
Change LLSDSerialize::deserialize() so that if no header is recognized,
instead of failing, it attempts to parse as notation. Add tests to exercise
this case.
The tricky part about this processing is that deserialize() necessarily reads
some number of bytes from the input stream first, to try to recognize the
header. If it fails to do so, it must prepend the bytes it has already read to
the rest of the input stream since they're probably the beginning of the
serialized data.
To support this use case, introduce cat_streambuf, a std::streambuf subclass
that (virtually) concatenates other std::streambuf instances. When read by a
std::istream, the sequence of underlying std::streambufs appears to the
consumer as a single continuous stream.
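A minimal sketch of the idea (illustrative only, not the viewer's actual
cat_streambuf or its interface): a std::streambuf whose underflow() walks a
list of underlying streambufs, so a std::istream constructed over it reads the
already-consumed header bytes followed by the rest of the input as one
continuous stream.

    // Illustrative concatenating streambuf: presents a sequence of other
    // streambufs as one continuous stream.
    #include <cstddef>
    #include <streambuf>
    #include <utility>
    #include <vector>

    class concat_streambuf : public std::streambuf
    {
    public:
        explicit concat_streambuf(std::vector<std::streambuf*> sources)
            : mSources(std::move(sources)) {}

    protected:
        // Called when the get area is exhausted: pull one more character from
        // the current underlying streambuf, advancing to the next on EOF.
        int_type underflow() override
        {
            while (mIndex < mSources.size())
            {
                int_type c = mSources[mIndex]->sbumpc();
                if (c != traits_type::eof())
                {
                    mChar = traits_type::to_char_type(c);
                    setg(&mChar, &mChar, &mChar + 1);  // one-character get area
                    return c;
                }
                ++mIndex;   // current source exhausted; move on to the next
            }
            return traits_type::eof();
        }

    private:
        std::vector<std::streambuf*> mSources;
        std::size_t mIndex = 0;
        char mChar = 0;
    };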
|
There might be other causes for sendRenderInfoToRegion and
getRenderInfoFromRegion crashing, but in some cases the viewer was shutting
down.
|
Close stale PRs
|
LLViewerTexture::mNeedsCreateTexture needs to be an atomic bool since
it is written both in the main thread and in the GL image worker thread.
We can now enable threaded bump map creation as a result of this fix.
I have read the CLA Document and I hereby sign the CLA
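The underlying pattern, as a generic sketch (not the actual LLViewerTexture
code): a flag written on a worker thread and read on the main thread must be
atomic, or the program has a data race.

    // Generic illustration of the fix: a completion flag shared between a
    // worker thread and the main thread must be atomic.
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    std::atomic<bool> needsCreateTexture{false};   // a plain bool here is a data race

    int main()
    {
        std::thread worker([] {
            // ... image decode work happens here ...
            needsCreateTexture.store(true, std::memory_order_release);
        });

        // Main thread polls the flag and creates the GL texture when it flips.
        while (!needsCreateTexture.load(std::memory_order_acquire))
            std::this_thread::sleep_for(std::chrono::milliseconds(1));

        std::cout << "create GL texture on main thread\n";
        worker.join();
    }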
|
Fix a thread safety issue in the GL image worker.
|
FPE_NOOP at "idx = (idx + 1 ) % (S32)mTabList.size();"
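The crash is consistent with an integer modulo by zero when mTabList is empty
(an assumption; the subject line does not spell out the cause). A generic
guard looks like the following sketch, not the actual tab-container code:

    // Generic sketch: guard a wrap-around increment against an empty list,
    // assuming the FPE comes from `% 0` when the list size is zero.
    #include <vector>

    int nextIndex(int idx, const std::vector<int>& tabList)
    {
        if (tabList.empty())
            return -1;                              // nothing to cycle through
        return (idx + 1) % static_cast<int>(tabList.size());
    }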
|
Update message template URL after move to GitHub
|
Distance detail ctrl; update slider text correctly
|