# Conflicts:
# indra/llcommon/tests/llsdserialize_test.cpp
|
Newer C++ compilers have different semantics around LLSDArray's special copy
constructor, which was essential to proper LLSD nesting. In short, we can no
longer trust LLSDArray to behave correctly. Now that we have variadic
functions, get rid of LLSDArray and replace every reference with llsd::array().
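As a rough before/after illustration (a minimal sketch; header names are assumed and the values are arbitrary):

    #include "llsd.h"
    #include "llsdutil.h"   // assumed home of the variadic llsd::array() helper

    // Old style: LLSDArray's operator() appended elements, and nesting relied on
    // its special copy constructor -- the behavior newer compilers no longer honor.
    LLSD old_style = LLSDArray("a")("b")(LLSDArray(1)(2));

    // New style: llsd::array() builds the same nested array variadically, with no
    // dependence on copy-constructor semantics.
    LLSD new_style = llsd::array("a", "b", llsd::array(1, 2));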
|
# Conflicts:
# indra/cmake/CMakeLists.txt
# indra/llcommon/llsdserialize.cpp
# indra/llcommon/llsdserialize.h
# indra/llcommon/tests/llleap_test.cpp
# indra/newview/llfilepicker.h
# indra/newview/llfilepicker_mac.h
# indra/newview/llfilepicker_mac.mm
# indra/newview/skins/default/xui/en/strings.xml
|
# Conflicts:
# indra/cmake/CMakeLists.txt
# indra/newview/skins/default/xui/es/floater_tools.xml
|
For work queues that don't need timestamped tasks, eliminate the overhead of a
priority queue ordered by timestamp. Timestamped task support moves to
WorkSchedule. WorkQueue is a simpler queue that just waits for work.
Both WorkQueue and WorkSchedule can be accessed via the new WorkQueueBase API. Of
course the WorkQueueBase API doesn't deal with timestamps, but a WorkSchedule
can be accessed directly to post timestamped tasks and then serviced normally
(e.g. by a ThreadPool) to run them.
Most ThreadPool functionality migrates to the new ThreadPoolBase class, with the
template subclass ThreadPoolUsing<WorkQueue> or ThreadPoolUsing<WorkSchedule>
used depending on need. ThreadPool is now an alias for ThreadPoolUsing<WorkQueue>.
Importantly, ThreadPoolUsing::getQueue() delivers a reference to the specific
queue subclass type, so you can post timestamped tasks on a queue retrieved
from ThreadPoolUsing<WorkSchedule>::getQueue().
Since ThreadPool is no longer a simple class but an alias for a particular
template specialization, introduce threadpool_fwd.h to forward-declare it.
Recast workqueue_test.cpp to exercise WorkSchedule, since some of the tests
are time-based. A future todo would be to exercise each applicable test with
both WorkQueue and WorkSchedule.
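A hedged usage sketch of the resulting shape (constructor arguments, start(), header names and the TimePoint alias are assumptions drawn from the description above; the point is the static type returned by getQueue()):

    #include "threadpool.h"   // assumed header locations
    #include "workqueue.h"
    #include <chrono>

    void demo()
    {
        LL::ThreadPoolUsing<LL::WorkSchedule> pool("General", 4);  // name, thread count (assumed)
        pool.start();

        // getQueue() returns the concrete WorkSchedule&, not a WorkQueueBase&,
        // so the timestamped post() overload is visible:
        auto& sched = pool.getQueue();
        sched.post([]{ /* ordinary task */ });
        sched.post(LL::WorkSchedule::TimePoint::clock::now() + std::chrono::seconds(5),
                   []{ /* deferred task: runs no sooner than 5 seconds from now */ });
    }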
|
When sending multiple LEAP packets in the same file (for testing convenience),
use a length prefix instead of delimiting with '\n'. Now that we allow a
serialization format that includes an LLSD format header (e.g.
"<?llsd/binary?>"), '\n' is part of the packet content. But in fact, testing
binary LLSD means we can't pick any delimiter guaranteed not to appear in the
packet content.
Using a length prefix also lets us pass a specific max_bytes to the subject
C++ LLSD parser.
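A minimal sketch of consuming one such packet, assuming the prefix is a decimal byte count followed by a delimiter such as ':' (read_packet is a hypothetical helper, not the actual test code):

    #include <istream>
    #include "llsd.h"
    #include "llsdserialize.h"

    LLSD read_packet(std::istream& in)
    {
        size_t length = 0;
        char sep = 0;
        in >> length >> sep;   // e.g. "123:" -- the exact prefix syntax is an assumption
        LLSD data;
        // The known length doubles as max_bytes for the parser, so it cannot read
        // past the packet boundary even when the payload itself contains '\n'.
        LLSDSerialize::deserialize(data, in, length);
        return data;
    }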
Make llleap_test.cpp use the new freestanding Python llsd package when available.
Update the Python-side LEAP protocol code to work directly with the encoded byte
stream, avoiding bytes<->str encoding and decoding, which breaks binary LLSD.
Make LLSDSerialize::deserialize() recognize the LLSD format header case-
insensitively: Python emits and checks for "llsd/binary", while LLSDSerialize
emits and checks for "LLSD/Binary". Once any of the headers is recognized,
pass a corrected max_bytes to the specific parser.
Make deserialize() more careful about the no-header case: preserve '\n' in
content. Introduce debugging code (disabled) because it's a little tricky to
recreate.
Revert LLLeap child process stdout parser from LLSDSerialize::deserialize() to
the specific LLSDNotationParser(), as at present: the generic parser fails one
of LLLeap's integration tests for reasons that remain mysterious.
|
Since parsing binary LLSD is faster than parsing notation LLSD, send data from
the viewer to the LEAP plugin child process's stdin in binary instead of
notation.
Similarly, instead of parsing the child process's stdout using specifically a
notation parser, use the generic LLSDSerialize::deserialize() LLSD parser.
Add more LLSDSerialize Python compatibility tests.
|
Absent a header from LLSDSerialize::serialize(), make deserialize()
distinguish between XML and notation by recognizing an initial '<'.
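A sketch of that dispatch (choose_parser is a hypothetical helper; the real check lives inside LLSDSerialize::deserialize()):

    #include <istream>
    #include "llpointer.h"
    #include "llsdserialize.h"

    LLPointer<LLSDParser> choose_parser(std::istream& istr)
    {
        if (istr.peek() == '<')
            return new LLSDXMLParser();      // serialized XML always begins with a tag
        return new LLSDNotationParser();     // anything else is treated as notation
    }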
|
LLSDSerialize::serialize() emits a header string, e.g. "<? llsd/notation ?>"
for notation format. Until now, LLSDSerialize::deserialize() has required that
header to properly decode the input stream.
But none of LLSDBinaryFormatter, LLSDXMLFormatter or LLSDNotationFormatter
emit that header themselves. Nor do any of the Python llsd.format_binary(),
format_xml() or format_notation() functions. Until now, you could not use
LLSDSerialize::deserialize() to parse an arbitrary-format LLSD stream serialized by
anything but LLSDSerialize::serialize().
Change LLSDSerialize::deserialize() so that if no header is recognized,
instead of failing, it attempts to parse as notation. Add tests to exercise
this case.
The tricky part about this processing is that deserialize() necessarily reads
some number of bytes from the input stream first, to try to recognize the
header. If it fails to do so, it must prepend the bytes it has already read to
the rest of the input stream since they're probably the beginning of the
serialized data.
To support this use case, introduce cat_streambuf, a std::streambuf subclass
that (virtually) concatenates other std::streambuf instances. When read by a
std::istream, the sequence of underlying std::streambufs appears to the
consumer as a single continuous stream.
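A minimal sketch of the idea (not the production class; buffer size and ownership policy are arbitrary here): refill from the next underlying source whenever the current one runs dry.

    #include <streambuf>
    #include <vector>

    class cat_streambuf_sketch : public std::streambuf
    {
        std::vector<std::streambuf*> mSources;  // non-owning, for brevity
        std::vector<char> mBuffer;
    public:
        explicit cat_streambuf_sketch(std::vector<std::streambuf*> sources)
            : mSources(std::move(sources)), mBuffer(1024) {}
    protected:
        int_type underflow() override
        {
            while (!mSources.empty())
            {
                auto read = mSources.front()->sgetn(
                    mBuffer.data(), static_cast<std::streamsize>(mBuffer.size()));
                if (read > 0)
                {
                    // Expose the freshly read bytes as the get area.
                    setg(mBuffer.data(), mBuffer.data(), mBuffer.data() + read);
                    return traits_type::to_int_type(*gptr());
                }
                mSources.erase(mSources.begin());  // current source drained: move on
            }
            return traits_type::eof();
        }
    };

Wrapped in a std::istream, the already-read probe bytes (held in, say, a std::stringbuf) followed by the original stream's buffer then read back as one continuous sequence.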
|
Always search for python3[.exe] instead of plain 'python'. macOS Monterey no
longer bundles Python 2 at all.
Explicitly make PYTHON_EXECUTABLE a cached value so if the user edits it in
CMakeCache.txt, it won't be overwritten by indra/cmake/Python.cmake.
Do NOT set DYLD_LIBRARY_PATH for test executables! That has Bad Effects, as
discussed in https://stackoverflow.com/q/73418423/5533635. Instead, create
symlinks from build-mumble/sharedlibs/Resources -> Release/Resources and from
build-mumble/test/Resources -> ../sharedlibs/Release/Resources. For test
executables in sharedlibs/RelWithDebInfo and test/RelWithDebInfo, this
supports our dylibs' baked-in load path @executable_path/../Resources. That
load path assumes running in a standard app bundle (which the viewer in fact
does), but we've been avoiding creating an app bundle for every test program.
These symlinks allow us to continue doing that while avoiding
DYLD_LIBRARY_PATH.
Add indra/llcommon/apply.h. The LL::apply() function and its wrapper macro
VAPPLY were very useful in diagnosing the problem.
Tweak llleap_test.cpp. This source was modified extensively for diagnostic
purposes; these are the small improvements that remain.
|
# Conflicts:
# autobuild.xml
# indra/cmake/LLCommon.cmake
# indra/llcommon/CMakeLists.txt
# indra/llrender/llgl.cpp
# indra/newview/llappviewer.cpp
# indra/newview/llface.cpp
# indra/newview/llflexibleobject.cpp
# indra/newview/llvovolume.cpp
|
# Conflicts:
# autobuild.xml
# doc/contributions.txt
# indra/cmake/GLOD.cmake
# indra/llcommon/tests/llprocess_test.cpp
# indra/newview/VIEWER_VERSION.txt
# indra/newview/lldrawpoolavatar.cpp
# indra/newview/llfloatermodelpreview.cpp
# indra/newview/llmodelpreview.cpp
# indra/newview/llviewertexturelist.cpp
# indra/newview/llvovolume.cpp
# indra/newview/viewer_manifest.py
|
# Conflicts:
# autobuild.xml
# doc/contributions.txt
# indra/cmake/GLOD.cmake
# indra/llcommon/tests/llprocess_test.cpp
# indra/newview/VIEWER_VERSION.txt
# indra/newview/lldrawpoolavatar.cpp
# indra/newview/llfloatermodelpreview.cpp
# indra/newview/llmodelpreview.cpp
# indra/newview/llviewertexturelist.cpp
# indra/newview/llvovolume.cpp
# indra/newview/viewer_manifest.py
|
This changeset makes it possible to build the Second Life viewer using
Python 3. It is designed to be used with an equivalent Autobuild branch
so that a developer can compile without needing Python 2 on their
machine.
Breaking change: Python 2 support ending
Rather than supporting two versions of Python, including one that was
discontinued at the beginning of the year, this branch focuses future
effort on Python 3 only. As a result, scripts do not need to be
backwards compatible. This means that build environments, be they on
personal computers or on build agents, need to have a compatible
interpreter.
Notes
- SLVersionChecker will still use Python 2 on macOS
- Fixed the message template URL used by template_verifier.py
|
Turns out that one of our WorkQueue integration tests was relying on the
incorrect runFor() behavior that we just fixed, so the test broke. Now that
runFor() doesn't wait around for work to be posted, use an explicit wait loop
instead.
To support this, add LLCond::get(functor), where functor must accept a const
reference to the stored data. This new get() returns whatever the functor
returns, allowing a caller to peek at the stored data.
Also use universal references for all remaining LLCond functor arguments.
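A hedged sketch of the new accessor (LLCond's construction details are assumed; the vector payload is purely illustrative):

    #include "llcond.h"   // assumed header for LLCond
    #include <vector>

    LLCond<std::vector<int>> results;   // condition-protected data (ctor details assumed)

    bool have_enough()
    {
        // The functor receives a const reference to the stored data under lock;
        // get() forwards the functor's return value, so the caller can peek at
        // the data (here, its size) without copying it out.
        return results.get([](const std::vector<int>& v){ return v.size() >= 10; });
    }

Such a peek is what the test's explicit wait loop can poll, now that runFor() returns promptly when no work is pending.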
|
Reverting a merge is sticky: it tells git you never want to see that branch
again. Merging the DRTVWR-546 branch, which contained the revert, into the
glthread branch undid much of the development work on that branch. To restore
it we must revert the revert.
This reverts commit 029b41c0419e975bbb28454538b46dc69ce5d2ba.
|
This reverts commit 5188a26a8521251dda07ac0140bb129f28417e49, reversing
changes made to 819088563e13f1d75e048311fbaf0df4a79b7e19.
|
Add a test exercising this feature.
|
DRTVWR-546
|
Also make workqueue_test.cpp more robust.
|
A typical WorkQueue has a string name, which can be used to look it up in order
to post work to it. "Work" is a nullary callable.
WorkQueue is a multi-producer, multi-consumer thread-safe queue: multiple
threads can service the WorkQueue, and multiple threads can post work to it.
Work can be scheduled for the future by submitting it with a timestamp. In
addition, a given work item can be scheduled to run on a recurring basis.
A requesting thread servicing a WorkQueue of its own, such as the viewer's
main thread, can submit work to another WorkQueue along with a callback to be
passed the result (of arbitrary type) of the first work item. The callback is
posted to the originating WorkQueue, permitting safe data exchange between
participating threads.
Methods are provided for different kinds of servicing threads. runUntilClose()
is useful for a simple worker thread. runFor(duration) devotes no more than a
specified time slice to that WorkQueue, e.g. for use by the main thread.
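A hedged usage sketch of that shape (header name and exact signatures are assumed; "work" is any nullary callable):

    #include "workqueue.h"   // assumed header for LL::WorkQueue
    #include <chrono>
    #include <thread>

    int main()
    {
        LL::WorkQueue queue("example");   // named, so other code can find it

        // Dedicated worker: service the queue until it is closed.
        std::thread worker([&queue]{ queue.runUntilClose(); });

        // Any thread can post nullary work.
        queue.post([]{ /* runs on the worker thread */ });

        // A thread with its own loop (e.g. the main thread) would instead give its
        // queue a bounded slice each frame: queue.runFor(std::chrono::milliseconds(2));

        queue.close();
        worker.join();
    }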
|
ThreadSafeSchedule::tryPopUntil() (and therefore tryPopFor()) was simply
delegating to LLThreadSafeQueue::tryPopUntil(), with an adjusted timeout since
we want to wake up as soon as the head item, if any, becomes ready. But then
we have to loop back to retry the pop to actually deal with that head item.
In addition, ThreadSafeSchedule::popWithTime() was spinning rather than
properly blocking on a timed condition variable. Fixed.
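For reference, the "block, don't spin" shape the fix moves toward, as a self-contained sketch using standard primitives (the names below are illustrative, not the real members):

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    bool head_ready = false;   // stands in for "the head item's timestamp has arrived"

    bool wait_for_head(std::chrono::steady_clock::time_point until)
    {
        std::unique_lock<std::mutex> lock(m);
        // wait_until() sleeps until notified, the predicate holds, or the deadline
        // passes -- no busy loop. Each wakeup rechecks the predicate, which is the
        // "loop back and retry the pop" step described above.
        return cv.wait_until(lock, until, []{ return head_ready; });
    }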
|
ThreadSafeSchedule orders its items by timestamp, which can be passed either
implicitly or explicitly. The timestamp specifies the earliest delivery time: an
item cannot be popped until that time.
Add initial tests.
Tweak the LLThreadSafeQueue base class to support ThreadSafeSchedule:
introduce a virtual canPop() method to report whether the current head item is
available to pop. The base class unconditionally says yes; ThreadSafeSchedule
says it depends on whether the item's timestamp is still in the future.
This replaces the protected pop_() overload accepting a predicate. Rather than
explicitly passing a predicate through a couple levels of function call, use
canPop() at the level it matters. Runtime behavior that varies depending on
an object's leaf class is what virtual functions were invented for.
Give pop_() a three-state enum return so pop() can distinguish between "closed
and empty" (throws an exception) and "closed, but not yet drained because we're
not yet ready to pop the head item" (waits).
Also break out a protected tryPopUntil_() method containing the body logic of
tryPopUntil(). The public method locks the data structure; the protected
method requires that its caller has already done so.
Add chrono.h with a more full-featured LL::time_point_cast() function than the
one found in <chrono>, which only converts between time_point durations, not
between time_points based on different clocks.
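The usual technique for the cross-clock case looks roughly like this (clock_cast_sketch is illustrative; the real LL::time_point_cast() may differ in details):

    #include <chrono>

    template <typename DstTimePoint, typename SrcTimePoint>
    DstTimePoint clock_cast_sketch(const SrcTimePoint& src)
    {
        using DstClock = typename DstTimePoint::clock;
        using SrcClock = typename SrcTimePoint::clock;
        // Sample both clocks "now" and carry the offset across:
        // dst_now + (src - src_now), then cast to the destination duration.
        const auto src_now = SrcClock::now();
        const auto dst_now = DstClock::now();
        return std::chrono::time_point_cast<typename DstTimePoint::duration>(
            dst_now + (src - src_now));
    }

    // e.g. auto sys_tp = clock_cast_sketch<std::chrono::system_clock::time_point>(
    //          std::chrono::steady_clock::now());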
|