|
(cherry picked from commit 374eb409b98795158b36e232f670d1302f31b9ff)
|
|
While calling a C++ function with arguments taken from a runtime-variable data
structure necessarily involves a bit of hocus-pocus, the best you can say for
the boost::fusion-based implementation is that it worked. Sadly, template
recursion limited its applicability to a handful of function arguments. Now
that we have LL::apply(), use that instead. This implementation is much more
straightforward.
In particular, the LLSDArgsSource class, whose job was to dole out elements of
an LLSD array one at a time for the template recursion, goes away entirely.
Make virtual LLEventDispatcher::DispatchEntry::call() return LLSD instead of
void. All LLEventDispatcher target functions so far have been void; any
function that wants to respond to its invoker must do so explicitly by calling
sendReply() or constructing an LLEventAPI::Response instance. Supporting non-
void functions permits LLEventDispatcher to respond implicitly with the
returned value. Of course this requires a wrapper for void target functions
that returns an undefined LLSD.
Break out LLEventDispatcher::reply() from callFail(), so we can reply with
success as well as failure.
Make LLEventDispatcher::try_call_log() prepend the actual leaf class name and
description to any error returned by three-arg try_call(). That try_call()
overload reported "LLEventDispatcher(desc): " for a couple specific errors,
but no others. Hoist to try_call_log() to apply uniformly.
Introduce new try_call_one() method to diagnose name-not-found errors and
catch internal DispatchError and LL::apply_error exceptions. try_call_one()
returns a std::pair, containing either an error message or an LLSD value.
Make try_call_log() and three-arg try_call() accept LLSD 'name' instead of
plain std::string, allowing for the possibility of an array or map. That lets
us extend three-arg try_call() to break out new cases for the function selector
LLSD: isUndefined(), isArray(), isMap() and (current case) scalar String.
If try_call_one() reports an error, log it and try to send reply, as now. If
it returns an undefined LLSD, e.g. from a void target function wrapper, do
nothing. But if it returns an LLSD map, try to send that back to the invoker.
And if it returns an LLSD scalar or array, wrap it in a map with key "data" to
respond to the invoker. Allowing a target function to return its result rather
than explicitly sending it opens the possibility of batched requests
(aggregate 'name') returning batched responses.
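A minimal sketch of that reply policy, assuming the reply() helper broken out
above and an LLSD value already obtained from try_call_one() (the function name
respond() is invented for illustration):

    // Illustrative only -- not the literal LLEventDispatcher code.
    void respond(const LLSD& result, const LLSD& request)
    {
        if (result.isUndefined())
        {
            // void target function wrapper: nothing to send back
            return;
        }
        if (result.isMap())
        {
            reply(result, request);        // send the returned map as-is
            return;
        }
        // scalar or array: wrap it so the reply is still a map
        LLSD wrapper;
        wrapper["data"] = result;
        reply(wrapper, request);
    }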
Almost every place that constructs LLEventDispatcher's internal DispatchError
exception called stringize() to format the what() string. Simplify calls by
making DispatchError accept variadic arguments and forward to stringize().
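A sketch of the simplified exception, assuming stringize() concatenates its
arguments into a std::string as described later in this message:

    #include <stdexcept>
    #include <string>
    #include <utility>

    // stringize() is the project's concatenation helper; declared here only
    // so the sketch is self-contained.
    template <typename... ARGS>
    std::string stringize(ARGS&&... args);

    // Illustrative reconstruction, not the exact viewer code.
    struct DispatchError: public std::runtime_error
    {
        template <typename... ARGS>
        DispatchError(ARGS&&... args):
            std::runtime_error(stringize(std::forward<ARGS>(args)...))
        {}
    };

    // Callers can now write:
    //   throw DispatchError("LLEventDispatcher(", desc, "): unknown name ", name);
    // instead of wrapping the pieces in a separate stringize() call.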
Add LL::invoke() to apply.h. Like LL::apply(), this is a (limited) C++14
foreshadowing of std::invoke(), with preprocessor conditionals to switch to
std::invoke() when that's available. Introduce LL::invoke() to handle a
callable that's actually a pointer to method.
Now our C++14 apply() implementation can accept pointer to method, using
invoke() to generalize the actual function call.
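A rough C++14 shape for such an invoke(), shown here only as a sketch (the
header's real overload set may differ):

    #include <functional>
    #include <utility>

    // Plain callables: just call them.
    template <typename CALLABLE, typename... ARGS>
    auto invoke(CALLABLE&& func, ARGS&&... args)
        -> decltype(std::forward<CALLABLE>(func)(std::forward<ARGS>(args)...))
    {
        return std::forward<CALLABLE>(func)(std::forward<ARGS>(args)...);
    }

    // Pointers to members: delegate to std::mem_fn(), which accepts an
    // object, reference, pointer or smart pointer as the first argument.
    template <typename R, typename C, typename... ARGS>
    auto invoke(R C::* ptmf, ARGS&&... args)
        -> decltype(std::mem_fn(ptmf)(std::forward<ARGS>(args)...))
    {
        return std::mem_fn(ptmf)(std::forward<ARGS>(args)...);
    }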
Also anticipate std::bind_front() with LL::bind_front(). For apply(func,
std::array) and our extensions apply(func, std::vector) and apply(func, LLSD),
we can't pass a pointer to method as the func unless the second argument
happens to be an array or vector of pointers (or references) to instances of
exactly the right class -- and of course LLSD can't store such at all. It's
tempting to pass std::bind(std::mem_fn(ptr_to_method), instance), but that
won't work: std::bind() requires a value or placeholder for each argument to
pass to the bound function. The bind() expression above would only work for a
nullary method. std::bind_front() would work, but that doesn't arrive until
C++20. Again, once we get there we'll defer to the std:: implementation.
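A bare-bones sketch of the C++14 stand-in, assuming an invoke() like the one
sketched above; under C++20 std::bind_front() takes over:

    #include <utility>

    // Capture the callable and leading arguments by value; forward whatever
    // arrives at call time. Illustrative only -- the real helper is fancier
    // about perfect forwarding.
    template <typename FUNC, typename... BOUND>
    auto bind_front(FUNC func, BOUND... bound)
    {
        return [func, bound...](auto&&... rest)
        {
            return invoke(func, bound...,
                          std::forward<decltype(rest)>(rest)...);
        };
    }

    // e.g. bind_front(&MyClass::method, &instance) yields a callable taking
    // the method's remaining arguments.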
Instead of the generic __cplusplus, check the appropriate feature-test macro
for availability of each of std::invoke(), std::apply() and std::bind_front().
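Schematically, the selection looks like this; __cpp_lib_invoke, __cpp_lib_apply
and __cpp_lib_bind_front are the standard library feature-test macros, and the
fallback branches are the local C++14 implementations:

    #if defined(__cpp_lib_invoke)
        using std::invoke;
    #else
        // ... local C++14 invoke() as sketched above ...
    #endif

    #if defined(__cpp_lib_apply)
        // std::apply(func, tuple) is available
    #endif

    #if defined(__cpp_lib_bind_front)
        using std::bind_front;
    #endif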
Change apply() error handling from assert() to new LL::apply_error exception.
LLEventDispatcher must be able to intercept apply() errors. Move validation
and synthesis of the relevant error message to new apply.cpp source file.
Add to llptrto.h new LL::get_ref() and LL::get_ptr() template functions to
unify the cases of a calling template accepting either a pointer or a
reference. Wrapping the parameter in either get_ref() or get_ptr() allows
dereferencing the parameter as desired.
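A minimal sketch of what those helpers can look like (the llptrto.h versions
may also handle smart pointers):

    // Accept either form, hand back the form the caller wants.
    template <typename T>
    T& get_ref(T* ptr)  { return *ptr; }

    template <typename T>
    T& get_ref(T& ref)  { return ref; }

    template <typename T>
    T* get_ptr(T* ptr)  { return ptr; }

    template <typename T>
    T* get_ptr(T& ref)  { return &ref; }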
Move LL::apply(function, LLSD) argument validation/manipulation to a non-
template function in llsdutil.cpp: no need to replicate that logic in the
template for every CALLABLE specialization.
The trouble with passing bind_front(std::mem_fn(ptr_to_method), instance) to
apply() is that since bind_front() accepts and forwards variadic additional
arguments, apply() can't infer the arity of the bound ptr_to_method. Address
that by introducing apply_n<arity>(function, LLSD), permitting a caller to
infer the arity of ptr_to_method and explicitly pass it to apply_n().
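The shape of apply_n() is roughly this sketch (LLSD comes from llcommon's
llsd.h); in the real code each LLSD element presumably goes through LLSDParam
for conversion, which is elided here:

    #include <cstddef>
    #include <utility>

    template <typename CALLABLE, std::size_t... I>
    auto apply_n_impl(CALLABLE&& func, const LLSD& args,
                      std::index_sequence<I...>)
    {
        return std::forward<CALLABLE>(func)(args[I]...);
    }

    // The caller states ARITY explicitly because a bind_front() wrapper is
    // variadic, hiding the arity of the bound pointer-to-method.
    template <std::size_t ARITY, typename CALLABLE>
    auto apply_n(CALLABLE&& func, const LLSD& args)
    {
        return apply_n_impl(std::forward<CALLABLE>(func), args,
                            std::make_index_sequence<ARITY>());
    }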
Polish up lleventdispatcher_test.cpp accordingly. Wrong LLSD type and wrong
number of arguments now produce different (somewhat more informative) error
messages. Moreover, passing too many entries in an LLSD array used to work:
the extra arguments used to be ignored. Now we require that the size of the
array match the arity of the target function. Change the too-many-arguments
tests from success testing to error testing.
Replace 'foreach' aka BOOST_FOREACH macro invocations with range 'for'.
Replace STRINGIZE(item0 << item1 << ...) with stringize(item0, item1, ...).
(cherry picked from commit 9c049563b5480bb7e8ed87d9313822595b479c3b)
|
|
(cherry picked from commit 7d33e00d925614911a7602da1bd79916cc849ad7)
|
|
Add to apply_test.cpp a collect() function that incrementally accumulates an
arbitrary number of arguments into a std::vector<std::string>. Construct a
std::array<std::string> to pass it, using VAPPLY().
Clarify in header comments that LL::apply() can't call a variadic function
with arguments of dynamic size: std::vector or LLSD. The compiler can deduce
how many arguments to pass to a function with a fixed argument list; it can
deduce how many arguments to pass to a variadic function with a fixed number
of arguments. But it can't compile a call to a variadic function with an
arguments data structure whose size can vary at runtime.
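A sketch of the kind of accumulator described, with an invented global to keep
it short; the point is that the call site must know the argument count at
compile time:

    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::string> gCollected;

    // Variadic accumulator: appends every argument it receives.
    template <typename... ARGS>
    void collect(ARGS&&... args)
    {
        // C++14-friendly pack expansion
        int expand[] = { 0,
            (gCollected.emplace_back(std::forward<ARGS>(args)), 0)... };
        (void)expand;
    }

    // A fixed-size std::array can be expanded into such a call (VAPPLY does
    // the equivalent); a std::vector or LLSD of runtime size cannot.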
(cherry picked from commit ceed33396266b123896f7cfb9b90abdf240e1eec)
|
|
Make apply(function, std::array) and apply(function, std::vector) available
even when we borrow the C++17 implementation of apply(function, std::tuple).
Add apply(function, LLSD) with interpretations:
* isUndefined() is treated as an empty array, for calling a nullary function
* scalar LLSD is treated as a single-entry array, for calling a unary function
* isArray() converts function parameters using LLSDParam
* isMap() is an error.
Add unit tests for all flavors of LL::apply().
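A hedged usage sketch of those interpretations; sum(), demo() and the use of
the llsd::array() helper are illustrative assumptions:

    // assumes the viewer headers apply.h, llsd.h and llsdutil.h
    int sum(int a, int b) { return a + b; }

    void demo()
    {
        LL::apply(sum, llsd::array(3, 4));      // isArray(): params via LLSDParam
        LL::apply([]{ return 17; }, LLSD());    // isUndefined(): nullary call
        LL::apply([](int n){ return n; }, LLSD(7)); // scalar: one-entry array
        // LL::apply(sum, LLSD::emptyMap());    // isMap(): reported as an error
    }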
(cherry picked from commit 3006c24251c6259d00df9e0f4f66b8a617e6026d)
|
|
Always search for python3[.exe] instead of plain 'python'. macOS Monterey no
longer bundles Python 2 at all.
Explicitly make PYTHON_EXECUTABLE a cached value so if the user edits it in
CMakeCache.txt, it won't be overwritten by indra/cmake/Python.cmake.
Do NOT set DYLD_LIBRARY_PATH for test executables! That has Bad Effects, as
discussed in https://stackoverflow.com/q/73418423/5533635. Instead, create
symlinks from build-mumble/sharedlibs/Resources -> Release/Resources and from
build-mumble/test/Resources -> ../sharedlibs/Release/Resources. For test
executables in sharedlibs/RelWithDebInfo and test/RelWithDebInfo, this
supports our dylibs' baked-in load path @executable_path/../Resources. That
load path assumes running in a standard app bundle (which the viewer in fact
does), but we've been avoiding creating an app bundle for every test program.
These symlinks allow us to continue doing that while avoiding
DYLD_LIBRARY_PATH.
Add indra/llcommon/apply.h. The LL::apply() function and its wrapper macro
VAPPLY were very useful in diagnosing the problem.
Tweak llleap_test.cpp. This source was modified extensively for diagnostic
purposes; these are the small improvements that remain.
(cherry picked from commit 15d37713b9113a6f70dde48c764df02c76e18cbc)
(cherry picked from commit a1adcf1905d1fbc5fe07ff5a627295ccfe461ac4)
|
|
Bring over part of the LLEventDispatcher work inspired by DRTVWR-558.
|
|
|
|
Newer C++ compilers have different semantics around LLSDArray's special copy
constructor, which was essential to proper LLSD nesting. In short, we can no
longer trust LLSDArray to behave correctly. Now that we have variadic
functions, get rid of LLSDArray and replace every reference with llsd::array().
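For illustration, the replacement pattern looks roughly like this, llsd::array()
being the variadic helper mentioned above:

    // before: nested LLSDArray, dependent on its special copy constructor
    // LLSD data = LLSDArray("x")(LLSDArray(1)(2))(3.5);

    // after: variadic construction, no copy-constructor tricks
    LLSD data = llsd::array("x", llsd::array(1, 2), 3.5);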
|
|
|
|
# Conflicts:
# indra/cmake/CMakeLists.txt
# indra/newview/skins/default/xui/es/floater_tools.xml
|
|
|
|
|
|
|
|
|
|
Always search for python3[.exe] instead of plain 'python'. macOS Monterey no
longer bundles Python 2 at all.
Explicitly make PYTHON_EXECUTABLE a cached value so if the user edits it in
CMakeCache.txt, it won't be overwritten by indra/cmake/Python.cmake.
Do NOT set DYLD_LIBRARY_PATH for test executables! That has Bad Effects, as
discussed in https://stackoverflow.com/q/73418423/5533635. Instead, create
symlinks from build-mumble/sharedlibs/Resources -> Release/Resources and from
build-mumble/test/Resources -> ../sharedlibs/Release/Resources. For test
executables in sharedlibs/RelWithDebInfo and test/RelWithDebInfo, this
supports our dylibs' baked-in load path @executable_path/../Resources. That
load path assumes running in a standard app bundle (which the viewer in fact
does), but we've been avoiding creating an app bundle for every test program.
These symlinks allow us to continue doing that while avoiding
DYLD_LIBRARY_PATH.
Add indra/llcommon/apply.h. The LL::apply() function and its wrapper macro
VAPPLY were very useful in diagnosing the problem.
Tweak llleap_test.cpp. This source was modified extensively for diagnostic
purposes; these are the small improvements that remain.
|
|
|
|
One important factor in the design of LazyEventAPI was the desire to allow
LLLeapListener to query metadata for an LLEventAPI even if it hasn't yet been
instantiated by LazyEventAPI. That's why LazyEventAPI requires the same
metadata required by a classic LLEventAPI.
Instead of just publicly exposing its data members, give LazyEventAPI a query
API mimicking LLEventAPI / LLEventDispatcher. Make data members and internal
methods protected or private instead of public.
Adapt lazyeventapi_test.cpp accordingly.
Extend LLLeapListener::getAPIs() and getAPI() to look through LazyEventAPIBase
instances after first checking existing LLEventAPI instances. Because the
query API for LazyEventAPIBase mimics LLEventAPI's, extract getAPI()'s actual
metadata reporting to a new internal template function reportAPI().
While we're touching LLLeapListener, we no longer need BOOST_FOREACH().
|
|
A classic LLEventAPI subclass calls LLEventDispatcher::add() methods in its
own constructor. At that point, addMethod() can reliably dynamic_cast its
'this' pointer to the new subclass.
But because of the way LazyEventAPI queues up add() calls, they're invoked in
the (new) LLEventAPI constructor itself. The subclass constructor body hasn't
even started running, and LLEventDispatcher::addMethod()'s dynamic_cast to the
LLEventAPI subclass returns nullptr. addMethod() claims the new subclass isn't
derived from LLEventDispatcher, which is confusing since it is.
It works to change addMethod()'s dynamic_cast to static_cast.
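A standalone illustration of the underlying C++ rule, with invented class
names: during a base-class constructor the object's dynamic type is still the
base class, so a dynamic_cast down to the subclass yields nullptr, while
static_cast gives a usable pointer as long as no subclass members are touched
yet.

    #include <iostream>

    struct Derived;

    struct Base
    {
        Base();
        virtual ~Base() = default;
    };

    struct Derived: public Base {};

    Base::Base()
    {
        std::cout << "dynamic_cast: " << dynamic_cast<Derived*>(this) << '\n'; // nullptr
        std::cout << "static_cast:  " << static_cast<Derived*>(this)  << '\n'; // non-null
    }

    int main()
    {
        Derived d;   // prints a null pointer, then a real address
    }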
Flesh out lazyeventapi_test.cpp. post() maps with "op" keys to actually try to
engage the registered operation. Give the operation an observable side effect;
use ensure_mumble() to verify. Also verify that LazyEventAPI has captured the
subject LLEventAPI's metadata in a way we can retrieve.
|
|
LazyEventAPI is a registrar that implicitly instantiates some particular
LLEventAPI subclass on demand: that is, when LLEventPumps::obtain() tries to
find an LLEventPump by the registered name.
This leverages the new LLEventPumps::registerPumpFactory() machinery. Fix
registerPumpFactory() to adapt the passed PumpFactory to accept TypeFactory
parameters (two of which it ignores). Supplement it with
unregisterPumpFactory() to support LazyEventAPI instances with lifespans
shorter than the process -- which may be mostly test programs, but still a
hole worth closing. Similarly, add unregisterTypeFactory().
A LazyEventAPI subclass takes over responsibility for specifying the
LLEventAPI's name, desc, field, plus whatever add() calls will be needed to
register the LLEventAPI's operations. This is so we can (later) enhance
LLLeapListener to consult LazyEventAPI instances for not-yet-instantiated
LLEventAPI metadata, as well as enumerating existing LLEventAPI instances.
The trickiest part of this is capturing calls to the various
LLEventDispatcher::add() overloads in such a way that, when the LLEventAPI
subclass is eventually instantiated, we can replay them in the new instance.
LLEventAPI acquires a new protected constructor specifically for use by a
subclass registered by a companion LazyEventAPI. It accepts a const reference
to LazyEventAPIParams, intended to be opaque to the LLEventAPI subclass; the
subclass must declare a constructor that accepts and forwards the parameter
block to the new LLEventAPI constructor. The implementation delegates to the
existing LLEventAPI constructor, plus it runs deferred add() calls.
LLDispatchListener now derives from LLEventStream instead of containing it as
a data member. The reason is that if LLEventPumps::obtain() implicitly
instantiates it, LLEventPumps's destructor will try to destroy it by deleting
the LLEventPump*. If the LLEventPump returned by the factory function is a
data member of an outer class, that won't work so well. But if
LLDispatchListener (and by implication, LLEventAPI and any subclass) is
derived from LLEventPump, then the virtual destructor will Do The Right Thing.
Change LLDispatchListener to *not* allow tweaking the LLEventPump name. Since
the overwhelming use case for LLDispatchListener is LLEventAPI, accepting but
silently renaming an LLEventAPI subclass would ensure nobody could reach it.
Change LLEventDispatcher's use of std::enable_if to control the set of add()
overloads available for the intended use cases. Apparently this formulation is
just as functional at the method declaration point, while avoiding the need to
restate the whole enable_if expression at the method definition point.
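Schematically, with invented names, the formulation looks like this: the
constraint lives in a defaulted template parameter on the declaration, and the
out-of-line definition repeats only the parameter list, not the enable_if
expression.

    #include <string>
    #include <type_traits>

    class ExampleDispatcher
    {
    public:
        // only enabled for callables that are not pointers to member functions
        template <typename CALLABLE,
                  typename = typename std::enable_if<
                      !std::is_member_function_pointer<CALLABLE>::value>::type>
        void add(const std::string& name, CALLABLE callable);
    };

    template <typename CALLABLE, typename>
    void ExampleDispatcher::add(const std::string& name, CALLABLE callable)
    {
        // registration details elided
    }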
Add lazyeventapi_test.cpp to exercise.
|
|
Originally the LLEventAPI mechanism was primarily used for VITA testing. In
that case it was okay for the viewer to crash with LL_ERRS if the test script
passed a bad request.
With puppetry, hopefully new LEAP scripts will be written to engage
LLEventAPIs in all sorts of interesting ways. Change error handling from
LL_ERRS to LL_WARNS. Furthermore, if the incoming request contains a "reply"
key, send back an error response to the requester.
Update lleventdispatcher_test.cpp accordingly.
(cherry picked from commit de0539fcbe815ceec2041ecc9981e3adf59f2806)
|
|
# Conflicts:
# autobuild.xml
# indra/cmake/LLCommon.cmake
# indra/llcommon/CMakeLists.txt
# indra/llrender/llgl.cpp
# indra/newview/llappviewer.cpp
# indra/newview/llface.cpp
# indra/newview/llflexibleobject.cpp
# indra/newview/llvovolume.cpp
|
|
|
|
# Conflicts:
# autobuild.xml
# doc/contributions.txt
# indra/cmake/GLOD.cmake
# indra/llcommon/tests/llprocess_test.cpp
# indra/newview/VIEWER_VERSION.txt
# indra/newview/lldrawpoolavatar.cpp
# indra/newview/llfloatermodelpreview.cpp
# indra/newview/llmodelpreview.cpp
# indra/newview/llviewertexturelist.cpp
# indra/newview/llvovolume.cpp
# indra/newview/viewer_manifest.py
|
|
# Conflicts:
# autobuild.xml
# doc/contributions.txt
# indra/cmake/GLOD.cmake
# indra/llcommon/tests/llprocess_test.cpp
# indra/newview/VIEWER_VERSION.txt
# indra/newview/lldrawpoolavatar.cpp
# indra/newview/llfloatermodelpreview.cpp
# indra/newview/llmodelpreview.cpp
# indra/newview/llviewertexturelist.cpp
# indra/newview/llvovolume.cpp
# indra/newview/viewer_manifest.py
|
|
|
|
This changeset makes it possible to build the Second Life viewer using
Python 3. It is designed to be used with an equivalent Autobuild branch
so that a developer can compile without needing Python 2 on their
machine.
Breaking change: Python 2 support ending
Rather than supporting two versions of Python, including one that was
discontinued at the beginning of the year, this branch focuses on
pouring future effort into Python 3 only. As a result, scripts do not
need to be backwards compatible. This means that build environments,
be they on personal computers or on build agents, need to have a
compatible interpreter.
Notes
- SLVersionChecker will still use Python 2 on macOS
- Fixed the message template url used by template_verifier.py
|
|
Turns out that one of our WorkQueue integration tests was relying on the
incorrect runFor() behavior that we just fixed, so the test broke. Now that
runFor() doesn't wait around for work to be posted, use an explicit wait loop
instead.
To support this, add LLCond::get(functor), where functor must accept a const
reference to the stored data. This new get() returns whatever the functor
returns, allowing a caller to peek at the stored data.
Also use universal references for all remaining LLCond functor arguments.
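A minimal sketch (not the real LLCond) of what such a get(functor) can look
like: hold the lock, pass a const reference to the stored data, and forward
the functor's return value.

    #include <mutex>
    #include <utility>

    template <typename DATA>
    class CondValue      // stand-in for LLCond, name invented
    {
    public:
        template <typename FUNC>
        auto get(FUNC&& func) const
        {
            std::lock_guard<std::mutex> lock(mMutex);
            // the functor sees const data and may return anything, letting a
            // caller peek at (part of) the value without copying all of it
            return std::forward<FUNC>(func)(mData);
        }

    private:
        mutable std::mutex mMutex;
        DATA mData{};
    };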
|
|
Reverting a merge is sticky: it tells git you never want to see that branch
again. Merging the DRTVWR-546 branch, which contained the revert, into the
glthread branch undid much of the development work on that branch. To restore
it we must revert the revert.
This reverts commit 029b41c0419e975bbb28454538b46dc69ce5d2ba.
|
|
This reverts commit 5188a26a8521251dda07ac0140bb129f28417e49, reversing
changes made to 819088563e13f1d75e048311fbaf0df4a79b7e19.
|
|
|
|
|
|
Add a test exercising this feature.
|
|
DRTVWR-546
|
|
Also make workqueue_test.cpp more robust.
|
|
A typical WorkQueue has a string name, which can be used to look it up and post
work to it. "Work" is a nullary callable.
WorkQueue is a multi-producer, multi-consumer thread-safe queue: multiple
threads can service the WorkQueue, multiple threads can post work to it.
Work can be scheduled in the future by submitting with a timestamp. In
addition, a given work item can be scheduled to run on a recurring basis.
A requesting thread servicing a WorkQueue of its own, such as the viewer's
main thread, can submit work to another WorkQueue along with a callback to be
passed the result (of arbitrary type) of the first work item. The callback is
posted to the originating WorkQueue, permitting safe data exchange between
participating threads.
Methods are provided for different kinds of servicing threads. runUntilClose()
is useful for a simple worker thread. runFor(duration) devotes no more than a
specified time slice to that WorkQueue, e.g. for use by the main thread.
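A hedged usage sketch based on the description above; runUntilClose() and
runFor() are named in this message, while the constructor, post() and close()
calls are assumptions for illustration:

    #include <chrono>
    #include <iostream>
    #include <thread>

    void workqueue_demo()
    {
        LL::WorkQueue queue("demo");

        // a dedicated worker thread services the queue until it's closed
        std::thread worker([&queue]{ queue.runUntilClose(); });

        // any thread can post nullary work
        queue.post([]{ std::cout << "work item ran\n"; });

        // a thread with its own loop (e.g. the main thread) instead gives
        // the queue a bounded time slice per pass
        queue.runFor(std::chrono::milliseconds(10));

        queue.close();
        worker.join();
    }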
|
|
|
|
ThreadSafeSchedule::tryPopUntil() (and therefore tryPopFor()) was simply
delegating to LLThreadSafeQueue::tryPopUntil(), with an adjusted timeout since
we want to wake up as soon as the head item, if any, becomes ready. But then
we have to loop back to retry the pop to actually deal with that head item.
In addition, ThreadSafeSchedule::popWithTime() was spinning rather than
properly blocking on a timed condition variable. Fixed.
|
|
ThreadSafeSchedule orders its items by timestamp, which can be passed either
implicitly or explicitly. The timestamp specifies earliest delivery time: an
item cannot be popped until that time.
Add initial tests.
Tweak the LLThreadSafeQueue base class to support ThreadSafeSchedule:
introduce virtual canPop() method to report whether the current head item is
available to pop. The base class unconditionally says yes, ThreadSafeSchedule
says it depends on whether its timestamp is still in the future.
This replaces the protected pop_() overload accepting a predicate. Rather than
explicitly passing a predicate through a couple levels of function call, use
canPop() at the level it matters. Runtime behavior that varies depending on
an object's leaf class is what virtual functions were invented for.
Give pop_() a three-state enum return so pop() can distinguish between "closed
and empty" (throws exception) versus "closed, not yet drained because we're
not yet ready to pop the head item" (waits).
Also break out protected tryPopUntil_() method, the body logic of
tryPopUntil(). The public method locks the data structure, the protected
method requires that its caller has already done so.
Add chrono.h with a more full-featured LL::time_point_cast() function than the
one found in <chrono>, which only converts between time_point durations, not
between time_points based on different clocks.
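The extra capability is roughly the classic clock-to-clock conversion idiom,
sketched here under the assumption that sampling both clocks' now() close
together is accurate enough:

    #include <chrono>

    // Convert a time_point from its source clock to DestClock by measuring
    // the offset between the two clocks' current time and applying it.
    template <typename DestClock, typename SrcTimePoint>
    typename DestClock::time_point convert_time_point(const SrcTimePoint& src)
    {
        auto src_now  = SrcTimePoint::clock::now();
        auto dest_now = DestClock::now();
        return dest_now + std::chrono::duration_cast<
            typename DestClock::duration>(src - src_now);
    }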
|
|
These functions allow prepending or removing an item at the left end of an
arbitrary tuple -- for instance, to add a sequence key to a caller's data,
then remove it again when delivering the original tuple.
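A sketch of the two operations, with names chosen here for illustration, which
is one way to express them in C++14:

    #include <cstddef>
    #include <tuple>
    #include <utility>

    // prepend an item at the left end of a tuple
    template <typename First, typename... Rest>
    auto tuple_cons(First&& first, const std::tuple<Rest...>& rest)
    {
        return std::tuple_cat(std::make_tuple(std::forward<First>(first)), rest);
    }

    // remove the leftmost item, returning the rest
    template <typename First, typename... Rest, std::size_t... I>
    std::tuple<Rest...> tuple_cdr_impl(const std::tuple<First, Rest...>& tup,
                                       std::index_sequence<I...>)
    {
        return std::make_tuple(std::get<I + 1>(tup)...);
    }

    template <typename First, typename... Rest>
    std::tuple<Rest...> tuple_cdr(const std::tuple<First, Rest...>& tup)
    {
        return tuple_cdr_impl(tup, std::index_sequence_for<Rest...>());
    }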
|
|
|
|
|
|
|
|
Introduce Oz's LLERROR_CRASH macro analogous to the old LLError::crashAndLoop()
function. Change LL_ENDL macro so that, after calling flush(), if the CallSite
is for LEVEL_ERROR, we invoke LLERROR_CRASH right there.
Change the meaning of LLError::FatalFunction. It used to be responsible for
the actual crash (hence crashAndLoop()). Now, instead, its role is to disrupt
control flow in some other way if you DON'T want to crash: throw an exception,
or call exit() or some such. Any FatalFunction that returns normally will fall
into the new crash in LL_ENDL.
Accordingly, the new default FatalFunction is a no-op lambda. This eliminates
the need to test for empty (not set) FatalFunction in Log::flush().
Remove LLError::crashAndLoop() because the official LL_ERRS crash is now in
LL_ENDL.
One of the two common use cases for setFatalFunction() used to be to intercept
control in the last moments before crashing -- not to crash or to avoid
crashing, but to capture the LL_ERRS message in some way. Especially when
that's temporary, though (e.g. LLLeap), saving and restoring the previous
FatalFunction only works when the lifespans of the relevant objects are
strictly LIFO.
Either way, that's a misuse of FatalFunction. Fortunately the Recorder
mechanism exactly addresses that case. Introduce a GenericRecorder template
subclass, with LLError::addGenericRecorder(callable) that accepts a callable
with suitable (level, message) signature, instantiates a GenericRecorder, adds
it to the logging machinery and returns the RecorderPtr for possible later use
with removeRecorder().
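A hedged usage sketch of the new registration call, using the (level, message)
signature described above; the exact parameter types are assumptions:

    // Register a recorder that reacts only to LL_ERRS-level messages.
    LLError::RecorderPtr recorder = LLError::addGenericRecorder(
        [](LLError::ELevel level, const std::string& message)
        {
            if (level == LLError::LEVEL_ERROR)
            {
                // e.g. stash the message for crash reporting, as
                // llappviewer.cpp's errorCallback() now does
            }
        });

    // ... later, when the registering object goes away:
    LLError::removeRecorder(recorder);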
Change llappviewer.cpp's errorCallback() to an addGenericRecorder() callable.
Its role was simply to update gDebugInfo["FatalMessage"] with the LL_ERRS
message, then call writeDebugInfo(), before calling crashAndLoop() to finish
crashing. Remove the crashAndLoop() call, retaining the gDebugInfo logic. Pass
errorCallback() to LLError::addGenericRecorder() instead of setFatalFunction().
Oddly, errorCallback()'s crashAndLoop() call was conditional on a compile-time
SHADER_CRASH_NONFATAL symbol. The new mechanism provides no way to support
SHADER_CRASH_NONFATAL -- it is a Bad Idea to return normally from any LL_ERRS
invocation!
Rename LLLeapImpl::fatalFunction() to onError(). Instead of passing it to
LLError::setFatalFunction(), pass it to addGenericRecorder(). Capture the
returned RecorderPtr in mRecorder, replacing mPrevFatalFunction. Then
~LLLeapImpl() calls removeRecorder(mRecorder) instead of restoring
mPrevFatalFunction (which, as noted above, was order-sensitive).
Of course, every enabled Recorder is called with every log message. onError()
and errorCallback() must specifically test for calls with LEVEL_ERROR.
LLSingletonBase::logerrs() used to call LLError::getFatalFunction(), check the
return and call it if non-empty, else call LLError::crashAndLoop(). Replace
all that with LLERROR_CRASH.
Remove from llappviewer.cpp the watchdog_llerrs_callback() and
watchdog_killer_callback() functions. watchdog_killer_callback(), passed to
Watchdog::init(), used to setFatalFunction(watchdog_llerrs_callback) and then
invoke LL_ERRS() -- which seems a bit roundabout. watchdog_llerrs_callback(),
in turn, replicated much of the logic in the primary errorCallback() function
before replicating the crash from llwatchdog.cpp's default_killer_callback().
Instead, pass LLWatchdog::init() a lambda that invokes the LL_ERRS() message
formerly found in watchdog_killer_callback(). It no longer needs to override
FatalFunction with watchdog_llerrs_callback() because errorCallback() will
still be called as a Recorder, obviating watchdog_llerrs_callback()'s first
half; and LL_ENDL will handle the crash, obviating the second half.
Remove from llappviewer.cpp the static fast_exit() function, which was simply
an alias for _exit() acceptable to boost::bind(). Use a lambda directly
calling _exit() instead of using boost::bind() at all.
In the CaptureLog class in llcommon/tests/wrapllerrs.h, instead of statically
referencing the wouldHaveCrashed() function from test.cpp, simply save and
restore the current FatalFunction across the LLError::saveAndResetSettings()
call.
llerror_test.cpp calls setFatalFunction(fatalCall), where fatalCall() was a
function that simply set a fatalWasCalled bool rather than actually crashing
in any way. Of course, that implementation would now lead to crashing the test
program. Make fatalCall() throw a new FatalWasCalled exception. Introduce a
CATCH(LL_ERRS("tag"), "message") macro that expands to:
LL_ERRS("tag") << "message" << LL_ENDL;
within a try/catch block that catches FatalWasCalled and sets the same bool.
Change all existing LL_ERRS() in llerror_test.cpp to corresponding CATCH()
calls. In fact there's also an LL_DEBUGS(bad tag) invocation that exercises an
LL_ERRS internal to llerror.cpp; wrap that too.
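A plausible reconstruction of that test-only macro, assuming the fatalWasCalled
flag and FatalWasCalled exception described above:

    #define CATCH(LOG, MSG)                     \
        do                                      \
        {                                       \
            try                                 \
            {                                   \
                LOG << MSG << LL_ENDL;          \
            }                                   \
            catch (const FatalWasCalled&)       \
            {                                   \
                fatalWasCalled = true;          \
            }                                   \
        } while (0)

    // usage, replacing a bare LL_ERRS() expression in the test:
    // CATCH(LL_ERRS("TestTag"), "deliberate fatal message");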
|
|
|
|
LLSDNotationFormatter (also LLSDNotationStreamer that uses it, plus
operator<<(std::ostream&, const LLSD&) that uses LLSDNotationStreamer) is most
useful for displaying LLSD to a human, e.g. for logging. Having the default
dump raw binary bytes into the log file is not only suboptimal, it can
truncate the output if one of those bytes is '\0'. (This is a problem with the
logging subsystem, but that's a story for another day.)
Use OPTIONS_PRETTY_BINARY wherever there is a default LLSDFormatter
::EFormatterOptions argument.
Also, allow setting LLSDFormatter subclass boolalpha(), realFormat() and
format(options) using optional constructor arguments. Naturally, each subclass
that supports this must accept and forward these constructor arguments to its
LLSDFormatter base class constructor.
Fix a couple bugs in LLSDNotationFormatter::format_impl() for an LLSD::Binary
value with OPTIONS_PRETTY_BINARY:
- The code unconditionally emitted a b(len) type prefix followed by either raw
binary or hex, depending on the option flag. OPTIONS_PRETTY_BINARY caused it
to emit "0x" before the hex representation of the data. This is wrong in
that it can't be read back by either the C++ or the Python LLSD parser.
Correct OPTIONS_PRETTY_BINARY formatting consists of b16"hex digits" rather
than b(len)"raw bytes".
- Although the code did set hex mode, it didn't set either the field width or
the fill character, so that a byte value less than 16 would emit a single
digit rather than two.
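The width/fill point in that second bullet, in isolation; this is the generic
iostream idiom rather than the literal formatter code:

    #include <iomanip>
    #include <sstream>
    #include <string>
    #include <vector>

    std::string hex_bytes(const std::vector<unsigned char>& data)
    {
        std::ostringstream out;
        out << std::hex;
        for (unsigned char byte : data)
        {
            // without setw(2)/setfill('0'), a byte such as 0x0A prints as "a"
            out << std::setw(2) << std::setfill('0') << unsigned(byte);
        }
        return out.str();
    }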
Instead of having one LLSDFormatter::format() method with an optional options
argument, declare two overloads. The format() overload without options passes
the mOptions data member to the overload accepting options.
Refactor the LLSDFormatter family, hoisting the recursive format_impl() method
(accepting level) to a pure virtual method at LLSDFormatter base-class level.
Most subclasses therefore need not override either base-class format() method,
only format_impl(). In fact the short format() overload isn't even virtual.
Consistently use LLSDFormatter::EFormatterOptions enum as the options
parameter wherever such options are accepted.
|
|
|
|
llmainthreadtask_test builds in a Sync timeout to keep build-time tests from
hanging. That timeout was set to 2000ms, which seems as though it ought to be
plenty enough time for a process with only 2 threads to exchange data between
them. But on TeamCity EC2 Windows build hosts, sometimes we hit that timeout
and fail. Extend it to try to improve the robustness of builds, even though
the possibility of a production viewer blocking for that long for anything
seems worrisome. (Fortunately the production viewer does not use Sync.)
|
|
|
|
|