Age | Commit message | Author |
|
|
|
The intention is to decentralize Luau entry points into our C++ code,
permitting a given entry point to be added to the .cpp file that already deals
with that class or functional area. Continuing to add every such entry point
to llluamanager.cpp doesn't scale well.
Extract LuaListener class from llluamanager.cpp to its own header and .cpp
file.
Extract from llluamanager into lua_function.h (and .cpp) declarations useful
for adding a lua_function Luau entry point (see the sketch after this list), e.g.:
lua_register()
lua_rawlen()
lua_tostdstring()
lua_pushstdstring()
lua_tollsd()
lua_pushllsd()
LuaPopper
lua_function() and LuaFunction class
LuaState
lua_what
lua_stack
DebugExit
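For illustration, a minimal sketch of the kind of entry point this makes easy to
add from any .cpp file. The helper signatures for lua_tostdstring() and
lua_pushstdstring() are assumed here (stack index in, std::string out, and vice
versa) and may not match the real declarations exactly:

    // hypothetical entry point, shown only to illustrate the extracted helpers;
    // assumed signatures:
    //   std::string lua_tostdstring(lua_State* L, int index);
    //   void        lua_pushstdstring(lua_State* L, const std::string& str);
    #include <string>
    #include "lua_function.h"

    static int shout(lua_State* L)
    {
        // read the single string argument off the Lua stack
        std::string msg = lua_tostdstring(L, 1);
        // hand the decorated string back to the Luau caller
        lua_pushstdstring(L, msg + "!");
        return 1;   // one return value
    }
    // registered with the interpreter via the standard lua_register(L, "shout", shout)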
|
|
# Conflicts:
# indra/newview/fonts/DejaVu-license.txt
# indra/newview/fonts/DejaVuSans-Bold.ttf
# indra/newview/fonts/DejaVuSans-BoldOblique.ttf
# indra/newview/fonts/DejaVuSans-Oblique.ttf
# indra/newview/fonts/DejaVuSans.ttf
# indra/newview/fonts/DejaVuSansMono.ttf
|
|
|
|
# Conflicts:
# indra/llcommon/CMakeLists.txt
# indra/newview/llspatialpartition.cpp
# indra/newview/llviewergenericmessage.cpp
# indra/newview/llvoavatar.cpp
|
|
We define a specialization of LLSDParam<const char*> to support passing an
LLSD object to a const char* function parameter. Needless to remark, passing
object.asString().c_str() would be Bad: destroying the temporary std::string
returned by asString() would immediately invalidate the pointer returned by
its c_str(). But when you pass LLSDParam<const char*>(object) as the
parameter, that specialization itself stores the std::string so the c_str()
pointer remains valid as long as the LLSDParam object does.
Then there's LLSDParam<LLSD>, used when we don't have the parameter type
available to select the LLSDParam specialization. LLSDParam<LLSD> defines a
templated conversion operator T() that constructs an LLSDParam<T> to provide
the actual parameter value. So far, so good.
The trouble was with the implementation of LLSDParam<LLSD>: it constructed a
_temporary_ LLSDParam<T>, implicitly called its operator T() and immediately
destroyed it. Destroying LLSDParam<const char*> destroyed its stored string,
thus invalidating the c_str() pointer before the target function was entered.
Instead, make LLSDParam<LLSD>::operator T() capture each LLSDParam<T> it
constructs, extending its lifespan to the lifespan of the LLSDParam<LLSD>
instance. For this, derive each LLSDParam specialization from LLSDParamBase, a
trivial base class that simply establishes the virtual destructor. We can then
capture any specialization as a pointer to LLSDParamBase.
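A condensed sketch of the resulting pattern (member names are illustrative, and
the primary LLSDParam template plus LLSD itself are assumed to come from their
usual headers):

    // Condensed illustration of the fix; member names are illustrative.
    class LLSDParamBase
    {
    public:
        virtual ~LLSDParamBase() {}          // lets us own any specialization
    };

    template <>
    class LLSDParam<const char*>: public LLSDParamBase
    {
        std::string value;                   // keeps c_str() valid
    public:
        LLSDParam(const LLSD& sd): value(sd.asString()) {}
        operator const char*() const { return value.c_str(); }
    };

    template <>
    class LLSDParam<LLSD>: public LLSDParamBase
    {
        LLSD sd;
        // each constructed LLSDParam<T> now lives as long as this object
        mutable std::vector<std::unique_ptr<LLSDParamBase>> converters;
    public:
        LLSDParam(const LLSD& sd): sd(sd) {}
        template <typename T>
        operator T() const
        {
            auto converter = std::make_unique<LLSDParam<T>>(sd);
            T value{ *converter };           // convert while the converter is alive
            converters.push_back(std::move(converter));
            return value;                    // e.g. a const char* that stays valid
        }
    };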
Also restore LazyEventAPI tests on Mac.
|
|
There's a limit to how much time it's worth trying to work around a compiler
bug that's already been fixed in newer Xcode.
|
|
|
|
Major improvements to LLLeap functionality
|
|
|
|
Add LL::always_return<T>(), which takes a callable and variadic arguments. It
calls the callable with those arguments and, if the returned type is
convertible to T, converts it and returns it. Otherwise it returns T().
always_return() is generalized from, and supersedes,
LLEventDispatcher::ReturnLLSD.
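A usage sketch inferred from that description (not lifted from the actual tests):

    // usage sketch inferred from the description above
    int  add(int a, int b)     { return a + b; }
    void log_only(const char*) {}

    void demo()
    {
        // int converts to LLSD, so the result comes back as LLSD(7)
        LLSD sum  = LL::always_return<LLSD>(add, 3, 4);
        // a void callable returns nothing convertible, so we get LLSD() instead
        LLSD none = LL::always_return<LLSD>(log_only, "hello");
    }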
Add LL::function_arity<CALLABLE>, which extends
boost::function_types::function_arity by reporting results for both
std::function<CALLABLE> and boost::function<CALLABLE>. Use for
LL::apply(function, LLSD array) as well as for LLEventDispatcher.
Make LLEventDispatcher::add() overloads uniformly distinguish between a
callable (whether non-static member function or otherwise) that accepts a
single LLSD parameter, versus any other signature. Accepting exactly one LLSD
parameter signals that the callable will accept the composite arguments LLSD
blob, instead of asking LLEventDispatcher to unpack the arguments blob into
individual arguments.
Support add(subclass method) overloads for arbitrary-parameters methods as
well as for (const LLSD&) methods. Update tests accordingly: we need no longer
pass the boilerplate lambda instance getter that binds and returns 'this'.
Extract to the two LLEventDispatcher::make_invoker() overloads the LL::apply()
logic formerly found in ReturnLLSD.
Change lleventdispatcher_test.cpp tests from boost::bind(), which accepts
variadic arguments (even though it only passes a fixed set to the target
callable), to fixed-signature lambdas. This is because the revamped add()
overloads care about signature.
Add a test for a non-static method that accepts (const LLSD&), in other words
the composite arguments LLSD blob, and likewise returns LLSD.
(cherry picked from commit 95b787f7d7226ee9de79dfc9816f33c8bf199aad)
|
|
While calling a C++ function with arguments taken from a runtime-variable data
structure necessarily involves a bit of hocus-pocus, the best you can say for
the boost::fusion based implementation is that it worked. Sadly, template
recursion limited its applicability to a handful of function arguments. Now
that we have LL::apply(), use that instead. This implementation is much more
straightforward.
In particular, the LLSDArgsSource class, whose job was to dole out elements of
an LLSD array one at a time for the template recursion, goes away entirely.
Make virtual LLEventDispatcher::DispatchEntry::call() return LLSD instead of
void. All LLEventDispatcher target functions so far have been void; any
function that wants to respond to its invoker must do so explicitly by calling
sendReply() or constructing an LLEventAPI::Response instance. Supporting non-
void functions permits LLEventDispatcher to respond implicitly with the
returned value. Of course this requires a wrapper for void target functions
that returns LLSD::isUndefined().
Break out LLEventDispatcher::reply() from callFail(), so we can reply with
success as well as failure.
Make LLEventDispatcher::try_call_log() prepend the actual leaf class name and
description to any error returned by three-arg try_call(). That try_call()
overload reported "LLEventDispatcher(desc): " for a couple specific errors,
but no others. Hoist to try_call_log() to apply uniformly.
Introduce new try_call_one() method to diagnose name-not-found errors and
catch internal DispatchError and LL::apply_error exceptions. try_call_one()
returns a std::pair, containing either an error message or an LLSD value.
Make try_call_log() and three-arg try_call() accept LLSD 'name' instead of
plain std::string, allowing for the possibility of an array or map. That lets
us extend three-arg try_call() to break out new cases for the function selector
LLSD: isUndefined(), isArray(), isMap() and (current case) scalar String.
If try_call_one() reports an error, log it and try to send reply, as now. If
it returns LLSD::isUndefined(), e.g. from a void target function wrapper, do
nothing. But if it returns an LLSD map, try to send that back to the invoker.
And if it returns an LLSD scalar or array, wrap it in a map with key "data" to
respond to the invoker. Allowing a target function to return its result rather
than explicitly sending it opens the possibility of batched requests
(aggregate 'name') returning batched responses.
Almost every place that constructs LLEventDispatcher's internal DispatchError
exception called stringize() to format the what() string. Simplify calls by
making DispatchError accept variadic arguments and forward to stringize().
Add LL::invoke() to apply.h. Like LL::apply(), this is a (limited) C++14
foreshadowing of std::invoke(), with preprocessor conditionals to switch to
std::invoke() when that's available. Introduce LL::invoke() to handle a
callable that's actually a pointer to method.
Now our C++14 apply() implementation can accept pointer to method, using
invoke() to generalize the actual function call.
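Sketch of what that buys us; LL::invoke() is described as mirroring std::invoke(),
so the call shapes below are assumed to match:

    // illustration: same call shapes as std::invoke()
    struct Adder
    {
        int base;
        int add(int n) const { return base + n; }
    };

    void demo()
    {
        Adder a{40};
        // ordinary callable: invoke() just calls it
        int x = LL::invoke([](int n) { return n * 2; }, 21);
        // pointer to method: the next argument supplies the instance
        int y = LL::invoke(&Adder::add, a, 2);
    }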
Also anticipate std::bind_front() with LL::bind_front(). For apply(func,
std::array) and our extensions apply(func, std::vector) and apply(func, LLSD),
we can't pass a pointer to method as the func unless the second argument
happens to be an array or vector of pointers (or references) to instances of
exactly the right class -- and of course LLSD can't store such at all. It's
tempting to pass std::bind(std::mem_fn(ptr_to_method), instance), but that
won't work: std::bind() requires a value or placeholder for each argument to
pass to the bound function. The bind() expression above would only work for a
nullary method. std::bind_front() would work, but that doesn't arrive until
C++20. Again, once we get there we'll defer to the std:: implementation.
Instead of the generic __cplusplus, check the appropriate feature-test macro
for availability of each of std::invoke(), std::apply() and std::bind_front().
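For reference, the standard feature-test macros involved (the exact conditionals
in apply.h may be phrased differently):

    // standard feature-test macros checked instead of the generic __cplusplus
    #if defined(__cpp_lib_invoke)        // std::invoke is available (C++17)
    #endif
    #if defined(__cpp_lib_apply)         // std::apply is available (C++17)
    #endif
    #if defined(__cpp_lib_bind_front)    // std::bind_front is available (C++20)
    #endif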
Change apply() error handling from assert() to new LL::apply_error exception.
LLEventDispatcher must be able to intercept apply() errors. Move validation
and synthesis of the relevant error message to new apply.cpp source file.
Add to llptrto.h new LL::get_ref() and LL::get_ptr() template functions to
unify the cases of a calling template accepting either a pointer or a
reference. Wrapping the parameter in either get_ref() or get_ptr() allows
dereferencing the parameter as desired.
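A small usage sketch (Thing and ping() are hypothetical):

    // usage sketch; Thing and ping() are hypothetical
    #include "llptrto.h"

    template <typename INSTANCE>
    void poke(INSTANCE&& inst)
    {
        // the same template body serves both poke(thing) and poke(&thing)
        LL::get_ref(inst).ping();
        LL::get_ptr(inst)->ping();
    }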
Move LL::apply(function, LLSD) argument validation/manipulation to a non-
template function in llsdutil.cpp: no need to replicate that logic in the
template for every CALLABLE specialization.
The trouble with passing bind_front(std::mem_fn(ptr_to_method), instance) to
apply() is that since bind_front() accepts and forwards variadic additional
arguments, apply() can't infer the arity of the bound ptr_to_method. Address
that by introducing apply_n<arity>(function, LLSD), permitting a caller to
infer the arity of ptr_to_method and explicitly pass it to apply_n().
Polish up lleventdispatcher_test.cpp accordingly. Wrong LLSD type and wrong
number of arguments now produce different (somewhat more informative) error
messages. Moreover, passing too many entries in an LLSD array used to work:
the extra arguments used to be ignored. Now we require that the size of the
array match the arity of the target function. Change the too-many-arguments
tests from success testing to error testing.
Replace 'foreach' aka BOOST_FOREACH macro invocations with range 'for'.
Replace STRINGIZE(item0 << item1 << ...) with stringize(item0, item1, ...).
(cherry picked from commit 9c049563b5480bb7e8ed87d9313822595b479c3b)
|
|
Make apply(function, std::array) and apply(function, std::vector) available
even when we borrow the C++17 implementation of apply(function, std::tuple).
Add apply(function, LLSD) with these interpretations (usage sketched below):
* isUndefined() is treated as an empty array, for calling a nullary function
* scalar LLSD is treated as a single-entry array, for calling a unary function
* isArray() converts function parameters using LLSDParam
* isMap() is an error.
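A usage sketch of those interpretations; the target functions and argument values
are illustrative, and the llsd::array() helper from llsdutil.h is assumed to be
available:

    // usage sketch; target functions and argument values are illustrative
    int nullary()            { return 1; }
    int unary(int n)         { return n; }
    int binary(int a, int b) { return a + b; }

    void demo()
    {
        LL::apply(nullary, LLSD());             // isUndefined(): call with no arguments
        LL::apply(unary,   LLSD(17));           // scalar: treated as a one-entry array
        LL::apply(binary,  llsd::array(3, 4));  // array: entries converted via LLSDParam
        // passing an LLSD map here raises an error
    }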
Add unit tests for all flavors of LL::apply().
(cherry picked from commit 3006c24251c6259d00df9e0f4f66b8a617e6026d)
|
|
Bring over part of the LLEventDispatcher work inspired by DRTVWR-558.
|
|
|
|
|
|
|
|
gigantic CMake patch. Sadly, my macOS box updated to Xcode 14.3 overnight, and
that caused many warnings/errors about variables being initialized and then
used, but not in a way that affected anything. Building on Xcode 14.3 also
requires that MACOSX_DEPLOYMENT_TARGET be set to > 10.13. Waiting on a decision
about that, but checking this in in the meantime. Builds on macOS with
appropriate build variables set for MACOSX_DEPLOYMENT_TARGET = 10.14, but not
really expecting this to build in TC because (REDACTED). Windows version
probably hopelessly broken - switching to that now.
|
|
|
|
# Conflicts:
# indra/cmake/CMakeLists.txt
# indra/newview/skins/default/xui/es/floater_tools.xml
|
|
speed matters. (#64)
This commit adds the HBXX64 and HBXX128 classes for use as a drop-in
replacement for the slow LLMD5 hashing class, where speed matters and
backward compatibility (with standard hashing algorithms) and/or
cryptographic hashing qualities are not required.
It also replaces LLMD5 with HBXX* in a few existing hot (well, ok, just
"warm" for some) paths meeting the above requirements, while paving the way for
future use cases, such as in the DRTVWR-559 and sibling branches where the slow
LLMD5 is used (e.g. to hash materials and vertex buffer cache entries) and
could use such a (way) faster algorithm with very significant benefits and
no negative impact.
Here is the comment I added in indra/llcommon/hbxx.h:
// HBXXH* classes are to be used where speed matters and cryptographic quality
// is not required (no "one-way" guarantee, though they are likely no worse in
// this respect than MD5, which got busted and is now considered too weak). The
// xxHash code they are built upon is vectorized and about 50 times faster than
// MD5. A 64-bit hash class is also provided for when 128 bits of entropy are
// not needed. The hashes' collision rate is similar to MD5's.
// See https://github.com/Cyan4973/xxHash#readme for details.
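A hypothetical usage sketch: the class comes from this commit, but the method
names below are assumed rather than checked against hbxx.h, so the real API may
differ:

    // hypothetical usage sketch; method names are assumed, not taken from hbxx.h
    #include <string>
    #include "hbxx.h"

    U64 quick_hash(const std::string& s)
    {
        HBXX64 hasher;                        // 64-bit, non-cryptographic
        hasher.update(s.data(), s.size());    // assumed streaming-style API
        return hasher.digest();               // assumed accessor for the final hash
    }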
|
|
speed matters. (#64)
This commit adds the HBXX64 and HBXX128 classes for use as a drop-in
replacement for the slow LLMD5 hashing class, where speed matters and
backward compatibility (with standard hashing algorithms) and/or
cryptographic hashing qualities are not required.
It also replaces LLMD5 with HBXX* in a few existing hot (well, ok, just
"warm" for some) paths meeting the above requirements, while paving the way for
future use cases, such as in the DRTVWR-559 and sibling branches where the slow
LLMD5 is used (e.g. to hash materials and vertex buffer cache entries) and
could use such a (way) faster algorithm with very significant benefits and
no negative impact.
Here is the comment I added in indra/llcommon/hbxx.h:
// HBXXH* classes are to be used where speed matters and cryptographic quality
// is not required (no "one-way" guarantee, though they are likely no worse in
// this respect than MD5, which got busted and is now considered too weak). The
// xxHash code they are built upon is vectorized and about 50 times faster than
// MD5. A 64-bit hash class is also provided for when 128 bits of entropy are
// not needed. The hashes' collision rate is similar to MD5's.
// See https://github.com/Cyan4973/xxHash#readme for details.
|
|
|
|
|
|
|
|
For work queues that don't need timestamped tasks, eliminate the overhead of a
priority queue ordered by timestamp. Timestamped task support moves to
WorkSchedule. WorkQueue is a simpler queue that just waits for work.
Both WorkQueue and WorkSchedule can be accessed via new WorkQueueBase API. Of
course the WorkQueueBase API doesn't deal with timestamps, but a WorkSchedule
can be accessed directly to post timestamped tasks and then handled normally
(e.g. by ThreadPool) to run them.
Most ThreadPool functionality migrates to new ThreadPoolBase class, with
template subclass ThreadPoolUsing<WorkQueue> or ThreadPoolUsing<WorkSchedule>
depending on need. ThreadPool is now an alias for ThreadPoolUsing<WorkQueue>.
Importantly, ThreadPoolUsing::getQueue() delivers a reference to the specific
queue subclass type, so you can post timestamped tasks on a queue retrieved
from ThreadPoolUsing<WorkSchedule>::getQueue().
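A hedged sketch of the intended usage; the constructor arguments and the
timestamped post() overload shape are assumed, not verified against the headers:

    // hedged sketch: constructor and post() overload shapes are assumed
    #include <chrono>

    void demo()
    {
        LL::ThreadPoolUsing<LL::WorkSchedule> pool("General", 4);   // name, width
        pool.start();

        // getQueue() is typed as WorkSchedule, not as the WorkQueueBase API
        auto& queue = pool.getQueue();
        queue.post([] { /* runs as soon as a worker is free */ });
        queue.post(LL::WorkSchedule::TimePoint::clock::now() + std::chrono::seconds(5),
                   [] { /* runs no sooner than five seconds from now */ });
    }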
Since ThreadPool is no longer a simple class but an alias for a particular
template specialization, introduce threadpool_fwd.h to forward-declare it.
Recast workqueue_test.cpp to exercise WorkSchedule, since some of the tests
are time-based. A future todo would be to exercise each applicable test with
both WorkQueue and WorkSchedule.
|
|
Deriving your tracked class T from LLInstanceTracker<T> gives you
T::getInstance() et al. But what about a subclass S derived from T?
S::getInstance() still delivers a pointer to T, requiring explicit downcast.
And so on for other LLInstanceTracker methods.
Instead, derive S from LLInstanceTrackerSubclass<S, T>. This implies that S is
a grandchild class of T, but it also recasts the LLInstanceTracker methods to
deliver results for S rather than for T.
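An illustrative sketch; the keyed-tracker constructor forwarding is assumed to
work as shown:

    // illustrative sketch; constructor forwarding details are assumed
    class Tracked: public LLInstanceTracker<Tracked, std::string>
    {
    public:
        Tracked(const std::string& name):
            LLInstanceTracker<Tracked, std::string>(name) {}
    };

    class Special: public LLInstanceTrackerSubclass<Special, Tracked>
    {
    public:
        Special(const std::string& name):
            LLInstanceTrackerSubclass<Special, Tracked>(name) {}
    };

    // Special::getInstance("volunteer") now yields a pointer typed for Special,
    // where Tracked::getInstance("volunteer") would need an explicit downcast.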
|
|
|
|
|
|
In theory it is fine to do that; in practice it breaks Gatekeeper in subtle ways
(see https://developer.apple.com/library/archive/technotes/tn2206/_index.html#//apple_ref/doc/uid/DTS40007919-CH1-TNTAG207).
Having bugsplat linked to all executables results in executables with an embedded
rpath that is invalid for Gatekeeper. Unluckily, this shows up in the worst
possible way: the viewer cannot be started, with an unhelpful message about the
viewer being unable to be verified, while at the same time spctl and codesign
both show no errors at all.
|
|
# Conflicts:
# indra/llrender/llgl.cpp
# indra/llrender/llrendertarget.cpp
# indra/newview/VIEWER_VERSION.txt
# indra/newview/app_settings/shaders/class1/deferred/materialF.glsl
# indra/newview/llfloaterpreference.cpp
# indra/newview/llviewercontrol.cpp
# indra/newview/llviewermenu.cpp
# indra/newview/llviewertexturelist.cpp
# indra/newview/llvovolume.cpp
|
|
|
|
|
|
|
|
|
|
According to BugSplat, get_thread_recorder was null.
Replaced the APR-based LLThreadLocalPointer with thread_local.
|
|
# Conflicts:
# doc/contributions.txt
# indra/newview/llviewercontrol.cpp
|
|
|
|
LazyEventAPI is a registrar that implicitly instantiates some particular
LLEventAPI subclass on demand: that is, when LLEventPumps::obtain() tries to
find an LLEventPump by the registered name.
This leverages the new LLEventPumps::registerPumpFactory() machinery. Fix
registerPumpFactory() to adapt the passed PumpFactory to accept TypeFactory
parameters (two of which it ignores). Supplement it with
unregisterPumpFactory() to support LazyEventAPI instances with lifespans
shorter than the process -- which may be mostly test programs, but still a
hole worth closing. Similarly, add unregisterTypeFactory().
A LazyEventAPI subclass takes over responsibility for specifying the
LLEventAPI's name, desc, field, plus whatever add() calls will be needed to
register the LLEventAPI's operations. This is so we can (later) enhance
LLLeapListener to consult LazyEventAPI instances for not-yet-instantiated
LLEventAPI metadata, as well as enumerating existing LLEventAPI instances.
The trickiest part of this is capturing calls to the various
LLEventDispatcher::add() overloads in such a way that, when the LLEventAPI
subclass is eventually instantiated, we can replay them in the new instance.
LLEventAPI acquires a new protected constructor specifically for use by a
subclass registered by a companion LazyEventAPI. It accepts a const reference
to LazyEventAPIParams, intended to be opaque to the LLEventAPI subclass; the
subclass must declare a constructor that accepts and forwards the parameter
block to the new LLEventAPI constructor. The implementation delegates to the
existing LLEventAPI constructor, plus it runs deferred add() calls.
LLDispatchListener now derives from LLEventStream instead of containing it as
a data member. The reason is that if LLEventPumps::obtain() implicitly
instantiates it, LLEventPumps's destructor will try to destroy it by deleting
the LLEventPump*. If the LLEventPump returned by the factory function is a
data member of an outer class, that won't work so well. But if
LLDispatchListener (and by implication, LLEventAPI and any subclass) is
derived from LLEventPump, then the virtual destructor will Do The Right Thing.
Change LLDispatchListener to *not* allow tweaking the LLEventPump name. Since
the overwhelming use case for LLDispatchListener is LLEventAPI, accepting but
silently renaming an LLEventAPI subclass would ensure nobody could reach it.
Change LLEventDispatcher's use of std::enable_if to control the set of add()
overloads available for the intended use cases. Apparently this formulation is
just as functional at the method declaration point, while avoiding the need to
restate the whole enable_if expression at the method definition point.
Add lazyeventapi_test.cpp to exercise.
|
|
Introduce CommonControl, which in a running viewer (or any program containing
an LLViewerControlListener instance) gives access to LLViewerControl
functionality, e.g. getting, setting or enumerating control variables --
without introducing a link dependency on newview.
Make ThreadPool's constructor consult CommonControl to check for an override
for the width of the new ThreadPool in the Global (i.e. gSavedSettings)
setting ThreadPoolSizes, and honor that if found.
Introduce static ThreadPool methods getConfiguredWidth(), to query for such an
override on any particular ThreadPool name; and getWidth(), to ask for the
width of an instance if that instance already exists, else the width with
which it *would* be instantiated.
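A hedged sketch of the new queries; the parameter lists are assumed rather than
checked against the header:

    // hedged sketch: parameter lists are assumed, not checked against the header
    void demo()
    {
        // does the ThreadPoolSizes setting override the width of a pool named "General"?
        auto configured = LL::ThreadPool::getConfiguredWidth("General");
        // width of the existing "General" pool, else the width it would be given;
        // the second argument is a hypothetical fallback default
        auto width = LL::ThreadPool::getWidth("General", 8);
    }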
|
|
DRTVWR-543-maint_cmake
|
|
# Conflicts:
# autobuild.xml
# indra/cmake/LLCommon.cmake
# indra/llcommon/CMakeLists.txt
# indra/llrender/llgl.cpp
# indra/newview/llappviewer.cpp
# indra/newview/llface.cpp
# indra/newview/llflexibleobject.cpp
# indra/newview/llvovolume.cpp
|
|
|
|
sets the property on those.
|
|
All 3Ps' include dirs are treated as SYSTEM; this stops compilers from
emitting warnings from those files and greatly helps with keeping warning
levels high without being swamped by warnings that come from external
libraries.
|
|
- Fix usage of bugsplat::bugsplat by using ll::bugsplat
- Use the bugsplat define by importing the target, not by using hand-crafted magic
|
|
|
|
compiled on.
This gets rid of a few OS-specific set-and-use variables (some of which even
seemed mostly duplicated, like WINDOWS_LIBRARIES and UI_LIBRARIES), and it also
solves the problem of having to tack them onto every target, as they now come
as a transitive dependency from llcommon.
|
|
with the same name (that's why 3ps had names like apr::apr),
but it's safer and saner to put the LL 3ps under the ll:: prefix.
This also means it is possible to get rid of that bad "if( TARGET ...) return() endif()" pattern and rather use include_guard().
|
|
|