These manifest as crashes in isDynamic, isBox, and similar calls accessed through a dangling probe.
C++20 support anyway
Streamline and robustify lua_emplace<T>() object cleanup.
outfit items
# Conflicts:
# indra/newview/llvoicechannel.cpp
Remove AutorunLuaScriptFile and the LLLUAmanager::runScriptOnLogin() method
that checked it.
Instead, at viewer startup, iterate over the LuaAutorunPath directories and
implicitly run every *.lua file found in each.
LuaCommandPath and LuaRequirePath are not yet implemented.
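
A minimal sketch of that startup scan, where getLuaAutorunPaths() and
runScriptFile() are hypothetical stand-ins for the viewer's actual settings
lookup and Lua entry point:

    #include <filesystem>
    #include <string>
    #include <system_error>
    #include <vector>

    // Hypothetical stand-ins for the viewer's settings lookup and Lua runner.
    std::vector<std::string> getLuaAutorunPaths();
    void runScriptFile(const std::string& path);

    // At viewer startup: run every *.lua file in each LuaAutorunPath directory.
    void autorunLuaScripts()
    {
        namespace fs = std::filesystem;
        for (const std::string& dir : getLuaAutorunPaths())
        {
            std::error_code ec;   // missing directory: ec set, loop is empty
            for (const auto& entry : fs::directory_iterator(dir, ec))
            {
                if (entry.is_regular_file() && entry.path().extension() == ".lua")
                {
                    runScriptFile(entry.path().string());
                }
            }
        }
    }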
Make the system RAM function behave consistently across all supported platforms.
Taken from https://github.com/FirestormViewer/phoenix-firestorm/commit/3b074ba4af5e303125db606dd69eb4282a91f957
Also clean up Firestorm-specific comment markers and retained upstream code.
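
For reference, a cross-platform sketch of the usual way to query total
physical RAM on each supported platform (the Firestorm commit linked above is
the authoritative version; this only shows the general shape):

    #include <cstdint>
    #if defined(_WIN32)
    #include <windows.h>
    #elif defined(__APPLE__)
    #include <sys/sysctl.h>
    #else
    #include <sys/sysinfo.h>
    #endif

    // Total physical memory in bytes, or 0 on failure.
    std::uint64_t getSystemRAMBytes()
    {
    #if defined(_WIN32)
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        return GlobalMemoryStatusEx(&ms) ? ms.ullTotalPhys : 0;
    #elif defined(__APPLE__)
        std::uint64_t mem = 0;
        size_t len = sizeof(mem);
        int mib[2] = { CTL_HW, HW_MEMSIZE };
        return sysctl(mib, 2, &mem, &len, nullptr, 0) == 0 ? mem : 0;
    #else
        struct sysinfo si;
        return sysinfo(&si) == 0
            ? std::uint64_t(si.totalram) * si.mem_unit
            : 0;
    #endif
    }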
This is redundant (but harmless) on a POSIX system, but it fills a missing
puzzle piece on Windows. The point of fsyspath is to be able to interchange
freely between fsyspath and std::string. Existing fsyspath could be
constructed and assigned from std::string, and we could explicitly call its
string() method to get a std::string, but the implicit fsyspath-to-string
conversion that works on POSIX (where the native string type is std::string)
would trip us up on Windows (where it is std::wstring). Fix that.
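
A sketch of the shape of that fix, assuming fsyspath is a thin wrapper over
std::filesystem::path (the real class may handle encodings differently):

    #include <filesystem>
    #include <string>

    class fsyspath : public std::filesystem::path
    {
        using super = std::filesystem::path;
    public:
        using super::super;                       // inherit path's constructors
        fsyspath(const std::string& s) : super(s) {}
        fsyspath& operator=(const std::string& s) { assign(s); return *this; }
        // The missing puzzle piece: implicit conversion to std::string.
        // On POSIX, path already converts implicitly (its native string type
        // is std::string); on Windows the native type is std::wstring, so
        // this operator is what makes the two types interchangeable there.
        operator std::string() const { return string(); }
    };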
This replaces type_tag<T>(), which searched and possibly extended the type_tags
unordered_map at runtime. If we called lua_emplace<T>() from different threads,
that would require locking type_tags.
In contrast, the compiler must instantiate a distinct TypeTag<T> for every
distinct T passed to lua_emplace<T>(), so each gets a distinct value at static
initialization time. No locking is required; no lookup; no allocations.
Add a test to llluamanager_test.cpp to verify that each distinct T passed to
lua_emplace<T>() gets its own TypeTag<T>::value, and that each gets its own
destructor -- but that different lua_emplace<T>() calls with the same T share
the same TypeTag<T>::value and the same destructor.
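
The technique, in outline (member and variable names here are illustrative):

    #include <atomic>

    // Counter shared by all instantiations; bumped once per distinct T,
    // during static initialization, before main() runs.
    inline std::atomic<int> gNextTypeTag{ 1 };

    template <class T>
    struct TypeTag
    {
        static const int value;
    };

    // The compiler emits one definition of 'value' per distinct T, so each
    // T gets its own tag with no locking, no lookup, and no allocation.
    template <class T>
    const int TypeTag<T>::value = gNextTypeTag++;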
It turns out that Luau does not honor PUC-Rio Lua's __gc metafunction, so
despite elaborate measures, the previous lua_emplace<T>() implementation would
not have destroyed the contained C++ T object when the resulting userdata
object was garbage-collected.
Moreover, using LL.atexit() as the mechanism to destroy lua_emplace<T>()
userdata objects (e.g. LuaListener) would have been slightly fragile because
we also want to use LL.atexit() to make the final fiber.run() call, when
appropriate. Introducing an order dependency between fiber.run() and the
LuaListener destructor would not be robust.
Both of those problems are addressed by leveraging one of Luau's extensions
over PUC-Rio Lua. A Luau userdata object can have an int tag; and a tag can
have an associated C++ destructor function. When any userdata object bearing
that tag is garbage-collected, Luau will call that destructor; and Luau's
lua_close() function destroys all userdata objects.
The resulting lua_emplace<T>() and lua_toclass<T>() code is far simpler.
It only remains to generate a distinct int tag value for each distinct C++
type passed to the lua_emplace<T>() template. An
unordered_map<std::type_index, int> addresses that need.
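
In outline, using Luau's tagged-userdata API (lua_newuserdatatagged() and
lua_setuserdatadtor(); this assumes a Luau version whose destructor callback
receives the lua_State). The lua_emplace<T>() signature shown is illustrative:

    #include <new>
    #include <typeindex>
    #include <typeinfo>
    #include <unordered_map>
    #include <utility>
    #include "lua.h"

    // Distinct int tag per C++ type -- the unordered_map approach this
    // commit describes (a later commit replaces it with TypeTag<T>).
    inline int tagForType(std::type_index ti)
    {
        static std::unordered_map<std::type_index, int> tags;
        auto it = tags.emplace(ti, int(tags.size()) + 1).first;
        return it->second;
    }

    template <class T, typename... ARGS>
    void lua_emplace(lua_State* L, ARGS&&... args)
    {
        int tag = tagForType(typeid(T));
        // Luau calls this destructor for every userdata bearing this tag,
        // both at garbage collection and from lua_close().
        lua_setuserdatadtor(L, tag,
            [](lua_State*, void* ptr) { static_cast<T*>(ptr)->~T(); });
        void* mem = lua_newuserdatatagged(L, sizeof(T), tag);
        new (mem) T(std::forward<ARGS>(args)...);
    }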
Setting LOGTEST=DEBUG, when many unit/integration tests must be rebuilt and
run, can result in lots of unnecessary output. When we only want DEBUG log
output from a specific test program, make test.cpp recognize an environment
variable LOGTEST_testname, where 'testname' might be the full basename of the
executable, or part of INTEGRATION_TEST_testname or PROJECT_foo_TEST_testname.
When test.cpp notices a non-empty variable by that name, it behaves as if
LOGTEST were set to that value.
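
The lookup reduces to something like this (a sketch; the actual test.cpp
plumbing differs):

    #include <cstdlib>
    #include <string>

    // Per-program override LOGTEST_<testname> wins over the global LOGTEST.
    std::string effectiveLogControl(const std::string& testname)
    {
        std::string specificName = "LOGTEST_" + testname;
        if (const char* specific = std::getenv(specificName.c_str());
            specific && *specific)
        {
            return specific;        // e.g. LOGTEST_llluamanager_test=DEBUG
        }
        if (const char* general = std::getenv("LOGTEST"))
        {
            return general;
        }
        return {};
    }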
* #1836 Texture memory usage overhaul. Much decrufting
- don't keep a copy of textures in system memory
- use GPU to downrez textures instead of reloading from cache (see the sketch below)
- use GPU to generate brightness/darkness bumpmaps
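
A generic GL sketch of the downrez idea referenced above: blit the live
texture into a smaller one so the pixels never round-trip through system
memory or the disk cache. It assumes a GL 3.0+ context and headers; the
viewer's actual implementation differs:

    // Downsample srcTex by half on the GPU; returns the new texture.
    GLuint downrezTexture(GLuint srcTex, int srcW, int srcH)
    {
        int dstW = srcW / 2, dstH = srcH / 2;
        GLuint dstTex = 0;
        glGenTextures(1, &dstTex);
        glBindTexture(GL_TEXTURE_2D, dstTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dstW, dstH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        GLuint fbos[2];
        glGenFramebuffers(2, fbos);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
        glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, srcTex, 0);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, dstTex, 0);
        // Linear filtering performs the downsample entirely on the GPU.
        glBlitFramebuffer(0, 0, srcW, srcH, 0, 0, dstW, dstH,
                          GL_COLOR_BUFFER_BIT, GL_LINEAR);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(2, fbos);
        return dstTex;
    }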
Various crash fixes. Also fixes an issue with ad hoc/group calls on Vivox regions.
roxie/webrtc-voice-crash-fixes
There was an issue on the release grid where old-style credentials were being
sent over, and the WebRTC viewer wasn't handling them properly.
Increase texture discard bias if system memory gets low
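
The policy amounts to something like this (the threshold and step values here
are made up for illustration; the real heuristic lives in the texture
pipeline):

    #include <algorithm>
    #include <cstdint>

    // Push the global texture discard bias up when free system memory drops
    // below a threshold, trading texture resolution for headroom.
    void updateDiscardBias(float& bias, std::uint64_t freeBytes)
    {
        constexpr std::uint64_t LOW_MEM = 768ull * 1024 * 1024;  // hypothetical
        constexpr float MAX_BIAS = 4.0f;                         // hypothetical
        if (freeBytes < LOW_MEM)
        {
            bias = std::min(bias + 0.1f, MAX_BIAS);
        }
    }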
secondlife/1771-mesh-objects-do-not-display-until-you-walk-directly-over-them
#1771 Fix for objects disappearing and not reappearing until LoD switch
viewer#1821 Crash at getSessionID()
Previously, there were two places audio gain could be controlled:
- the device manager
- the audio track
The device manager's gain control sets the system gain for all applications,
not just the WebRTC application.
The audio track gain is applied well after the audio processing stage where we
want it to happen.
So gain control was added to the existing custom audio processor, which
previously only handled calculating and retrieving audio levels.
After these changes, the microphone gain slider does affect the audio volume
heard by peers.
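
A simplified sketch of where the gain multiply now lives: in the capture-side
processor that already measured levels. Class, member, and buffer-layout names
are illustrative, not the viewer's actual WebRTC classes:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>

    class CustomAudioProcessor
    {
        float mGain = 1.0f;     // driven by the microphone gain slider
        float mLevel = 0.0f;    // existing job: current audio level
    public:
        void setGain(float gain) { mGain = gain; }
        float getLevel() const { return mLevel; }

        // Called per capture frame, before encoding/transmission -- which is
        // why peers now hear the slider's effect.
        void process(std::int16_t* samples, std::size_t count)
        {
            float peak = 0.0f;
            for (std::size_t i = 0; i < count; ++i)
            {
                float s = samples[i] * mGain;
                // Clamp so overflow distorts gracefully instead of wrapping.
                s = std::clamp(s, -32768.0f, 32767.0f);
                samples[i] = static_cast<std::int16_t>(s);
                peak = std::max(peak, std::fabs(s) / 32768.0f);
            }
            mLevel = peak;
        }
    };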
roxie/webrtc-voice-1451
UI-related Lua API work
Fix some BOOL oversights during WebRTC merge
Fix texture fetch requests getting canceled if the request counter wraps around