LF, and trim trailing whitespace as needed
# Conflicts:
# autobuild.xml
# indra/llcommon/llsys.cpp
# Conflicts:
# .github/workflows/build.yaml
Closing the window correctly caused a significant number of logout freezes
with no known repros. Temporarily returning to the old behavior, where the
thread was killed without closing the window; will re-enable in a later
maintenance release to hopefully get a repro scenario, or at least more data
on what is causing the freeze.
Under debug, LL_ERRS will show a message as well, but a release build won't
show anything and will quit silently, so show a notification when applicable.
# Conflicts:
# indra/newview/llinventorygallery.cpp
Note that the crash happened when calling LLProgressView::setMessage
# Conflicts:
# indra/newview/llchiclet.h
UIImgInvisibleUUID doesn't exist.
The default normal for a material is 'null'.
1. After the window closes, the viewer still takes some time to shut down,
so added a splash screen to avoid confusing users (and to see if something
gets stuck).
2. Having two identical mWindowHandle members caused confusion for me, so I
split them. It looks like there might have been issues with the thread
getting stuck because the thread's handle wasn't cleaned up (see the sketch
after this list).
3. Made the region clean mCacheMap immediately instead of spending time
making copies on shutdown.
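
A minimal sketch of the idea in item 2, using hypothetical names
(WindowThread and its members are illustrative, not the viewer's actual
code): keep the thread handle separate from the window handle, and close it
once the thread has finished, so a leaked handle can't leave the thread
looking stuck.

    #include <windows.h>

    struct WindowThread
    {
        HWND   mWindowHandle = nullptr;  // the window, closed elsewhere
        HANDLE mThreadHandle = nullptr;  // the worker thread, owned here

        void shutdown()
        {
            if (mThreadHandle)
            {
                // Wait for the thread to exit, then release its handle
                // so it can't linger past shutdown.
                WaitForSingleObject(mThreadHandle, INFINITE);
                CloseHandle(mThreadHandle);
                mThreadHandle = nullptr;
            }
        }
    };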
a preset...' option of the 'Preferences' floater
coroutines).
# Conflicts:
# indra/newview/fonts/DejaVu-license.txt
# indra/newview/fonts/DejaVuSans-Bold.ttf
# indra/newview/fonts/DejaVuSans-BoldOblique.ttf
# indra/newview/fonts/DejaVuSans-Oblique.ttf
# indra/newview/fonts/DejaVuSans.ttf
# indra/newview/fonts/DejaVuSansMono.ttf
# Conflicts:
# indra/newview/llspatialpartition.cpp
# Conflicts:
# indra/newview/llinventorygallery.cpp
# indra/newview/skins/default/xui/en/notifications.xml
# Conflicts:
# indra/llrender/llgl.cpp
# indra/llrender/llvertexbuffer.cpp
# indra/llui/llflatlistview.cpp
# indra/newview/lldrawpoolground.cpp
# indra/newview/llspatialpartition.cpp
# indra/newview/lltexturefetch.cpp
# indra/newview/llviewergenericmessage.cpp
# indra/newview/llviewertexture.cpp
# indra/newview/llvosky.cpp
# indra/newview/skins/default/xui/en/floater_preferences_graphics_advanced.xml
# indra/newview/skins/default/xui/en/floater_stats.xml
# indra/newview/skins/default/xui/en/floater_texture_fetch_debugger.xml
# indra/newview/skins/default/xui/en/notifications.xml
# indra/newview/skins/default/xui/en/panel_performance_preferences.xml
# Conflicts:
# indra/llcommon/CMakeLists.txt
# indra/newview/llspatialpartition.cpp
# indra/newview/llviewergenericmessage.cpp
# indra/newview/llvoavatar.cpp
We actively use event pumps' connections in threads; make sure nothing
modifies the list of connections during a reset.
And in case this doesn't fix the issue, list the affected pump before it
crashes, to get a better idea of what is going on.
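
A minimal sketch of the approach, not the actual LLEventPump implementation;
EventPump and its members here are stand-ins: take the lock for the whole
reset, and log which pump is involved before touching its connections.

    #include <cstdio>
    #include <mutex>
    #include <string>
    #include <vector>

    class EventPump
    {
        std::mutex mConnectionLock;            // assumption: guards mConnections
        std::vector<std::string> mConnections; // stand-in for real connections
        std::string mName{"hypothetical-pump"};

    public:
        void reset()
        {
            std::lock_guard<std::mutex> lock(mConnectionLock);
            // Name the pump first, so a crash during reset at least
            // identifies which pump was being torn down.
            std::printf("resetting pump '%s' (%zu connections)\n",
                        mName.c_str(), mConnections.size());
            mConnections.clear();
        }
    };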
by making it thread_local.
Now that we're building with C++17, we can use Class Template Argument
Deduction to infer the type passed to the constructor of the 'narrow' class.
We no longer require a narrow_holder class with a narrow() factory function.
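
A simplified sketch of the pattern; this toy 'narrow' skips the real class's
data-loss checks and only shows the deduction change.

    #include <cstdint>
    #include <vector>

    template <typename T>
    class narrow
    {
        T mValue;
    public:
        narrow(T value): mValue(value) {}
        // convert to the (narrower) target type at the call site
        template <typename TO>
        operator TO() const { return static_cast<TO>(mValue); }
    };

    int main()
    {
        std::vector<int> v(42);
        // C++17 CTAD deduces narrow<std::size_t> straight from the
        // constructor argument; previously this required a narrow()
        // factory function returning a narrow_holder<T>.
        std::int32_t n = narrow(v.size());
        return (n == 42) ? 0 : 1;
    }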
With GitHub viewer builds, every few weeks we've seen test failures when
ll_frand() returns exactly 1.0. This is a problem for a function that's
supposed to return [0.0 .. 1.0).
Monty suggests that the problem is likely to be conversion of F32 to F64 to
pass to fmod(), and then truncation of fmod()'s F64 result back to F32. Moved
the clamping code to each size-specific ll_internal_random specialization.
Monty also noted that a stateful static random number engine isn't
thread-safe. Added a mutex lock.
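
A minimal sketch of both fixes under simplified assumptions: the names are
illustrative, and the sketch draws from std::generate_canonical rather than
the viewer's fmod()-based generator. The point is that the clamp happens at
the precision each specialization actually returns, and the stateful engine
sits behind a mutex.

    #include <limits>
    #include <mutex>
    #include <random>

    static std::mutex gRandomMutex;  // the engine's state isn't thread-safe
    static std::mt19937_64 gRandomEngine{std::random_device{}()};

    template <typename REAL>
    REAL ll_internal_random()
    {
        REAL value;
        {
            std::lock_guard<std::mutex> lock(gRandomMutex);
            value = std::generate_canonical<REAL,
                        std::numeric_limits<REAL>::digits>(gRandomEngine);
        }
        // Clamp at the type the caller receives: an F64 value just below
        // 1.0 can become exactly 1.0f when truncated to F32, so clamping
        // in shared double-precision code is not enough.
        if (value >= REAL(1))
            value = REAL(0);
        return value;
    }

    float ll_frand() { return ll_internal_random<float>(); }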
using for DRTVWR-559
ensure inventory skeleton loading doesn't block the message system from processing packets.
On a Windows CI host, we got the dreaded rc 3221225725 aka c00000fd aka stack
overflow.
The test was coded to push (what's intended to be) the third entry with
timestamp (now + 200ms), then (what's intended to be) the second entry with
timestamp (now + 100ms).
The trouble is that it was re-querying "now" each time. On a slow CI host, the
clock might have advanced by more than 100ms between the first push and the
second -- meaning that the second push would actually have a _later_
timestamp, and thus, even with the queue sorting properly, fail the test's
order validation.
Capture the timestamp once, then add both time deltas to the same time point
to get the relative order right regardless of elapsed real time.
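
A minimal sketch of the fix in isolation, with the queue itself left out:

    #include <cassert>
    #include <chrono>

    int main()
    {
        using namespace std::chrono_literals;

        // BAD: two now() calls. On a slow CI host more than 100ms can
        // elapse between them, giving the "second" entry the later stamp:
        //   auto third  = std::chrono::steady_clock::now() + 200ms;
        //   auto second = std::chrono::steady_clock::now() + 100ms;

        // GOOD: capture the timestamp once; the relative order then holds
        // by construction, regardless of elapsed real time.
        auto now    = std::chrono::steady_clock::now();
        auto second = now + 100ms;
        auto third  = now + 200ms;
        assert(second < third);
        return 0;
    }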
We define a specialization of LLSDParam<const char*> to support passing an
LLSD object to a const char* function parameter. Needless to remark, passing
object.asString().c_str() would be Bad: destroying the temporary std::string
returned by asString() would immediately invalidate the pointer returned by
its c_str(). But when you pass LLSDParam<const char*>(object) as the
parameter, that specialization itself stores the std::string so the c_str()
pointer remains valid as long as the LLSDParam object does.
Then there's LLSDParam<LLSD>, used when we don't have the parameter type
available to select the LLSDParam specialization. LLSDParam<LLSD> defines a
templated conversion operator T() that constructs an LLSDParam<T> to provide
the actual parameter value. So far, so good.
The trouble was with the implementation of LLSDParam<LLSD>: it constructed a
_temporary_ LLSDParam<T>, implicitly called its operator T() and immediately
destroyed it. Destroying LLSDParam<const char*> destroyed its stored string,
thus invalidating the c_str() pointer before the target function was entered.
Instead, make LLSDParam<LLSD>::operator T() capture each LLSDParam<T> it
constructs, extending its lifespan to the lifespan of the LLSDParam<LLSD>
instance. For this, derive each LLSDParam specialization from LLSDParamBase, a
trivial base class that simply establishes the virtual destructor. We can then
capture any specialization as a pointer to LLSDParamBase.
Also restore LazyEventAPI tests on Mac.
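
A greatly simplified sketch of the lifetime fix, with hypothetical names
standing in for the real templated LLSDParam machinery: the generic adapter
keeps every converter it builds alive, so a const char* backed by a
converter's stored std::string stays valid while the adapter exists.

    #include <memory>
    #include <string>
    #include <vector>

    // analog of LLSDParamBase: exists only to provide the virtual dtor
    struct ParamBase { virtual ~ParamBase() = default; };

    // analog of LLSDParam<const char*>: stores the std::string so the
    // pointer from c_str() stays valid as long as this object lives
    struct CharPtrParam: public ParamBase
    {
        std::string mValue;
        CharPtrParam(std::string v): mValue(std::move(v)) {}
        operator const char*() const { return mValue.c_str(); }
    };

    // analog of LLSDParam<LLSD>
    class AnyParam: public ParamBase
    {
        std::string mSource;  // stand-in for the stored LLSD value
        // converters captured here live as long as the AnyParam itself
        mutable std::vector<std::unique_ptr<ParamBase>> mKept;
    public:
        AnyParam(std::string source): mSource(std::move(source)) {}
        operator const char*() const
        {
            // The broken code converted via a *temporary* converter whose
            // destruction invalidated the pointer. Capture it instead.
            auto p = std::make_unique<CharPtrParam>(mSource);
            const char* result = *p;
            mKept.push_back(std::move(p));
            return result;
        }
    };

With this, passing an AnyParam temporary to a function taking const char*
keeps the backing string alive until the end of the full-expression
containing the call, which covers the callee's entire execution.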
They do work fine on clang... unblocking the rest of the team during diagnosis.