For the time being we're still compiling for production with C++03. Although
assigning an initializer list to a vector is valid in C++11, clang rejects it
in C++03 mode.
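For illustration only (not the viewer's actual code; the values are
arbitrary), the rejected C++11 form and a C++03-compatible equivalent look
like this:

    #include <vector>

    // Rejected by clang under -std=c++03:
    //     std::vector<int> ids = {2, 3, 5};

    // C++03-compatible equivalent: construct from an array range.
    static const int id_array[] = {2, 3, 5};
    std::vector<int> ids(id_array,
                         id_array + sizeof(id_array) / sizeof(id_array[0]));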
LLInstanceTracker<T> performs validation in ~LLInstanceTracker(). Normally
validation failure logs an error and terminates the program, which is fine. In
the test executable, though, we want validation failure to throw an exception
instead so we can catch it and continue testing other failure conditions. But
since destructors in C++11 are implicitly noexcept(true), that exception never
made it out of ~LLInstanceTracker(): it crashed the test program instead.
Declaring ~LLInstanceTracker() noexcept(false) solves that, allowing the test
program to catch the exception and continue.
However, if we unconditionally declare that, then every destructor anywhere in
the inheritance hierarchy for any LLInstanceTracker subclass must also be
noexcept(false)! That's way too pervasive, especially for functionality we
only need (or want) in a specific test executable.
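A minimal sketch of the destructor issue (class and member names here are
illustrative, not the viewer's actual code):

    #include <stdexcept>

    struct Tracked
    {
        bool valid;

        // Destructors are implicitly noexcept(true) in C++11, so without this
        // annotation the throw below would call std::terminate() instead of
        // reaching a catch clause in the test program.
        ~Tracked() noexcept(false)
        {
            if (!valid)
            {
                throw std::runtime_error("validation failed in destructor");
            }
        }
    };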
Instead, make the CMake macros LL_ADD_PROJECT_UNIT_TESTS() and
LL_ADD_INTEGRATION_TEST() -- with which we define all viewer build-time tests
-- define two new command-line macros: LL_TEST=testname and LL_TEST_testname.
That way, preprocessor logic in a header file can detect whether it's being
compiled for production code or for a test executable.
(While we're at it, encapsulate an ugly repetitive pattern in a new
GET_OPT_SOURCE_FILE_PROPERTY() CMake macro. The built-in
GET_SOURCE_FILE_PROPERTY() sets the target variable to "NOTFOUND" -- rather
than to an empty string -- if the specified property wasn't set. Every call to
GET_SOURCE_FILE_PROPERTY() in LL_ADD_PROJECT_UNIT_TESTS() was therefore
followed by a test for NOTFOUND and an assignment to "". Wrap all that in a
macro whose 'unset' value is "".)
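Roughly, and only as a sketch (the committed macros may differ in detail),
the wrapper and the per-test definitions look like:

    # Same signature as GET_SOURCE_FILE_PROPERTY(), but with "" rather than
    # NOTFOUND as the 'unset' value.
    MACRO(GET_OPT_SOURCE_FILE_PROPERTY var filename property)
      GET_SOURCE_FILE_PROPERTY(${var} "${filename}" "${property}")
      IF("${${var}}" MATCHES NOTFOUND)
        SET(${var} "")
      ENDIF()
    ENDMACRO()

    # Per-test command-line macros; the variable names here are illustrative.
    SET_SOURCE_FILES_PROPERTIES(${test_sources} PROPERTIES
      COMPILE_DEFINITIONS "LL_TEST=${testname};LL_TEST_${testname}")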
Now llinstancetracker.h can detect when we're building the LLInstanceTracker
unit test executable, and *only then* declare ~LLInstanceTracker() as
noexcept(false). We #define LLINSTANCETRACKER_DTOR_NOEXCEPT to expand to
either nothing or noexcept(false), also detecting clang in C++11 mode. (It all works
fine without noexcept(false) until we turn on C++11 mode.)
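The shape of that preprocessor logic, sketched (the exact condition in
llinstancetracker.h is not reproduced here; LL_TEST_llinstancetracker stands
in for the LL_TEST_testname macro for this particular test):

    #if defined(LL_TEST_llinstancetracker) && defined(__clang__) && __cplusplus >= 201103L
    // Building the LLInstanceTracker unit test with clang in C++11 mode:
    // let the destructor's validation exception reach the test program.
    #define LLINSTANCETRACKER_DTOR_NOEXCEPT noexcept(false)
    #else
    // Production code (or pre-C++11): default destructor semantics.
    #define LLINSTANCETRACKER_DTOR_NOEXCEPT
    #endif

    // ...
    // ~LLInstanceTracker() LLINSTANCETRACKER_DTOR_NOEXCEPT;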
We also use that macro for the StatBase class in lltrace.h. Turns out some of
the infrastructure headers required for tests in general, including the
LLInstanceTracker test, use LLInstanceTracker. Fortunately that appears to be
the only other class we must annotate this way for the LLInstanceTracker tests.
Expand the way we set C++ flags in CMake to call out each build type explicitly.
Approved-by: nat_linden <nat@lindenlab.com>
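The idea, sketched with the standard CMAKE_CXX_FLAGS_<CONFIG> variables (the
actual flag values used by the viewer build are not shown here; these are
placeholders):

    set(CMAKE_CXX_FLAGS_DEBUG          "${CMAKE_CXX_FLAGS_DEBUG} -O0 -g")
    set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} -O2 -g")
    set(CMAKE_CXX_FLAGS_RELEASE        "${CMAKE_CXX_FLAGS_RELEASE} -O3")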
The LLMemory method getCurrentRSS() is defined to return the "resident set
size," but in fact on Windows it returns the WorkingSetSize -- and that's
actually what callers want from it: the total memory consumed by the
application for statistics purposes. It's not really clear what users gain by
knowing how much of that is resident in real memory, versus the total
consumption. So despite the comments and the method name itself, on Mac make
it return the virtual size consumed.
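A hedged sketch of what the Mac side might look like using Mach's task_info()
(function name and error handling are illustrative, not the viewer's actual
implementation):

    #include <mach/mach.h>
    #include <stdint.h>

    uint64_t currentMemoryConsumed()
    {
        mach_task_basic_info_data_t info;
        mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
        if (task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                      (task_info_t)&info, &count) != KERN_SUCCESS)
        {
            return 0;
        }
        // Report the total virtual size rather than the resident set size.
        return info.virtual_size;
    }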
This is only transitional, until we integrate the Viewer Management Process
(soon now).
to suppress fatal warnings in Visual Studio.
Whatever we were trying to do with LLSharedLibs.cmake hasn't worked on the Mac
for a long time, and trying to fix it only digs up more problems. Skip it:
we've already worked around it.
Update the media_plugins_example CMakeLists.txt to eliminate some CMake
warnings about non-existent dependencies.
changes for 2.6
Evidently the Mac implementation of LLMemory::getCurrentRSS() goes back to
OS X 10.3, because there was a helpful comment of the form:
------
The API used here is not capable of dealing with 64-bit memory sizes, but is
available before 10.4.
Once we start requiring 10.4, we can use the updated API, which looks like
this:
[new current implementation]
Of course, this doesn't gain us anything unless we start building the viewer
as a 64-bit executable, since that's the only way for our memory allocation to
exceed 2^32.
------
Hey, guess what, we're building 64-bit viewers now!
Thank you, whoever thoughtfully noted that, both for calling out the issue and
sparing us the research. (The comment goes back to Subversion days, so hg
blame shows only the merge-to-release changeset.)
There were two distinct LLMemory methods getCurrentRSS() and
getWorkingSetSize(). It was pointless to have both: on Windows they were
completely redundant; on other platforms getWorkingSetSize() always returned
0. (Amusingly, though the two Windows implementations made exactly the same
GetProcessMemoryInfo() call and used exactly the same logic, their code was
written differently -- as though the second was implemented without awareness
of the first, even though they were adjacent in the source file.)
One of the actual MAINT-6996 problems was due to the fact that
getWorkingSetSize() returned U32, whereas getCurrentRSS() returns U64. In other
words, getWorkingSetSize() was both useless *and* wrong. Remove it, and change
its one call to getCurrentRSS() instead.
The other culprit was that in several places, the 64-bit WorkingSetSize
returned by the Windows GetProcessMemoryInfo() call (and by getCurrentRSS())
was explicitly cast to a 32-bit data type. That works only when the value is
explicitly or implicitly (via LLUnits type conversion) scaled down to
kilobytes or megabytes. When the size in bytes is desired, use 64-bit types instead.
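To illustrate the truncation (a sketch with an invented helper name, not the
viewer's code):

    #include <windows.h>
    #include <psapi.h>

    SIZE_T workingSetBytes()
    {
        PROCESS_MEMORY_COUNTERS counters;
        if (GetProcessMemoryInfo(GetCurrentProcess(), &counters, sizeof(counters)))
        {
            // SIZE_T is 64 bits in a 64-bit build; keep the full value.
            return counters.WorkingSetSize;
        }
        return 0;
    }

    // The old, lossy pattern: silently wraps once usage exceeds 4GB.
    //     U32 truncated = (U32)workingSetBytes();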
In addition to the symptoms, LLMemory was overdue for a bit of cleanup.
There was a 16K block of memory called reserveMem, the comment on which read:
"reserve 16K for out of memory error handling." Yet *nothing* was ever done
with that block! If it were going to be useful, one would think someone would
at some point explicitly free the block. In fact there was a method
freeReserve(), apparently for just that purpose -- which was never called. As
things stood, reserveMem served only to *prevent* the viewer from ever using
that chunk of memory. Remove reserveMem and the unused freeReserve().
The only function of initClass() and cleanupClass() was to allocate and free
reserveMem. Remove initClass(), cleanupClass() and the LLCommon calls to them.
In a similar vein, there was an LLMemoryInfo::getPhysicalMemoryClamped()
method that returned U32Bytes. Its job was simply to return a size in bytes
that could fit into a U32 data type, returning U32_MAX if the 64-bit value
exceeded 4GB. Eliminate that; change all its calls to getPhysicalMemoryKB()
(which getPhysicalMemoryClamped() used internally anyway). We no longer care
about any platform that cannot handle 64-bit data types.
argument to pass by value
away (different region even) sometimes plays at maximum volume when entering a region or moving camera slightly.' - until we can understand how to make real mac_volume_catcher work
reduce the size of potential memory spikes
even) sometimes plays at maximum volume when entering a region or moving camera slightly.
of LLSafeHandle's referenced type. Using LLSingleton gives us a well-defined
time at which the "null instance" is deleted: LLSingletonBase::deleteAll().
LLWinDebug, though an LLSingleton, had (and required explicit calls to)
special init() and cleanup() methods. Kitty Barnett points out that the
cleanup() method was actually being called after LLSingletonBase::deleteAll(),
requiring resurrection of the deleted LLWinDebug, which sometimes led to
crashes. (Resurrecting deleted LLSingletons is always suspect.)
Change LLWinDebug::init() and cleanup() to the conventional initSingleton()
and cleanupSingleton() methods. This eliminates the need to make special
method calls at all. In particular, cleanupSingleton() will be called by the
existing LLSingletonBase::cleanupAll() call near viewer shutdown.
We retain the early LLWinDebug::instance() call, which implicitly initializes
the LLWinDebug instance, because evidently we want that initialized early. But
we no longer require a separate init() call.
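In outline (not the actual LLWinDebug declaration), the change looks like
this:

    // #include "llsingleton.h"   // viewer header providing LLSingleton<T>

    class LLWinDebug : public LLSingleton<LLWinDebug>
    {
    public:
        // Runs when instance() first constructs the singleton, replacing the
        // old explicit init() call.
        void initSingleton();
        // Runs from LLSingletonBase::cleanupAll() near viewer shutdown,
        // replacing the old explicit cleanup() call.
        void cleanupSingleton();
    };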
The comment indicates that calling LLSingletonBase::deleteAll() is optional
because the LLSingleton machinery implicitly calls that during final
static-object cleanup. That is no longer true.
The recent LLSingleton work added a hook that would run during the C++
runtime's final destruction of static objects. When the LAST LLSingleton in
any module was destroyed, its destructor would call
LLSingletonBase::deleteAll(). That mechanism was intended to permit an
application consuming LLSingletons to skip making an explicit deleteAll()
call, knowing that all instantiated LLSingleton instances would eventually be
cleaned up anyway.
However -- experience proves that kicking off deleteAll() processing during
the C++ runtime's final cleanup is too late. Too much has already been
destroyed. That call tends to cause more shutdown crashes than it resolves.
This commit deletes that whole mechanism. Going forward, if you want to clean
up LLSingleton instances, you must explicitly call
LLSingletonBase::deleteAll() during the application lifetime. If you don't,
LLSingleton instances will simply be leaked -- which might be okay,
considering the application is terminating anyway.
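So an application that does want its LLSingleton instances destroyed should
say so itself, for example (placement illustrative):

    // Somewhere late in the application's own shutdown sequence:
    LLSingletonBase::deleteAll();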