On Windows, when logged in with a non-ASCII username, every one of the three
documented APIs -- SHGetSpecialFolderPath(), SHGetFolderPath() and
SHGetKnownFolderPath() -- fails to retrieve a usable pathname when called from
an Alex Ivy viewer build: "The filename, directory name, or volume label
syntax is incorrect." We cannot account for the fact that the oldest of these
continues to work with the release viewer and from within a Python script
(though not, curiously, from a Python interactive session).
Empirically, with a non-ASCII username, the preset APPDATA and LOCALAPPDATA
environment variables are also useless, e.g. c:\Users\??????\AppData\Roaming
where those are, yup, actual question marks.
Empirically, the VMP is able to successfully call SHGetFolderPath() to
retrieve both AppData\Roaming and AppData\Local. Therefore, we make the VMP
set the APPDATA and LOCALAPPDATA environment variables to the UTF-8 encoded
correct pathnames. Instead of calling SHGetSomethingFolderPath() at all, make
LLDir_Win32 retrieve those environment variables.
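A minimal sketch of that direction (the helper name and fallback handling are
illustrative, not the actual LLDir_Win32 code): read the UTF-8 value the VMP
stored in the environment instead of calling any SHGet*FolderPath() variant.

    #include <cstdlib>
    #include <string>

    // Hypothetical helper: prefer the UTF-8 pathname the VMP placed in the
    // environment ("APPDATA" or "LOCALAPPDATA"), falling back to a
    // caller-supplied default if the variable is missing or empty.
    std::string getDirFromEnv(const char* varname, const std::string& fallback)
    {
        const char* value = std::getenv(varname);
        return (value && *value) ? std::string(value) : fallback;
    }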
Make LLFile::mkdir() treat "directory already exists" as a success case. Every
single call fell into one of two categories: either it didn't check success at
all, or it tested specially to exempt errno == EEXIST. Migrate that test into
mkdir(); eliminate it from call sites.
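For illustration, a POSIX-flavored sketch of that policy (LLFile::mkdir()
itself wraps the platform-specific call, so the real code differs):

    #include <cerrno>
    #include <string>
    #include <sys/stat.h>
    #include <sys/types.h>

    // "Directory already exists" counts as success, so callers no longer need
    // their own errno == EEXIST special case.
    int mkdir_ok_if_exists(const std::string& dirname, mode_t perms = 0700)
    {
        int rc = ::mkdir(dirname.c_str(), perms);
        if (rc < 0 && errno == EEXIST)
        {
            return 0;   // the directory is there, which is what the caller wanted
        }
        return rc;
    }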
Make LLDir::append() and add() convenience functions accept variadic
arguments. Replace add(add()...) constructs, as well as clumsy concatenations
of directory names and getDirDelimiter(), with simple variadic add() calls.
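A hedged sketch of the variadic shape described above (getDirDelimiter()
stands in for LLDir::getDirDelimiter(); the real add() is a member function
and may differ in detail):

    #include <string>
    #include <utility>

    // Stand-in for LLDir::getDirDelimiter(), assumed here for illustration.
    inline std::string getDirDelimiter() { return "/"; }   // "\\" on Windows

    // Base case: a single component is returned unchanged.
    inline std::string add(const std::string& path)
    {
        return path;
    }

    // Recursive case: join the first two components with one delimiter,
    // then recurse on the remainder.
    template <typename... Parts>
    std::string add(const std::string& first, const std::string& second,
                    Parts&&... rest)
    {
        return add(first + getDirDelimiter() + second,
                   std::forward<Parts>(rest)...);
    }

    // e.g. add("AppData", "Roaming", "SecondLife")
    // rather than add(add("AppData", "Roaming"), "SecondLife").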
codes from core.
in Windows 8 compatibility mode)
clipboard
folder.
The new behavior is that it will flush when either the pending batch has grown
to the specified size, or the time interval has expired.
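A hedged, hypothetical sketch of that rule (not the viewer's actual
implementation): the size trigger lives in post(); the interval trigger is an
external timer that calls flush() when it fires.

    #include <cstddef>
    #include <string>
    #include <vector>

    class BatchThrottleSketch
    {
    public:
        explicit BatchThrottleSketch(std::size_t size): mBatchSize(size) {}

        void post(const std::string& event)
        {
            mPending.push_back(event);
            if (mPending.size() >= mBatchSize)
            {
                flush();            // size trigger
            }
            // otherwise an interval timer (not shown) eventually calls flush()
        }

        void flush()
        {
            // forward mPending downstream (not shown), then start a new batch
            mPending.clear();
        }

    private:
        std::size_t mBatchSize;
        std::vector<std::string> mPending;
    };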
For some reason there wasn't an entry in indra/llcommon/CMakeLists.txt to run
the tests in indra/llcommon/tests/lleventfilter_test.cpp. It seems likely that
at some point it existed, since all previous tests built and ran successfully.
In any case, (re-)add lleventfilter_test.cpp to the set of llcommon tests.
Also alphabetize them to make it easier to find a particular test invocation.
Also add new tests for LLEventThrottle.
To support this, refactor the concrete LLEventThrottle class into
LLEventThrottleBase containing all the tricky logic, with pure virtual
methods for access to LLTimer and LLEventTimeout, and an LLEventThrottle
subclass containing the LLTimer and LLEventTimeout instances and corresponding
implementations of the new pure virtual methods.
That permits us to introduce TestEventThrottle, an alternate subclass with
dummy implementations of the methods related to LLTimer and LLEventTimeout. In
particular, we can explicitly advance simulated realtime to simulate
particular LLTimer and LLEventTimeout behaviors.
Finally, introduce Concat, a test LLEventPump listener class whose function is
to concatenate received string event data into a composite string so we can
readily test for particular sequences of events.
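A hedged sketch of that testing pattern (illustrative names, not the real
LLEventThrottleBase interface): the base class reaches the clock only through
virtuals, the test subclass substitutes simulated time, and a Concat-style
listener records the delivered sequence.

    #include <string>

    class ThrottleBaseSketch
    {
    public:
        virtual ~ThrottleBaseSketch() {}

    protected:
        // The production subclass forwards these to LLTimer / LLEventTimeout;
        // the test subclass substitutes a hand-advanced fake clock.
        virtual double timeNow() const = 0;
        virtual void alarmAfter(double seconds) = 0;
    };

    class TestThrottleSketch: public ThrottleBaseSketch
    {
    public:
        void advance(double seconds) { mNow += seconds; }   // simulated realtime

    protected:
        double timeNow() const override { return mNow; }
        void alarmAfter(double) override {}                 // no real timer in tests

    private:
        double mNow = 0.0;
    };

    // Concat-style listener: accumulate string event data so a test can assert
    // on the exact sequence of delivered events.
    struct ConcatSketch
    {
        std::string result;
        bool operator()(const std::string& event)
        {
            result += event;
            return false;   // don't consume the event, per LLEventPump convention
        }
    };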
Drake points out that the OS X 64-bit-capable memory-query APIs recommended in
comments by some long-ago maintainer are by now themselves obsolete. He
offered this patch to update us to current macOS memory APIs.
LLInstanceTracker<T> performs validation in ~LLInstanceTracker(). Normally
validation failure logs an error and terminates the program, which is fine. In
the test executable, though, we want validation failure to throw an exception
instead so we can catch it and continue testing other failure conditions. But
since destructors in C++11 are implicitly noexcept(true), that exception never
made it out of ~LLInstanceTracker(): it crashed the test program instead.
Declaring ~LLInstanceTracker() noexcept(false) solves that, allowing the test
program to catch the exception and continue.
However, if we unconditionally declare that, then every destructor anywhere in
the inheritance hierarchy for any LLInstanceTracker subclass must also be
noexcept(false)! That's way too pervasive, especially for functionality we
only need (or want) in a specific test executable.
Instead, make the CMake macros LL_ADD_PROJECT_UNIT_TESTS() and
LL_ADD_INTEGRATION_TEST() -- with which we define all viewer build-time tests
-- define two new command-line macros: LL_TEST=testname and LL_TEST_testname.
That way, preprocessor logic in a header file can detect whether it's being
compiled for production code or for a test executable.
(While at it, encapsulate in a new GET_OPT_SOURCE_FILE_PROPERTY() CMake macro
an ugly repetitive pattern. The builtin GET_SOURCE_FILE_PROPERTY() sets the
target variable to "NOTFOUND" -- rather than an empty string -- if the
specified property wasn't set. Every call to GET_SOURCE_FILE_PROPERTY() in
LL_ADD_PROJECT_UNIT_TESTS() was followed by a test for NOTFOUND and an
assignment to "". Wrap all that in a macro whose 'unset' value is "".)
Now llinstancetracker.h can detect when we're building the LLInstanceTracker
unit test executable, and *only then* declare ~LLInstanceTracker() as
noexcept(false). We #define LLINSTANCETRACKER_DTOR_NOEXCEPT to expand to
either nothing or noexcept(false), also detecting clang in C++11 mode. (It all works
fine without noexcept(false) until we turn on C++11 mode.)
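A hedged sketch of that detection (assuming the test is named
llinstancetracker, so the command line defines LL_TEST_llinstancetracker; the
real header's conditional, including its clang/C++11 check, differs in detail):

    // LL_TEST and LL_TEST_<testname> are defined on the compiler command line
    // by the CMake test macros.
    #if defined(LL_TEST_llinstancetracker)
        // test build: the destructor may throw so the test can catch
        // validation failures and keep going
        #define LLINSTANCETRACKER_DTOR_NOEXCEPT noexcept(false)
    #else
        // production build: expand to nothing, keeping the implicit
        // noexcept(true) destructor
        #define LLINSTANCETRACKER_DTOR_NOEXCEPT
    #endif

    class InstanceTrackerSketch
    {
    public:
        ~InstanceTrackerSketch() LLINSTANCETRACKER_DTOR_NOEXCEPT
        {
            // validation that may throw, but only in the test executable
        }
    };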
We also use that macro for the StatBase class in lltrace.h. Turns out some of
the infrastructure headers required for tests in general, including the
LLInstanceTracker test, use LLInstanceTracker. Fortunately that appears to be
the only other class we must annotate this way for the LLInstanceTracker tests.
The LLMemory method getCurrentRSS() is defined to return the "resident set
size," but in fact on Windows it returns the WorkingSetSize -- and that's
actually what callers want from it: the total memory consumed by the
application for statistics purposes. It's not really clear what users gain by
knowing how much of that is resident in real memory, versus the total
consumption. So despite the comments and the method name itself, on Mac
make it return the virtual size consumed.
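For reference, a hedged sketch of the 64-bit-capable Mach query involved (the
viewer's actual wrapper differs in detail): mach_task_basic_info reports both
resident_size and virtual_size as 64-bit values, and the Mac path described
above would report the latter.

    #include <mach/mach.h>

    #include <cstdint>

    // Returns this process's virtual size in bytes, or 0 on failure.
    std::uint64_t currentVirtualSize()
    {
        mach_task_basic_info info;
        mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
        kern_return_t result = task_info(mach_task_self(),
                                         MACH_TASK_BASIC_INFO,
                                         reinterpret_cast<task_info_t>(&info),
                                         &count);
        return (result == KERN_SUCCESS) ? info.virtual_size : 0;
    }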
to suppress fatal warnings in Visual Studio.
Evidently the Mac implementation of LLMemory::getCurrentRSS() goes back to
OS X 10.3, because there was a helpful comment of the form:
------
The API used here is not capable of dealing with 64-bit memory sizes, but is
available before 10.4.
Once we start requiring 10.4, we can use the updated API, which looks like
this:
[new current implementation]
Of course, this doesn't gain us anything unless we start building the viewer
as a 64-bit executable, since that's the only way for our memory allocation to
exceed 2^32.
------
Hey, guess what, we're building 64-bit viewers now!
Thank you, whoever thoughtfully noted that, both for calling out the issue and
sparing us the research. (The comment goes back to Subversion days, so hg
blame shows only the merge-to-release changeset.)
There were two distinct LLMemory methods getCurrentRSS() and
getWorkingSetSize(). It was pointless to have both: on Windows they were
completely redundant; on other platforms getWorkingSetSize() always returned
0. (Amusingly, though the Windows implementations both made exactly the same
GetProcessMemoryInfo() call and used exactly the same logic, the code was
different in the two -- as though the second was implemented without awareness
of the first, even though they were adjacent in the source file.)
One of the actual MAINT-6996 problems was due to the fact that
getWorkingSetSize() returned U32, where getCurrentRSS() returns U64. In other
words, getWorkingSetSize() was both useless *and* wrong. Remove it, and change
its one call site to use getCurrentRSS() instead.
The other culprit was that in several places, the 64-bit WorkingSetSize
returned by the Windows GetProcessMemoryInfo() call (and by getCurrentRSS())
was explicitly cast to a 32-bit data type. That works only when explicitly or
implicitly (using LLUnits type conversion) scaling the value to kilobytes or
megabytes. When the size in bytes is desired, use 64-bit types instead.
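A hedged sketch of keeping that value 64-bit end to end (the real
LLMemory::getCurrentRSS() differs in detail):

    #include <windows.h>
    #include <psapi.h>

    #include <cstdint>

    // WorkingSetSize is a SIZE_T (64 bits in a 64-bit build); return it
    // without ever narrowing to a 32-bit type.
    std::uint64_t currentWorkingSetBytes()
    {
        PROCESS_MEMORY_COUNTERS counters;
        if (GetProcessMemoryInfo(GetCurrentProcess(), &counters, sizeof(counters)))
        {
            return static_cast<std::uint64_t>(counters.WorkingSetSize);
        }
        return 0;
    }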
Beyond fixing those immediate symptoms, LLMemory was overdue for a bit of cleanup.
There was a 16K block of memory called reserveMem, the comment on which read:
"reserve 16K for out of memory error handling." Yet *nothing* was ever done
with that block! If it were going to be useful, one would think someone would
at some point explicitly free the block. In fact there was a method
freeReserve(), apparently for just that purpose -- which was never called. As
things stood, reserveMem served only to *prevent* the viewer from ever using
that chunk of memory. Remove reserveMem and the unused freeReserve().
The only function of initClass() and cleanupClass() was to allocate and free
reserveMem. Remove initClass(), cleanupClass() and the LLCommon calls to them.
In a similar vein, there was an LLMemoryInfo::getPhysicalMemoryClamped()
method that returned U32Bytes. Its job was simply to return a size in bytes
that could fit into a U32 data type, returning U32_MAX if the 64-bit value
exceeded 4GB. Eliminate that; change all its calls to getPhysicalMemoryKB()
(which getPhysicalMemoryClamped() used internally anyway). We no longer care
about any platform that cannot handle 64-bit data types.
of LLSafeHandle's referenced type. Using LLSingleton gives us a well-defined
time at which the "null instance" is deleted: LLSingletonBase::deleteAll().
The recent LLSingleton work added a hook that would run during the C++
runtime's final destruction of static objects. When the LAST LLSingleton in
any module was destroyed, its destructor would call
LLSingletonBase::deleteAll(). That mechanism was intended to permit an
application consuming LLSingletons to skip making an explicit deleteAll()
call, knowing that all instantiated LLSingleton instances would eventually be
cleaned up anyway.
However -- experience proves that kicking off deleteAll() processing during
the C++ runtime's final cleanup is too late. Too much has already been
destroyed. That call tends to cause more shutdown crashes than it resolves.
This commit deletes that whole mechanism. Going forward, if you want to clean
up LLSingleton instances, you must explicitly call
LLSingletonBase::deleteAll() during the application lifetime. If you don't,
LLSingleton instances will simply be leaked -- which might be okay,
considering the application is terminating anyway.
here: https://bitbucket.org/rider_linden/doduo-viewer/commits/4f39500cb46e879dbb732e6547cc66f3ba39959e?at=default
chat history
shouldn't be removed
The previous LLSafeHandle<T> implementation declares a static data member of
the template class but provides no (generic) definition, relying on particular
specializations to provide the definition. The data member is a function
pointer, which is called in one of the methods to produce a pointer to a
"null" T instance: that is, a dummy instance to be dereferenced in case the
wrapped T* is null.
Xcode 8.3's version of clang is bothered by the call, in a generic method,
through this (usually) uninitialized pointer. It happens that the only
existing specializations of LLSafeHandle both provide definitions. I don't know
whether that's formally valid C++03 or not; but I agree with the compiler: I
don't like it.
Instead of declaring a public static function pointer which each
specialization is required to define, add a protected static method to the
template class. This protected static method simply returns a pointer to a
function-static T instance. This is functionally similar to a static
LLPointer<T> set on demand (as in the two specializations), including lazy
instantiation.
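A minimal sketch of that shape (simplified; the real LLSafeHandle has more
machinery):

    template <class Type>
    class SafeHandleSketch
    {
    public:
        const Type* operator->() const
        {
            // Dereference the wrapped pointer if set, otherwise the shared dummy.
            return mPointer ? mPointer : nullInstance();
        }

    protected:
        static Type* nullInstance()
        {
            // Lazily constructed on first use; one "null" instance per
            // specialization, with no per-specialization definition required.
            static Type sNullInstance;
            return &sNullInstance;
        }

    private:
        Type* mPointer = nullptr;
    };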
Unlike the previous implementation, this approach prohibits a given
specialization from customizing the "null" instance function. Although there
exist reasonable ways to support that (e.g. a related traits template), I
decided not to complicate the LLSafeHandle implementation to make it more
generally useful. I don't really approve of LLSafeHandle, and don't want to
see it proliferate. It's not clear that unconditionally dereferencing
LLSafeHandle<T> is in any way better than conditionally dereferencing
LLPointer<T>. It doesn't even skip the runtime conditional test; it simply
obscures it. (There exist hints in the code that at one time it might have
immediately replaced any wrapped null pointer value with the pointer to the
"null" instance, obviating the test at dereference time, but this is not the
current functionality. Perhaps it was only ever wishful thinking.)
Remove the corresponding functions and static LLPointers from the two classes
that use LLSafeHandle.
When a 'family' code isn't recognized, for instance, report the unrecognized
code itself. That should at least give us the clue we need to look up and add
an entry for the relevant family code.
plus LLEventBatch::getSize(), setSize()
plus LLEventThrottle::getPostCount() and getDelay().
The interesting thing about LLEventThrottle::setInterval() and
LLEventBatch::setSize() is that either might cause an immediate flush().
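A hedged, hypothetical illustration of why (not the real LLEventBatch code):
if the pending batch already meets the new, smaller limit, it goes out
immediately rather than waiting for the next post().

    #include <cstddef>
    #include <vector>

    struct BatchSizeSketch
    {
        std::size_t mBatchSize = 5;
        std::vector<int> mPending;

        void flush()
        {
            // forward the pending events downstream (not shown), then reset
            mPending.clear();
        }

        void setSize(std::size_t newSize)
        {
            mBatchSize = newSize;
            if (mPending.size() >= mBatchSize)
            {
                flush();            // the new limit is already satisfied
            }
        }
    };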