|
If in fact we've managed to fix the APR bug writing to a Windows named pipe,
it should no longer be necessary to try to work around it by testing with a
much smaller data volume on Windows!
|
|
Ideally we'd love to be able to nail the underlying bug, but log output
suggests it may actually go all the way down to the OS level. To move forward,
try to bypass it.
|
|
We want to write a robust test that consistently works. On Windows, that
appears to require constraining the max message size. I, the coder, could try
submitting test runs of varying sizes to TC (TeamCity) until I found a size that works...
but that could take quite a while. If I were clever, I might even use a manual
binary search. But computers are good at binary searching; there are even
prepackaged algorithms in the STL. If I were cleverer still, I could make the
test program itself search for a size that works.
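A minimal sketch of that idea, assuming a hypothetical test_size() predicate
that reports whether one test run at a given size passes:

```cpp
#include <cstddef>

bool test_size(std::size_t size);   // hypothetical: true if a run at 'size' passes

// Binary-search for the largest size that still passes, given the
// invariant that test_size(lo) passes and test_size(hi) fails.
std::size_t find_working_size(std::size_t lo, std::size_t hi)
{
    while (hi - lo > 1)
    {
        std::size_t mid = lo + (hi - lo) / 2;
        if (test_size(mid))
            lo = mid;               // mid works: search upward
        else
            hi = mid;               // mid fails: search downward
    }
    return lo;                      // largest size observed to pass
}
```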
|
|
|
|
A static LLProcessPtr variable won't be destroyed until after procedural code
has shut down APR. The trouble is that LLProcess's destructor unregisters
itself from APR -- and, for an autokill LLProcess, attempts to kill the child
process. All that is ill-advised after APR shutdown.
Disable use of apr_pool_note_subprocess() mechanism. This should be another
viable way of coping with static autokill LLProcessPtr variables: when the
designated APR pool is cleaned up, APR promises to kill the child process. But
whether it's an APR bug or a calling error, the present (now disabled) call in
LLProcess results in OUR process, the viewer, getting SIGTERM when it asks to
clean up the global APR pool.
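The hazard can be sketched like this, with hypothetical names; this is an
illustration of the destruction order described above, not the actual viewer
code:

```cpp
// Hypothetical example of the static-destruction-order hazard.
static LLProcessPtr sChild;             // destroyed only during static destruction

void launch()
{
    LLProcess::Params params;
    params.executable = "some_tool";    // an autokill child by default
    sChild = LLProcess::create(params);
}

// At exit, procedural code calls apr_terminate() first; only afterward does
// ~LLProcess() run for sChild, attempting to unregister from APR and kill
// the child -- exactly the operations that are ill-advised by then.
```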
|
|
Apparently, at least on Mac, there are circumstances in which the very-large-
message test can take several times longer than normal, yet still complete
successfully. This is always the problem with timeouts: does timeout
expiration mean that the code in question is actually hung, or would it
complete if given a bit longer?
If the very-large-message test fails, retry a few times with smaller sizes to
try to find a size at which the test runs reliably. The default size, ca. 1MB, is
intended to be substantially larger than anything we'll encounter in the wild.
Is that "unreasonably" large? Is there a "reasonable" size at which the test
could consistently pass? Is that "reasonable" size still larger than what we
expect to encounter in practice? Need more information, hence this code.
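A hedged sketch of that retry strategy; run_test() and the halving schedule are
illustrative, not the actual test code:

```cpp
#include <cstddef>
#include <iostream>

bool run_test(std::size_t size);    // hypothetical: one full test pass at 'size'

void try_sizes(std::size_t size, int retries)
{
    for (int attempt = 0; attempt <= retries; ++attempt, size /= 2)
    {
        if (run_test(size))
        {
            // record the size that worked: this is the information we're after
            std::cout << "passed at size " << size << std::endl;
            return;
        }
        std::cout << "failed at size " << size << ", retrying smaller" << std::endl;
    }
}
```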
|
|
Otherwise, a stuck child process could potentially hang the test, and thus the
whole viewer build.
|
|
|
|
|
|
|
|
It seems that on Windows, even 32K is too big: one in three load-test runs
fails with a duplicated block. Empirically, reducing it to 4K makes it much
more stable -- at least we can run successfully 100 consecutive times, which
is a step in the right direction.
|
|
It seems that under certain circumstances, write logic was duplicating a chunk
of the data being streamed down our pipe. But as this condition is only
triggered by a very large data stream, eyeballing that stream is tedious. Add
code to compare the raw received data with the expected stream, reporting
where and how they first differ.
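The comparison can be as simple as std::mismatch; a minimal sketch of the idea,
not the actual test code:

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// Report where and how two data streams first differ, if they do.
void report_first_diff(const std::string& expected, const std::string& actual)
{
    std::size_t limit = std::min(expected.size(), actual.size());
    std::pair<std::string::const_iterator, std::string::const_iterator> mm =
        std::mismatch(expected.begin(), expected.begin() + limit, actual.begin());
    std::size_t offset = mm.first - expected.begin();
    if (offset == limit && expected.size() == actual.size())
        return;                         // streams are identical
    std::cout << "streams first differ at offset " << offset;
    if (offset < limit)
        std::cout << ": expected '" << *mm.first << "', got '" << *mm.second << "'";
    else
        std::cout << " (one stream is a prefix of the other)";
    std::cout << std::endl;
}
```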
|
|
While debugging a mysterious problem on Windows, one potential failure mode to
rule out was the possibility that streaming std::ostringstream <<
LLSDNotationStreamer(large_LLSD) might itself cause trouble -- even before
attempting to write to the LLProcess::WritePipe. The debugging code validated
that the correct length is being reported, and that deserializing the
resulting buffer produces equivalent LLSD. This code verified correct
operation, and so has been disabled, as it's expensive at runtime.
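The disabled validation amounted to something like the following sketch;
large_LLSD and expected_length stand in for test fixtures, and the
LLSDSerialize::fromNotation() and llsd_equals() calls are assumptions about the
deserialization and comparison entry points:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hedged reconstruction, not the verbatim (now disabled) code.
void validate_round_trip(const LLSD& large_LLSD, std::size_t expected_length)
{
    std::ostringstream out;
    out << LLSDNotationStreamer(large_LLSD);
    std::string buffer(out.str());
    // the correct length is being reported...
    assert(buffer.length() == expected_length);
    // ...and deserializing the buffer produces equivalent LLSD
    std::istringstream in(buffer);
    LLSD round_trip;
    LLSDSerialize::fromNotation(round_trip, in, buffer.length());
    assert(llsd_equals(large_LLSD, round_trip));
}
```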
|
|
Set LOGFAIL= one of ALL, DEBUG, INFO, WARN, ERROR, NONE. A passing test will
run silently, as now; but a failing test will replay log output at the
specified level or higher.
While at it, support LOGTEST environment variable, same values. This is like
setting --debug (or -d), but allows specifying an arbitrary level -- and,
unlike --debug, can be set for a TeamCity build config without modifying any
scripts or code.
Publish LLError::decodeLevel(std::string), previously private to llerror.cpp.
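For instance, a test harness might consume these variables along the following
lines; the surrounding logic is illustrative, and only LLError::decodeLevel()
comes from this change:

```cpp
#include <cstdlib>

void apply_log_env()
{
    // LOGTEST: like --debug, but any level can be named
    if (const char* logtest = std::getenv("LOGTEST"))
    {
        LLError::ELevel level = LLError::decodeLevel(logtest);
        // ... configure this run to emit log output at 'level' or higher ...
    }
    // LOGFAIL would be decoded the same way, but its replay happens
    // only when a test fails
}
```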
|
|
That lets us reliably declare the operator<<() free function inline, which
permits multiple translation units in the same executable to #include
"wrapllerrs.h".
|
|
While we're accumulating the 'length:' prefix, the present socket-based logic
reads 20 characters, then reads 'length' more, then discards any excess (in
case the whole 'length:data' packet ends up being less than 20 characters).
That's probably a bug: whatever characters follow that packet, however short
it may be, are likely the 'length:' prefix of the next packet. We presumably
only get away with it because we never send packets that short.
Earlier llleap_test.cpp plugin logic still read 20 characters, then, if there
were any left after the present packet, cached them as the start of the next
packet. This is probably more correct, but complicated. Easier just to read
individual characters until we've seen 'length:', then try for exactly the
specified length over however many reads that requires.
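In outline, the simplified reading strategy looks like this (the real
plugin-side code is Python; this is the same idea sketched in C++):

```cpp
#include <istream>
#include <string>

std::string read_packet(std::istream& in)
{
    // read single characters until the ':' that ends the decimal length prefix
    std::string prefix;
    char c;
    while (in.get(c) && c != ':')
        prefix += c;
    std::size_t length = std::stoul(prefix);

    // then try for exactly 'length' bytes, over however many reads it takes
    std::string data(length, '\0');
    std::size_t got = 0;
    while (got < length)
    {
        in.read(&data[got], length - got);
        std::streamsize n = in.gcount();
        if (n <= 0)
            break;                  // EOF or error before a full packet
        got += n;
    }
    data.resize(got);
    return data;
}
```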
|
|
In load testing, we have observed intermittent failures on Windows in which
LLSDNotationStreamer into std::ostringstream seems to bump into a hard limit
of 1048590 bytes. ostringstream reports that much buffered data and returns
that much -- even though, on examination, the notation-serialized stream is
incomplete at that point. It's our intention to load-test LLLeap and
LLProcess, not the local iostream implementation; we hope that this kind of
data volume is comfortably greater than actual usage. Back off the
load-testing max size a bit.
|
|
On Windows we ran into trouble trying to write a biggish (~1 MB) buffer of
data to the child process's stdin pipe with a single apr_file_write() call.
The child actually received corrupted data -- suggesting a possible bug in
either APR or Windows pipes; the same test driving the same logic worked fine
on Mac and Linux. Empirically, iterating over chunks of the buffered data is
more robust.
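A sketch of the chunked-write workaround; the chunk size is illustrative (as
the load-testing commit above notes, it was eventually reduced to 4K on
Windows):

```cpp
#include <algorithm>
#include <apr_file_io.h>

apr_status_t write_chunked(apr_file_t* pipe, const char* data, apr_size_t len)
{
    static const apr_size_t CHUNK = 32 * 1024;
    while (len)
    {
        // apr_file_write()'s in/out parameter reports how much was written
        apr_size_t written = std::min(len, CHUNK);
        apr_status_t err = apr_file_write(pipe, data, &written);
        if (err != APR_SUCCESS)
            return err;
        data += written;
        len  -= written;
    }
    return APR_SUCCESS;
}
```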
|
|
New llleap_test.cpp load testing turned up Windows issue in which plugin
process received a corrupt packet, producing LLSDParseError. Add code to dump
the bad packet in that case -- though if LLSDParseError is willing to state the
offset of the problem, dump only the region around that offset rather than ALL
of the packet.
Quiet an MSVC warning about a little internal base class needing a virtual
destructor.
|
|
|
|
|
|
The code was using LLProcess::ReadPipe::get_istream().read(), but that's much
uglier, as it requires constructing a char* buffer and so on.
|
|
|
|
These tests rule out corruption as we cross buffer boundaries in OS pipes and
the LLLeap implementation itself.
|
|
Previous "read N of M bytes" wording implied that the child had M bytes to
send, but we only read N of them. In reality we have no idea how many bytes
the child is trying to send, only how many the OS is willing to deliver at
this moment. To me, "filled N of M bytes" more clearly implies that M is the
buffer size.
|
|
It only took a few examples of trying to wrangle notation LLSD as string data
to illustrate how clumsy that is. I'd forgotten that a couple other TUT tests
already invoke Python code that depends on the llsd module. The trick is to
recognize that at least as of now, there's still an obsolete version of the
module in the viewer's own source tree. Python code is careful to try
importing llbase.llsd before indra.base.llsd, so that if/when we finally do
clear indra/lib/python from the viewer repo, we need only require that llbase
be installed on every build machine.
|
|
Migrate logic from specific test to common reader module, notably parsing the
wakeup message containing the reply-pump name.
Make test script post to Result struct to communicate success/failure to C++
TUT test, rather than just writing to log.
Make test script insensitive to key order in serialized LLSD::Map.
|
|
|
|
|
|
Instantiating LLLeap with a command to execute a particular child process sets
up machinery to speak the LLSD Event API Plugin protocol with that child process.
LLLeap is an LLInstanceTracker subclass, so the code that instantiates it need
not hold the pointer. LLLeap monitors child-process termination and deletes
itself when done.
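A hypothetical fire-and-forget usage sketch; the factory name and parameter
type are assumptions, not taken from this message:

```cpp
LLProcess::Params params;
params.executable = "python";
params.args.add("some_plugin.py");
// No need to keep the returned pointer: LLLeap tracks its own instances
// and deletes itself when the child process terminates.
LLLeap::create(params);
```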
|
|
Of course, given the way the log machinery works, it's really "everything at
that level or stronger."
|
|
All known callers were using ensure(! withMessage(...).empty()). Centralize
that logic. Make failure message report the string being sought and the log
messages in which it wasn't found.
In case someone does want to permit the search to fail, add an optional
'required' parameter, default true.
Leverage new functionality in llprocess_test.cpp.
|
|
We were using a uniform macro to report the APR function and its C++ parameter
expressions. But specifically for apr_proc_create() failure, better to report
the command we're attempting to execute.
|
|
Giving more unit tests the ability to capture and examine log output is
generally useful. Renaming the class just makes it less ambiguous: what's a
TestRecorder? Something that records tests?
|
|
We can't count on every child process reading everything we try to write to
it. And if the child terminates with WritePipe data still pending, unless we
explicitly suppress it, Posix will hit us with SIGPIPE. That would terminate
the calling process, boom. "Ignoring" it means APR gets the correct errno,
passes it back to us, we log it, etc.
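The classic Posix idiom for this, as a minimal sketch; the actual change lives
inside LLProcess and need not be a bare signal() call:

```cpp
#include <csignal>

void suppress_sigpipe()
{
    // With SIGPIPE ignored, writing to a dead pipe fails with EPIPE, which
    // APR passes back as an error we can log -- instead of the default
    // action of terminating our whole process.
    std::signal(SIGPIPE, SIG_IGN);
}
```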
|
|
|
|
Previously one might get process-terminated notification but still have to
wait for the child process's final data to arrive on one or more ReadPipes.
That required complex consumer timing logic to handle incomplete pending
ReadPipe data, e.g. a partial last line with no terminating newline. New code
guarantees that by the time LLProcess sends process-terminated notification,
all pending pipe data will have been buffered in ReadPipes.
Document LLProcess::ReadPipe::getPump() notification event; add "eof" key.
Add LLProcess::ReadPipe::getline() and read() convenience methods.
Add static LLProcess::getline() and basename() convenience methods, publishing
logic already present elsewhere.
Use ReadPipe::getline() and read() in unit tests.
Add unit test for "eof" event on ReadPipe::getPump().
Add unit test verifying that final data have been buffered by the time the
termination notification arrives.
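A hedged usage sketch of the new convenience methods; getline() and read() are
named in this message, while getReadPipe(), STDOUT and size() are assumptions
about the surrounding API:

```cpp
// In a listener for the process-terminated notification: by this point,
// all of the child's final output is guaranteed to be buffered.
LLProcess::ReadPipe& out(childp->getReadPipe(LLProcess::STDOUT));
std::string line = out.getline();           // next line, however it arrived
std::string rest = out.read(out.size());    // whatever remains, e.g. a partial last line
```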
|
|
|
|
|
|
We want to verify the sequence:
1. LLInstanceTracker constructor adds instance to underlying container;
2. subclass constructor throws an exception;
3. LLInstanceTracker destructor removes instance from underlying container.
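A sketch of the scenario, with a hypothetical Doomed subclass:

```cpp
#include <stdexcept>

struct Doomed: public LLInstanceTracker<Doomed>
{
    Doomed()
    {
        // the LLInstanceTracker constructor has already added this instance;
        // throwing here must still run the LLInstanceTracker destructor,
        // which removes it again
        throw std::runtime_error("simulated subclass-constructor failure");
    }
};

// The test can verify that the instance count is unchanged after:
// try { new Doomed; } catch (const std::runtime_error&) {}
```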
|
|
|
|
For the T* specialization (no string or other key), the original
getInstance() method simply returned the passed-in T* value. It was defined,
as the comments noted, for completeness of the analogy with the keyed
LLInstanceTracker specialization.
It turns out, though, that getInstance(T*) can still be useful to ask whether
the T* you have in hand still references a valid T instance. Support that
usage.
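A hedged sketch of that usage; MyTracked and doSomething() stand in for some
LLInstanceTracker<T> subclass and its methods:

```cpp
void poke(MyTracked* cached)
{
    // getInstance() returns non-NULL only if 'cached' still references
    // a live MyTracked instance
    if (MyTracked::getInstance(cached))
    {
        cached->doSomething();
    }
    // else: the instance has been destroyed; don't dereference
}
```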
|
|
This is an important differentiator between getTokens() and the present
LLCommandLineParser::parseCommandLineString() logic: you cannot currently
--set SomeVar to an empty string value because parseCommandLineString()
discards empty strings.
|
|
run_build_test.py already has the capability to set environment variables, and
we may as well direct it to set PYTHON to the running Python interpreter. That
completely eliminates one level of process wrapper.
|
|
|
|
We didn't have any tokenizer suitable for scanning something like a bash
command line. We do have a couple hacks, e.g. LLExternalEditor::tokenize() and
LLCommandLineParser::parseCommandLineString(). Both try to work around
boost::tokenizer limitations; but existing boost::tokenizer support just
doesn't address this case. Neither of the above is available as a general
scanner anyway, and parseCommandLineString() fails outright when passed "".
New getTokens() also distinguishes between "drop delimiters" (e.g. space,
return, newline) to be discarded from the token stream, versus "keep
delimiters" (e.g. "+-*/") to be returned as tokens in their own right.
There's an overload that honors escapes and a more efficient one that doesn't;
each has a convenience overload that returns the scanned string vector rather
than requiring a separate declaration.
Tweak and comment older getTokens() implementation.
Add unit tests for both old and new getTokens() implementations.
Break out StringVec and std::ostream << StringVec from
indra/llcommon/tests/listener.h to StringVec.h: that's coming in handy for a
number of different TUT test sources.
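A hedged usage sketch of the new overloads; the parameter order is inferred
from this description rather than quoted from the header:

```cpp
// StringVec is the std::vector<std::string> typedef from StringVec.h
StringVec tokens =
    LLStringUtil::getTokens("a+b  done",    // input
                            " \r\n",        // drop delims: discarded
                            "+-*/");        // keep delims: tokens in their own right
// tokens == { "a", "+", "b", "done" }
```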
|
|
|
|
|
|
Removed a couple more unnecessary string copies arising from unfortunate LLSD
behavior.
Reviewed with Simon; post-review from Richard.
|
|
Clarify wording in some of the doc comments; be a bit more explicit about some
of the parameter fields.
Make some query methods 'const'.
Change default LLProcess::ReadPipe::getLimit() value to 0: don't post any
incoming data with the notification event unless the caller requests it. But do
post the pertinent FILESLOT in case the caller reuses the same listener for both
stdout and stderr.
Use more idiomatic, readable syntax for accessing LLProcess::Params data.
|