LF, and trim trailing whitespace as needed
Ensure inventory skeleton loading doesn't block the message system from processing packets.
# Conflicts:
# indra/llcommon/llsdserialize.h
(cherry picked from commit eb0516b9940f200b32349d611f38f1ccee48005d)
Pass llssize instead of S32.
# Conflicts:
# indra/cmake/CMakeLists.txt
# indra/llcommon/llsdserialize.cpp
# indra/llcommon/llsdserialize.h
# indra/llcommon/tests/llleap_test.cpp
# indra/newview/llfilepicker.h
# indra/newview/llfilepicker_mac.h
# indra/newview/llfilepicker_mac.mm
# indra/newview/skins/default/xui/en/strings.xml
Since LLSDSerialize::SIZE_UNLIMITED is negative, passing that through unsigned
size_t parameters could result in peculiar behavior.
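
A minimal sketch of the pitfall, assuming only that SIZE_UNLIMITED is some
negative sentinel (the value and types here are illustrative):

    #include <cstddef>
    #include <iostream>

    int main()
    {
        const long long SIZE_UNLIMITED = -1;    // assumed negative sentinel
        std::size_t max_bytes = SIZE_UNLIMITED; // implicit signed-to-unsigned conversion
        // On a 64-bit platform this prints 18446744073709551615: a
        // "no limit" sentinel silently becomes an enormous positive limit.
        std::cout << max_bytes << '\n';
    }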
When sending multiple LEAP packets in the same file (for testing convenience),
use a length prefix instead of delimiting with '\n'. Now that we allow a
serialization format that includes an LLSD format header (e.g.
"<?llsd/binary?>"), '\n' is part of the packet content. In fact, testing
binary LLSD means we can't pick any delimiter guaranteed not to appear in the
packet content. Using a length prefix also lets us pass a specific max_bytes
to the subject C++ LLSD parser. (See the framing sketch below.)

Make llleap_test.cpp use the new freestanding Python llsd package when
available.

Update the Python-side LEAP protocol code to work directly with the encoded
bytes stream, avoiding bytes<->str encoding and decoding, which breaks binary
LLSD.

Make LLSDSerialize::deserialize() recognize the LLSD format header
case-insensitively: Python emits and checks for "llsd/binary", while
LLSDSerialize emits and checks for "LLSD/Binary". Once a header is recognized,
pass the corrected max_bytes to the specific parser.

Make deserialize() more careful about the no-header case: preserve '\n' in
content. Introduce (disabled) debugging code, because this case is a little
tricky to recreate.

Revert the LLLeap child process stdout parser from LLSDSerialize::deserialize()
to the specific LLSDNotationParser(), as at present: the generic parser fails
one of LLLeap's integration tests for reasons that remain mysterious.
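
A sketch of the length-prefixed framing idea, under the assumption that each
packet is written as a decimal byte count, a delimiter, then exactly that many
payload bytes (the exact layout used by llleap_test.cpp may differ):

    #include <iostream>
    #include <string>

    // Write one packet preceded by its byte count, so the payload may
    // contain any byte at all, including '\n'.
    void write_packet(std::ostream& out, const std::string& packet)
    {
        out << packet.size() << ':' << packet;
    }

    // Read one packet: parse the count, skip the ':', then read exactly
    // that many bytes. The count doubles as max_bytes for the parser.
    std::string read_packet(std::istream& in)
    {
        std::size_t length;
        char colon;
        in >> length >> colon;
        std::string packet(length, '\0');
        in.read(&packet[0], static_cast<std::streamsize>(length));
        return packet;
    }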
Absent a header from LLSDSerialize::serialize(), make deserialize()
distinguish between XML and notation by recognizing an initial '<'.
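
A minimal sketch of that heuristic, assuming the stream supports peeking
(illustrative, not the viewer's exact code):

    #include <istream>

    // With no "<?llsd ...?>" header present, an initial '<' implies XML;
    // anything else is treated as notation.
    bool looks_like_xml(std::istream& istr)
    {
        return istr.peek() == '<'; // examine the next byte without consuming it
    }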
LLSDSerialize::serialize() emits a header string, e.g. "<? llsd/notation ?>"
for notation format. Until now, LLSDSerialize::deserialize() has required that
header to properly decode the input stream. But none of LLSDBinaryFormatter,
LLSDXMLFormatter or LLSDNotationFormatter emit that header themselves, nor do
any of the Python llsd.format_binary(), format_xml() or format_notation()
functions. Until now, you could not use LLSDSerialize::deserialize() to parse
an arbitrary-format LLSD stream serialized by anything but
LLSDSerialize::serialize().

Change LLSDSerialize::deserialize() so that if no header is recognized,
instead of failing, it attempts to parse as notation. Add tests to exercise
this case.

The tricky part about this processing is that deserialize() necessarily reads
some number of bytes from the input stream first, to try to recognize the
header. If it fails to do so, it must prepend the bytes it has already read to
the rest of the input stream, since they're probably the beginning of the
serialized data.

To support this use case, introduce cat_streambuf, a std::streambuf subclass
that (virtually) concatenates other std::streambuf instances. When read via a
std::istream, the sequence of underlying std::streambufs appears to the
consumer as a single continuous stream.
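
A compact sketch of the idea behind cat_streambuf (a simplified stand-in, not
the viewer's actual implementation):

    #include <cstddef>
    #include <streambuf>
    #include <vector>

    // Presents a sequence of other streambufs as one continuous read-only
    // stream. The caller retains ownership of the underlying sources.
    class cat_streambuf: public std::streambuf
    {
    public:
        cat_streambuf(std::vector<std::streambuf*> sources):
            mSources(std::move(sources))
        {}

    protected:
        // Called when the get area is exhausted: refill from the current
        // source, advancing to the next source as each one runs dry.
        int underflow() override
        {
            while (mIndex < mSources.size())
            {
                std::streamsize got =
                    mSources[mIndex]->sgetn(mBuffer, sizeof(mBuffer));
                if (got > 0)
                {
                    setg(mBuffer, mBuffer, mBuffer + got);
                    return traits_type::to_int_type(*gptr());
                }
                ++mIndex; // current source exhausted
            }
            return traits_type::eof();
        }

    private:
        std::vector<std::streambuf*> mSources;
        std::size_t mIndex{0};
        char mBuffer[4096];
    };

Wrapping a cat_streambuf in a std::istream lets deserialize() logically push
the already-consumed header bytes back in front of the remaining input.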
Introduce LLSD template constructors and assignment operators to disambiguate
construction or assignment from any integer type to Integer, likewise any
floating point type to Real. Use the new narrow() function to validate
conversions (see the sketch below).

For LLSD method parameters converted from LLSD::Integer to size_t, where the
method previously checked for a negative argument, make it now check for
size_t converted from negative: in other words, more than S32_MAX. The risk of
having a parameter forced from negative to unsigned exceeds the risk of a
valid length or index over that max.

In lltracerecording.cpp's PeriodicRecording, now that mCurPeriod and
mNumRecordedPeriods are size_t instead of S32, defend against subtracting 1
from 0.

Use narrow() to validate newly-introduced narrowing conversions.

Make llclamp() return the type of the raw input value, even if the types of
the boundary values differ.

std::ostream::tellp() no longer returns a value we can directly report as a
number. Cast to U64.
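
A hedged sketch of a checked narrowing conversion in the spirit of narrow()
(the name matches the commit; the body here is illustrative):

    #include <stdexcept>

    // Convert between arithmetic types, throwing if the value does not
    // survive the round trip (lost bits or a sign flip).
    template <typename TARGET, typename SOURCE>
    TARGET narrow(SOURCE value)
    {
        auto result = static_cast<TARGET>(value);
        if (static_cast<SOURCE>(result) != value ||
            (result < TARGET{}) != (value < SOURCE{}))
        {
            throw std::range_error("narrow() lost information");
        }
        return result;
    }

For example, narrow<std::size_t>(-1) would throw rather than silently yielding
SIZE_MAX.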
It's a little distressing how often we have historically coded S32 or U32 to
pass a length or index.
There are more such assumptions in other viewer subdirectories, but this is a
start.
for compatibility with Python llbase.llsd.parse().
The Python parse() currently requires uppercase hex digits for b16"hex"
coding; lowercase hex digits cause it to raise LLSDParseError.
LLSDNotationFormatter (also LLSDNotationStreamer that uses it, plus
operator<<(std::ostream&, const LLSD&) that uses LLSDNotationStreamer) is most
useful for displaying LLSD to a human, e.g. for logging. Having the default
dump raw binary bytes into the log file is not only suboptimal, it can
truncate the output if one of those bytes is '\0'. (This is a problem with the
logging subsystem, but that's a story for another day.)

Use OPTIONS_PRETTY_BINARY wherever there is a default
LLSDFormatter::EFormatterOptions argument.

Also, allow setting LLSDFormatter subclass boolalpha(), realFormat() and
format(options) using optional constructor arguments. Naturally, each subclass
that supports this must accept and forward these constructor arguments to its
LLSDFormatter base class constructor.

Fix a couple bugs in LLSDNotationFormatter::format_impl() for an LLSD::Binary
value with OPTIONS_PRETTY_BINARY:
- The code unconditionally emitted a b(len) type prefix followed by either raw
  binary or hex, depending on the option flag. OPTIONS_PRETTY_BINARY caused it
  to emit "0x" before the hex representation of the data. This is wrong in
  that it can't be read back by either the C++ or the Python LLSD parser.
  Correct OPTIONS_PRETTY_BINARY formatting consists of b16"hex digits" rather
  than b(len)"raw bytes".
- Although the code did set hex mode, it didn't set either the field width or
  the fill character, so that a byte value less than 16 would emit a single
  digit rather than two.

Instead of having one LLSDFormatter::format() method with an optional options
argument, declare two overloads. The format() overload without options passes
the mOptions data member to the overload accepting options.

Refactor the LLSDFormatter family, hoisting the recursive format_impl() method
(accepting level) to a pure virtual method at LLSDFormatter base-class level.
Most subclasses therefore need not override either base-class format() method,
only format_impl(). In fact the short format() overload isn't even virtual.

Consistently use the LLSDFormatter::EFormatterOptions enum as the options
parameter wherever such options are accepted.
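
An illustrative sketch of the corrected OPTIONS_PRETTY_BINARY output (not the
viewer's exact format_impl() code), showing why the field width and fill
character matter, and using uppercase hex for compatibility with the Python
llbase.llsd.parse() mentioned earlier:

    #include <iomanip>
    #include <ostream>
    #include <vector>

    // Emit binary data as b16"..." with exactly two uppercase hex digits
    // per byte.
    void format_pretty_binary(std::ostream& ostr,
                              const std::vector<unsigned char>& data)
    {
        ostr << "b16\"" << std::hex << std::uppercase << std::setfill('0');
        for (unsigned char byte : data)
        {
            // Without setw(2), a byte value below 16 would emit only a
            // single digit; setw() must be restated for every item.
            ostr << std::setw(2) << unsigned(byte);
        }
        ostr << '"' << std::dec << std::nouppercase;
    }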
# Conflicts:
# indra/newview/pipeline.cpp
Replace llinfos, lldebugs, etc. with the new LL_INFOS(), LL_DEBUGS(), etc.