Age | Commit message | Author |
|
|
LF, and trim trailing whitespaces as needed
|
|
fallback fonts.
With emoji support, a new font was added which not only provides emojis but
also fancy colorful replacements for UTF-8 characters that used to be
supported by our fallback (monochrome) fonts: this causes discrepancies and
unwanted changes in scripted objects' menus (e.g. an empty circle or square
may render as a black, filled one, or a heart may render red instead of
white), not to mention the larger font size used by the emoji characters...
This patch restores the appearance of such menus/dialogs/UI elements
containing UTF-8 characters that *are* supported by the usual fallback fonts
(fonts which may also vary from one viewer to another, and from one OS to
another), so that everything keeps working and rendering as it always has,
while not impairing the use of the new colorful emojis.
This second proposal ensures that (see the sketch below):
- "Genuine" emojis (in the 0x1f000-0x1ffff range) will *always* be rendered
using the new emoji font (this solves, for example, the monochrome "yellow
faces" issue seen with some characters in my first proposal).
- Special UTF-8 characters (in the 0x2000-0x32FF range) which have been used
by scripters so far will render as they used to, using the monochrome fallback
fonts (this repairs scripted dialog menus).
- Remaining special characters that do not have a corresponding glyph in the
monochrome font, but do have one in the emoji font, will be rendered with the
latter.
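A minimal sketch of that selection rule, assuming a hypothetical pickFont()
helper; the range bounds come from this commit message, everything else is
illustrative:

    enum class FontChoice { Monochrome, Emoji };

    // Hypothetical helper: decide which font renders a given UTF-32 code
    // point, following the three rules listed above.
    FontChoice pickFont(char32_t cp, bool monochromeHasGlyph)
    {
        if (cp >= 0x1F000 && cp <= 0x1FFFF)
            return FontChoice::Emoji;      // genuine emojis: always the emoji font
        if (cp >= 0x2000 && cp <= 0x32FF && monochromeHasGlyph)
            return FontChoice::Monochrome; // legacy scripted-dialog symbols keep their old look
        return monochromeHasGlyph ? FontChoice::Monochrome
                                  : FontChoice::Emoji;  // otherwise use whichever font has a glyph
    }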
It also has the nice side effect of removing the dependency on the ICU4C
library.
Note, however, that the recent commit:
https://github.com/secondlife/viewer/commit/326055ba82c22fedde186c6a56bafd4fe87e613a
will need to be reverted to allow this patch to actually fix scripted dialogs.
Also, some cleanup might be needed in skins/default/xui/*/emoji_characters.xml
to remove from it the special UTF-8 characters that will no longer be rendered
in fanciful colors, but with the monochrome font glyphs instead.
|
|
# Conflicts:
# indra/llcommon/llstring.cpp
# indra/llcommon/llstring.h
|
|
a preset...' option of the 'Preferences' floater
|
|
It's a little distressing how often we have historically coded S32 or U32 to
pass a length or index.
There are more such assumptions in other viewer subdirectories, but this is a
start.
|
|
That function wants to pass a code_page to ll_convert_string_to_wide(), but
the code_page parameter was being mistaken for the length parameter, leading
to access violations.
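A hedged illustration of the mix-up; the call shape below is an assumption,
not the actual call site, and only the length-taking overload is shown:

    #include <string>

    // Assumed overload shape; the real declarations live in llstring.h.
    std::wstring ll_convert_string_to_wide(const std::string& s, size_t len);

    void example()
    {
        std::string utf8_text = "short";
        // Intended as a code page (CP_UTF8 == 65001), but it binds to the
        // len parameter, so the converter reads far past the 5-byte buffer.
        std::wstring ws = ll_convert_string_to_wide(utf8_text, 65001);
    }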
|
|
Use new ll_convert_forms() macro in llstring.h to declare, for each
wide-string conversion function of interest, four overloads. The real one, the
nontrivial one, is (const char*, size_t len), implemented in llstring.cpp. Then
(const string&, size_t len), (const char*) and (const string&) are each
trivially implemented with an inline call to (const char*, size_t len).
Notably, we change all S32 len parameters to size_t. Using S32 is old skool.
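Roughly, the four forms generated for one such function might look like this
(a sketch, not the actual macro expansion; utf8str_to_wstring() is used as the
example, and LLWString is the viewer's wide-string type from llstring.h):

    // Nontrivial form, implemented in llstring.cpp: does the real work.
    LLWString utf8str_to_wstring(const char* utf8str, size_t len);

    // The three trivial forms forward to it inline.
    inline LLWString utf8str_to_wstring(const std::string& utf8str, size_t len)
    {
        return utf8str_to_wstring(utf8str.c_str(), len);
    }

    inline LLWString utf8str_to_wstring(const char* utf8str)
    {
        return utf8str_to_wstring(utf8str, std::strlen(utf8str));
    }

    inline LLWString utf8str_to_wstring(const std::string& utf8str)
    {
        return utf8str_to_wstring(utf8str.c_str(), utf8str.length());
    }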
Tweak each nontrivial implementation in llstring.cpp to accept (const char*,
size_t len) instead of (const string&) with or without explicit length.
Eliminate from llstring.cpp trivial overloads (deriving length from either a
const char* or from a string), since those are now inline in the header.
Of course three of those overloads will be unified once we enable C++17 and
change each relevant parameter to std::string_view, but we're not yet there.
Meanwhile, this suite of overloads minimizes, to the best of our ability, new
string allocations solely for parameter passing. And use of a macro means we
need only change the macro once we get std::string_view.
We take this step because some use cases require (const char*), some require
(const string&, size_t len), others (const char*, size_t len) ... We were
missing some key overloads, and had to work around them by instantiating new
string objects (necessitating both allocation and character copying) just to
pass the desired parameter. Using the macro ensures this consistent set of
overloads for every wide-string conversion function.
Additionally, knowing that the ugly-name overloads exist, ll_convert_forms()
implicitly defines corresponding ll_convert<TARGET>() overloads.
Streamline declarations of utf16str_to_wstring(), wstring_to_utf16str(),
utf8str_to_utf16str(), utf16str_to_utf8str(), utf8str_to_wstring(),
wstring_to_utf8str(), ll_convert_wide_to_wstring() and
ll_convert_wstring_to_wide() using ll_convert_forms().
Use corresponding new ll_convert_cp_forms() macro to declare consistent
overloads for conversion functions accepting an optional unsigned int
code_page parameter. We used to delegate to the .cpp file the implementation
of each overload accepting code_page so llstring.h need not include the
Windows header defining the CP_UTF8 default; this is more simply accomplished
by introducing a small ll_wstring_default_code_page() function to retrieve it
from the .cpp file. That lets us specify the code_page parameter as optional,
using that function as its default value.
Use ll_convert_cp_forms() to streamline declarations of
ll_convert_wide_to_string() and ll_convert_string_to_wide().
Introduce real implementations of ll_convert_wide_to_wstring() and
ll_convert_wstring_to_wide(). The previous implementations merely copied
individual characters, which is wrong: when we convert UTF16LE to UTF32, we
can and should fold multi-character UTF16LE encodings to the corresponding
single UTF32 character. The real implementations leverage our awareness that
both llutf16string and Windows std::wstring (either variant) use UTF16LE
encoding, so we can reuse the corresponding llutf16string conversions.
Introduce generic ll_convert_length() function, specialized as either
std::strlen() or std::wcslen() depending on parameter type. (Even if
std::wcslen() is derived from classic C, why doesn't the C++ standard library
define a std::strlen(const wchar_t*) overload to call it?)
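A sketch of what such a helper could look like (the exact spelling is an
assumption; the commit describes a generic function specialized on the
character type):

    #include <cstring>  // std::strlen
    #include <cwchar>   // std::wcslen

    template <typename CHARTYPE>
    size_t ll_convert_length(const CHARTYPE* zstr);

    template <>
    inline size_t ll_convert_length(const char* zstr)    { return std::strlen(zstr); }

    template <>
    inline size_t ll_convert_length(const wchar_t* zstr) { return std::wcslen(zstr); }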
Fix ll_convert_alias()'s ll_convert_impl specialization's operator() to accept
boost::call_traits::param_type, so we can pass (e.g.) const std::wstring& but
also const wchar_t* instead of const wchar_t*&.
|
|
LLMemTracked, introduce alignas, hook most/all remaining allocs, disable synchronous occlusion, and convert frequently accessed LLSingletons to LLSimpleton
|
|
https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/snprintf-snprintf-snprintf-l-snwprintf-snwprintf-l?view=vs-2017
"Beginning with the UCRT in Visual Studio 2015 and Windows 10, snprintf is no
longer identical to _snprintf. The snprintf function behavior is now C99
standard compliant."
In other words, from VS 2015 onward snprintf() promises to nul-terminate the
buffer even in the overflow case, which is what snprintf_hack::snprintf() was
for.
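For example (standard C99/UCRT behavior, not viewer-specific code):

    #include <cstdio>

    void demo()
    {
        char buf[4];
        // C99-compliant snprintf: output is truncated to fit, but still
        // nul-terminated, and the return value is the length that would
        // have been written without truncation.
        int n = std::snprintf(buf, sizeof(buf), "%s", "overflow");
        (void)n;  // buf == "ove", n == 8; the old _snprintf() left the buffer
                  // unterminated in this case, which snprintf_hack worked around.
    }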
This removal was motivated by ambiguous-call errors generated by VS 2017 for
library snprintf() vs. snprintf_hack::snprintf().
|
|
Twemoji as the viewer's fallback for all emoji blocks
|
|
Move Windows-flavored llstring_getoptenv() to Windows-specific section of
llstring.cpp.
A boost::optional return type must be stated explicitly in order to initialize
it with a value.
On platforms where llwchar is the same as wchar_t, LLWString is the same as
std::wstring, so ll_convert specializations for std::wstring would duplicate
those for LLWString. Defend against that.
The compilers we use don't like 'return condition ? { expr } : {}', in which we
hope to construct and return an instance of the declared return type without
having to restate the type. It works to use an explicit 'if' statement.
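A minimal sketch of both points, assuming a simplified llstring_getoptenv()
and a hypothetical lookup() standing in for the real environment query:

    #include <string>
    #include <boost/optional.hpp>

    bool lookup(const std::string& key, std::wstring& value);  // hypothetical

    boost::optional<std::wstring> llstring_getoptenv(const std::string& key)
    {
        std::wstring value;
        if (lookup(key, value))
        {
            // per the note above, the optional's type is stated explicitly
            // when initializing it with a value
            return boost::optional<std::wstring>(value);
        }
        // 'return found ? { value } : {};' doesn't compile, so we use the
        // explicit 'if' above and return an empty optional here
        return boost::none;
    }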
|
|
Add ll_convert<TO, FROM> template, used as (e.g.):
ll_convert<std::string>(value_of_some_other_string_type);
There is no general template implementation -- the template exists solely to
provide generic aliases for a bewildering family of llstring.h string-
conversion functions with highly specific names. There is, however, a generic
implementation for the degenerate case where FROM and TO are identical.
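A hedged usage sketch; the available conversions are whatever llstring.h
declares, and these two directions are just examples:

    #include "llstring.h"  // declares ll_convert<> and the underlying conversions

    std::string utf8("some UTF-8 text");
    LLWString   wide = ll_convert<LLWString>(utf8);    // presumably dispatches to utf8str_to_wstring()
    std::string back = ll_convert<std::string>(wide);  // presumably dispatches to wstring_to_utf8str()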
Add ll_convert<> specialization aliases for most of the string-conversion
functions declared in llstring.h, including the Windows-specific ones
involving llutf16string and std::wstring.
Add a mini-lecture in llstring.h about appropriate use of string types on
Windows.
Add LL_WCHAR_T_NATIVE llpreprocessor.h macro so we can detect whether to
provide separate conversions for llutf16string and std::wstring, or whether
those would collide because the types are identical.
Add inline ll_convert_wide_to_string(const std::wstring&) overloads so caller
isn't required to call arg.c_str(), which naturally permits an ll_convert
alias.
Add ll_convert_wide_to_wstring(), ll_convert_wstring_to_wide() as placeholders
for converting between Windows std::wstring and Linden LLWString, with
corresponding ll_convert aliases. We don't yet have library code to perform
such conversions officially; for now, just copy characters.
Add LLStringUtil::getenv(key) and getoptenv(key) functions. The latter returns
boost::optional<string_type> in case the caller needs to detect absence of a
given environment variable rather than simply accepting a default value.
Naturally getenv(), which accepts a default, is implemented using getoptenv().
getoptenv(), in turn, is implemented using an underlying llstring_getoptenv().
On Windows, llstring_getoptenv() returns boost::optional<std::wstring> (based
on GetEnvironmentVariableW()), whereas elsewhere, llstring_getoptenv() returns
boost::optional<std::string> (based on classic Posix getenv()).
The beauty of generic ll_convert is that the portable LLStringUtilBase<T>::
getoptenv() template can call the platform-specific llstring_getoptenv() and
transparently perform whatever conversion is necessary to return the desired
string_type.
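A usage sketch, assuming the signatures implied above (the environment
variable name is made up, and passing the default as a second getenv()
argument is an assumption):

    #include "llstring.h"
    #include <boost/optional.hpp>

    void configure()
    {
        // getoptenv(): the caller can tell "unset" apart from "set"
        boost::optional<std::string> dir = LLStringUtil::getoptenv("VIEWER_DATA_DIR");
        if (!dir)
        {
            // absence is detectable here rather than silently replaced by a default
        }

        // getenv(): simply accept a default when the variable is absent
        std::string data_dir = LLStringUtil::getenv("VIEWER_DATA_DIR", "/tmp");
    }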
Add windows_message<T>(error) template, with an overload that implicitly calls
GetLastError(). We provide a single concrete windows_message<std::wstring>()
implementation because that's what we get from Windows FormatMessageW() --
everything else is a generic conversion to the desired target string type.
This obviates llprocess.cpp's previous WindowsErrorString() implementation --
reimplement using windows_message<std::string>().
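A sketch of the described usage (Windows-only; the explicit error code in the
second call is just illustrative):

    // Implicitly calls GetLastError():
    std::string why = windows_message<std::string>();

    // Or pass an explicit error code; std::wstring is the one concrete
    // implementation (via FormatMessageW()), everything else is a generic
    // conversion from it to the requested string type.
    std::wstring wwhy = windows_message<std::wstring>(ERROR_FILE_NOT_FOUND);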
|
|
Instead of returning a wchar_t* and requiring the caller to delete it later,
return a std::basic_string<wchar_t> that's self-cleaning. If the caller wants
a wchar_t*, s/he can call c_str() on the returned string.
Default the code_page parameter to CP_UTF8, since we try to be really
consistent about using UTF-8 encoding for all our internal std::strings.
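A before/after sketch. Judging from the surrounding entries this appears to
concern ll_convert_string_to_wide(), but the function name is an assumption:

    // Before: the caller owned the returned buffer and had to delete it later.
    // wchar_t* ll_convert_string_to_wide(const std::string& in, unsigned int code_page);

    // After: self-cleaning return value; code_page defaults to CP_UTF8.
    std::basic_string<wchar_t> ll_convert_string_to_wide(const std::string& in,
                                                         unsigned int code_page = CP_UTF8);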
|
|
especially for animated objects.
|
|
wide char paths; on other platforms they are now just typedefs to the std classes
|
|
respectively
|
|
respectively
|
|
test
|
|
another attempt to move mem stat into base class
|
|
replace llinfos, lldebugs, etc with new LL_INFOS(), LL_DEBUGS(), etc.
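For example (the logging macro pattern used throughout the viewer):

    void example()
    {
        // old style:  llinfos << "initializing audio" << llendl;
        // new style:
        LL_INFOS() << "initializing audio" << LL_ENDL;
    }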
|
|
cleaning up build
moved most includes of windows.h to llwin32headers.h to disable min/max macros, etc
streamlined Time class and consolidated functionality in BlockTimer class
llfasttimer is no longer included via llstring.h, so had to add it manually in several places
|
|
illegal length of buffer too.
|
|
We didn't have any tokenizer suitable for scanning something like a bash
command line. We do have a couple of hacks, e.g. LLExternalEditor::tokenize()
and LLCommandLineParser::parseCommandLineString(); both try to work around
boost::tokenizer limitations, but existing boost::tokenizer support just
doesn't address this case. Neither of the above is available as a general
scanner anyway, and parseCommandLineString() fails outright when passed "".
New getTokens() also distinguishes between "drop delimiters" (e.g. space,
return, newline) to be discarded from the token stream, versus "keep
delimiters" (e.g. "+-*/") to be returned as tokens in their own right.
There's an overload that honors escapes and a more efficient one that doesn't;
each has a convenience overload that returns the scanned string vector rather
than requiring a separate declaration.
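A hedged usage sketch of the convenience form (the parameter order and exact
signature are assumptions):

    #include "llstring.h"
    #include <string>
    #include <vector>

    // " \t\r\n" are drop delimiters (discarded); "+-*/" are keep delimiters
    // (each one returned as a token in its own right).
    std::vector<std::string> tokens =
        LLStringUtil::getTokens("a+b - c", " \t\r\n", "+-*/");
    // tokens: { "a", "+", "b", "-", "c" }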
Tweak and comment older getTokens() implementation.
Add unit tests for both old and new getTokens() implementations.
Break out StringVec and std::ostream << StringVec from
indra/llcommon/tests/listener.h to StringVec.h: that's coming in handy for a
number of different TUT test sources.
|
|
string replacement, e.g. [[FOO]]
|
|
/Users/Aimee/Documents/Work/Linden-Lab/Development/viewer/convert/viewer-identity-evolution
|
|
The crash was caused by erroneously fetching a month name from the vector of
week-day names in LLStringUtil::formatDatetime().
That code was introduced in June, so although it didn't work properly, it
didn't crash, because June's index still fit inside the seven-element week-day
vector. But once the number of the current month exceeded the number of days
in a week (which happened in August, the 8th month), the code started reading
the 8th element of a vector holding only 7, and that caused the crash. It
reproduced only with the Japanese locale, because only there was the offending
code path used (see STORM-177 for details). This changeset seems to fix
STORM-177 too.
- Use the vector of month names where it should have been used.
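An illustration of the defect (variable names hypothetical; the real code
lives in LLStringUtil::formatDatetime()):

    #include <string>
    #include <vector>

    void buggy(int month)  // 1-based month number, e.g. 8 for August
    {
        std::vector<std::string> week_days(7);     // only 7 entries
        std::vector<std::string> month_names(12);

        // Bug: indexing the week-day vector with a month number. It happens
        // to stay in bounds while month <= 7, then reads out of bounds.
        std::string name = week_days[month - 1];

        // Fix: use the month-name vector instead.
        std::string fixed = month_names[month - 1];
    }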
|
|