Age | Commit message | Author |
|
At this point, inspect(landmarks) just returns "<userdata 1>".
|
We may well want to leverage that API for additional queries that could
return large datasets.
|
result_view(key_length, fetch) returns a virtual view of a potentially-large
C++ result set. Given the result-set key and its total length (bundled in the
key_length argument), plus a function fetch(key, start) => (slice, adjusted
start), the read-only table returned by result_view() manages indexed access
and table iteration over the entire result set, fetching a slice at a time as
required.
Change LLInventory to use result_view() instead of only ever fetching the
first slice of a result set.
TODO: This depends on the viewer's "LLInventory" listener returning the total
result set length as well as the result set key. It does not yet return the
length.
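A minimal Luau sketch of the shape such a view might take (illustrative only,
not the viewer's actual module; it assumes key_length bundles {key, total
length} and the fetch(key, start) => (slice, adjusted start) contract described
above, with 1-relative indexing):

```lua
-- Sketch only: assumes key_length = { key, total_length } and
-- fetch(key, start) -> (slice, adjusted_start) as described above.
local function result_view(key_length, fetch)
    local key, length = key_length[1], key_length[2]
    local slice, slice_start = {}, 1
    -- return the i-th entry, fetching a fresh slice only when i falls
    -- outside the slice we already hold
    local function fetch_at(i)
        if i < slice_start or i >= slice_start + #slice then
            slice, slice_start = fetch(key, i)
        end
        return slice[i - slice_start + 1]
    end
    return setmetatable({}, {
        __index = function(_, i)
            if type(i) == 'number' and i >= 1 and i <= length then
                return fetch_at(i)
            end
        end,
        __len = function() return length end,
        -- Luau generalized iteration: walk the whole result set a slice at a time
        __iter = function()
            local i = 0
            return function()
                i = i + 1
                if i <= length then
                    return i, fetch_at(i)
                end
            end
        end,
    })
end
```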
|
|
Introduce abstract base class InvResultSet, derived from LLIntTracker so each
instance has a unique int key. InvResultSet supports virtual getLength() and
getSlice() operations. getSlice() returns an LLSD array limited to
MAX_ITEM_LIMIT result set entries. It permits retrieving a "slice" of the
contained result set starting at an arbitrary index. A sequence of getSlice()
calls can eventually retrieve a whole result set.
InvResultSet has subclasses CatResultSet containing cat_array_t, and
ItemResultSet containing item_array_t. Each implements a virtual method that
produces an LLSD map from a single array item.
Make LLInventoryListener::getItemsInfo(), getDirectDescendants() and
collectDescendantsIf() instantiate heap CatResultSet and ItemResultSet objects
containing the resultant LLPointer arrays, and return their int keys for
categories and items.
Add LLInventoryListener::getSlice() and closeResult() methods that accept the
int keys of result sets. getSlice() returns the requested LLSD array to its
caller, while closeResult() is fire-and-forget.
Because bulk data transfer is now performed by getSlice() rather than by
collectDescendantsIf(), change the latter's "limit" default to unlimited. It's
fine for the C++ code to collect an arbitrary number of LLPointer array
entries, since getSlice() limits the overhead of any single retrieval.
Spell "descendants" correctly, unlike the "descendents" spelling embedded in
the rest of the viewer... sigh. Make the Lua module provide both spellings.
Make MAX_ITEM_LIMIT a U32 instead of F32.
In LLInventory.lua, store int result set keys from 'getItemsInfo',
'getDirectDescendants' and 'collectDescendantsIf' in a table with a close()
function. The close() function invokes 'closeResult' with the bound int keys.
Give that table an __index() metamethod that recognizes only 'categories' and
'items' keys: anything else returns nil. For either of the recognized keys,
call 'getSlice' with the corresponding result set key to retrieve (the initial
slice of) the actual result set. Cache that result. Lazy retrieval means that
if the caller only cares about categories, or only about items, the other
result set need never be retrieved at all.
This is a first step: like the previous code, it still retrieves only up to
the first 100 result set entries. But the C++ code now supports retrieval of
additional slices, so extending result set retrieval is mostly Lua work.
Finally, wrap the table-with-metamethod in an LL.setdtor() proxy whose
destructor calls its close() method to tell LLInventoryListener to destroy the
CatResultSet and ItemResultSet with the bound keys.
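A simplified sketch of the Lua side of that arrangement (the request helpers
shown are hypothetical stand-ins for the leap plumbing; only the
close()/__index/setdtor structure follows the description above):

```lua
-- Sketch only: send_closeResult() and request_getSlice() are hypothetical
-- stand-ins for the actual leap request plumbing.
local function wrap_results(categories_key, items_key)
    local keys = { categories = categories_key, items = items_key }
    local result = {
        close = function()
            -- fire-and-forget: ask LLInventoryListener to destroy both result sets
            send_closeResult(categories_key, items_key)
        end,
    }
    setmetatable(result, {
        __index = function(t, field)
            local key = keys[field]
            if key == nil then
                return nil                       -- only 'categories' and 'items' are recognized
            end
            local slice = request_getSlice(key)  -- lazily fetch (the first slice of) this result set
            rawset(t, field, slice)              -- cache it for subsequent references
            return slice
        end,
    })
    -- the proxy's destructor calls close(), destroying the C++ result sets
    return LL.setdtor('inventory result', result, result.close)
end
```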
|
|
Replace the global next(), pairs() and ipairs() functions with a C++ function
that drills down through layers of setdtor() proxy objects and then forwards
the updated arguments to the original global function.
Add a Luau __iter() metamethod to setdtor() proxy objects that, like other
proxy metamethods, drills down to the underlying _target object. __iter()
recognizes the case of a _target table which itself has a __iter() metamethod.
Also add __idiv() metamethod to support integer division.
Add tests for proxy // division, next(proxy), next(proxy, key), pairs(proxy),
ipairs(proxy) and 'for k, v in proxy'. Also test the case where the table
wrapped in the proxy has an __iter() metamethod of its own.
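For example (illustrative; the wrapped values and destructor functions are
stand-ins):

```lua
-- Illustrative only: the wrapped values and destructors are stand-ins.
local num = LL.setdtor('a number', 7, print)
print(num // 2)              -- __idiv() on the proxy: prints 3

local tbl = LL.setdtor('a table', { 'a', 'b', 'c' },
                       function(t) print('cleaning up', #t) end)
print(next(tbl))             -- the replacement global next() drills to the wrapped table
for i, v in ipairs(tbl) do   -- so does the replacement ipairs()
    print(i, v)
end
for k, v in tbl do           -- Luau generalized iteration via the new __iter() metamethod
    print(k, v)
end
```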
|
|
Trim redundant output from test_setdtor.lua.
|
|
`setdtor('description', object, function)` returns a proxy userdata object
referencing object and function. When the proxy is garbage-collected, or at
the end of the script, its destructor calls `function(object)`.
The original object may be retrieved as `proxy._target`, e.g. to pass it to
the `table` library. The proxy also has a metatable with metamethods
supporting arithmetic operations, string concatenation, length and table
indexing. For other operations, retrieve `proxy._target`. (But don't assign to
`proxy._target`. It will appear to work, in that subsequent references to
`proxy._target` will retrieve the replacement object -- however, the
destructor will still call `function(original object)`.)
Fix bugs in `lua_setfieldv()`, `lua_rawgetfield()` and `lua_rawsetfield()`.
Add C++ functions `lua_destroyuserdata()` to explicitly destroy a
`lua_emplace<T>()` userdata object, plus `lua_destroybounduserdata()`. The
latter can bind such a userdata object as an upvalue to pass to `LL.atexit()`.
Make `LL.help()` and `LL.leaphelp()` help text include the `LL.` prefix.
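A minimal illustration of that contract (the wrapped table and cleanup action
are made up for the example):

```lua
-- Illustration only: the wrapped table and cleanup action are invented.
local log = LL.setdtor('session log', { 'line 1', 'line 2' },
                       function(obj) print('flushing', #obj, 'lines') end)

print(#log)                             -- the length metamethod forwards to the wrapped table
print(log[1])                           -- so does table indexing
print(table.concat(log._target, '\n'))  -- for other operations, pass the real table via _target
log = nil                               -- once the proxy is collected, or at end of script,
                                        -- the destructor calls function(original object)
```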
|
|
Allow UI to have lazily-loaded submodules.
|
In particular, where the raw leap.request().response call would return
{OK_okcancelbuttons=true}, just return the string 'OK' or 'Cancel'.
Update existing consumer scripts.
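For example (hypothetical sketch: popup_ok_cancel stands in for whichever UI
wrapper issues the OK/Cancel dialog; only the return-value handling is the
point):

```lua
-- Hypothetical: popup_ok_cancel() stands in for the real UI wrapper call.
local response = popup_ok_cancel('Save changes before exiting?')

-- before this change, callers had to unpick the raw leap response:
--   if response.OK_okcancelbuttons then ... end
-- now they simply compare the returned string:
if response == 'OK' then
    -- proceed
elseif response == 'Cancel' then
    -- abandon
end
```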
|
This way encourages "UI = require 'UI'; UI.Floater"
instead of just "Floater = require 'Floater'".
Moreover, UI no longer needs to maintain a list of allowed submodules;
that's determined simply by membership in the subdirectory.
|
|
Equip UI with an __index metamethod. When someone references an unknown
key/field in UI, require() the module of that name and cache it for future
references.
Add util.setmetamethods() as a way to find or create a metatable on a
specified table and populate it with the specified metamethods.
Exercise the new functionality by referencing UI.popup in test_popup.lua.
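A sketch of that arrangement (the util.setmetamethods() signature and the
'UI/' require path shown here are assumptions based on the description above):

```lua
-- Sketch only: the setmetamethods() signature and 'UI/' require path are assumptions.
local util = require 'util'

local UI = {}

util.setmetamethods(UI, {
    __index = function(t, key)
        -- unknown field: load UI/<key> as a submodule...
        local module = require('UI/' .. key)
        rawset(t, key, module)   -- ...and cache it for future references
        return module
    end,
})

return UI
```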
|
|
displayed consistently
|
Specifically, defend against a callback that runs so long that, by the time it
suspends, the next timer tick has already passed.
|
|
Use a static unordered_map to allow a function receiving (lua_State* L) to
look up the LuaState instance managing that lua_State. We've thought about
this from time to time already. LuaState's constructor creates the map entry;
its destructor removes it; the new static getParent(lua_State* L) method
performs the lookup.
Migrate lluau::set_interrupts_counter() and check_interrupts_counter() into
LuaState member functions. Add a new mInterrupts counter for them.
Importantly, LuaState::check_interrupts_counter(), which is indirectly called
by a lua_callbacks().interrupt function, no longer performs any Lua stack
operations. Empirically, the Lua engine can interrupt itself at a moment when
re-entry via Lua stack operations confuses it.
Change previous lluau::set_interrupts_counter(L, 0) calls to
LuaState::getParent(L).set_interrupts_counter(0).
Also add LuaStackDelta class, and a lua_checkdelta() helper macro, to verify
that the Lua data stack depth on exit from a block differs from the depth on
entry by exactly the expected amount. Sprinkle lua_checkdelta() macros in
likely places.
|
luaL_checkstack() accepts a third parameter which is included in the stack
overflow error message. We've been passing nullptr, leading to messages of the
form "stack overflow ((null))". lluau_checkstack() implicitly passes
__FUNCTION__, so we can distinguish which underlying luaL_checkstack() call
encountered the stack overflow condition.
Also, when calling each atexit() function, pass Luau's debug.traceback()
function as the lua_pcall() error handler. This should help diagnose errors in
atexit() functions.
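From a script's point of view (illustrative; the cleanup function is a
stand-in), registration is unchanged, but a failure inside the function now
reports a full Luau traceback:

```lua
-- Illustrative: errors raised here are now reported with debug.traceback()
-- as the lua_pcall() error handler.
LL.atexit(function()
    cleanup_temporaries()   -- hypothetical cleanup work
end)
```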
|
Add Throttle and LogThrottle classes to manage throttled APIs.
|
Add test_flycam.lua to exercise the smaller intervals.
|
|
Thanks, Maxim.
|
Also add Region.lua.
|
Also update the 'UI' help text to reflect its more general nature.
Mention 0-relative rank in the xxToolbarBtn operation help text.
|
The main fiber of the viewer's main thread is responsible for coordinating
just about everything. With the default round_robin fiber scheduling
algorithm, launching too many additional fibers could starve the main fiber,
resulting in visible lag.
This custom scheduler tracks when it switches to and from the main fiber and,
at each context switch, how long it has been since the main fiber last ran.
If that interval exceeds a certain timeslice, it moves the main fiber to the
head of the ready queue and resumes it instead of any other ready fiber.
|