path: root/indra/newview

2024-04-02  Add startup.lua module with startup.ensure(), wait() functions.  (Nat Goodspeed)
This lets a calling script verify that it's running at the right point in the viewer's life cycle. A script that wants to interact with the SL agent wouldn't work if run from the viewer's command line -- unless it calls startup.wait("STATE_STARTED"), which pauses until login is complete.

Modify test_luafloater_demo.lua and test_luafloater_gesture_list.lua to find their respective floater XUI files in the same directory as themselves. Make them both capture the reqid returned by the "showLuaFloater" operation, and filter for events bearing the same reqid. This paves the way for a given script to display more than one floater concurrently.

Make test_luafloater_demo.lua (which does not require in-world resources) wait until 'STATE_LOGIN_WAIT', the point at which the viewer has presented the login screen. Make test_luafloater_gesture_list.lua (which interacts with the agent) wait until 'STATE_STARTED', the point at which the viewer is fully in world. Either or both can now be launched from the viewer's command line.
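
A minimal usage sketch of the module described above (the require name and the state string are taken from this log; everything else is illustrative):

```lua
-- Hypothetical calling script: block until the viewer is fully in world
-- before touching anything that needs the SL agent.
local startup = require 'startup'
startup.wait('STATE_STARTED')   -- returns once login has completed
-- ... agent-dependent work can safely start here ...
```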

2024-04-02  Streamline std::filesystem::path conversions in LLRequireResolver.  (Nat Goodspeed)
Make LLRequireResolver capture std::filesystem::path instances, instead of std::strings, for the path to resolve and the source directory. Store the running script's containing directory instead of calling parent_path() over and over.

Demote Lua LL.post_on() logging to DEBUG level instead of INFO.

2024-04-02  Defend leap.request(), generate() from garbage collection.  (Nat Goodspeed)
Earlier we had blithely designated the 'pending' list (which stores WaitForReqid objects for pending request() and generate() calls) as a weak table. But the caller of request() or generate() does not hold a reference to the WaitForReqid object. Make pending hold "strong" references.

Private collections (pending, waitfors) and private scalars that are never reassigned (reply, command) need not be entries in the leap table.

2024-03-29  Merge branch 'release/luau-scripting' into lua-startup  (Nat Goodspeed)

2024-03-29  Merge pull request #1071 from secondlife/lua-new-luastate  (nat-goodspeed)
Run each script file with new LuaState

2024-03-28  Merge branch 'lua-hangfix' into lua-startup.  (Nat Goodspeed)

2024-03-28  Remove rest of prototype UI access.  (Nat Goodspeed)

2024-03-28  Use LLApp::setQuitting(). Expect killed-script error.  (Nat Goodspeed)

2024-03-28  Terminate Lua scripts hanging in LL.get_event_next().  (Nat Goodspeed)
Make LuaListener listen for "LLApp" viewer shutdown events. On receiving such, it closes its queue. Then the C++ coroutine calling getNext() wakes up with an LLThreadSafeQueue exception, and calls LLCoros::checkStop() to throw one of the exceptions recognized by LLCoros::toplevel().

Add an llluamanager_test.cpp test to verify this behavior.

2024-03-28  Remove llluamanager.cpp "FIXME extremely hacky way" cruft.  (Nat Goodspeed)

2024-03-27  Merge 'release/luau-scripting' of secondlife/viewer into lua-startup  (Nat Goodspeed)

2024-03-27  Run each script file with new LuaState  (Mnikolenko Productengine)

2024-03-27  Enhance Lua debugging output.  (Nat Goodspeed)
Don't use "debug" as the name of a function to conditionally write debug messages: "debug" is a Luau built-in library, and assigning that name locally would shadow the builtin. Use "dbg" instead.

Recast fiber.print_all() as fiber.format_all() that returns a string; then print_all() is simply print(format_all()). This refactoring allows us to use dbg(format_all()) as well.

Add a couple new dbg() messages at fiber state changes.

2024-03-27  poetry  (Nat Goodspeed)

2024-03-27  Run loaded `require()` module on Lua's main thread.  (Nat Goodspeed)
The problem with running a `require()` module on a Lua coroutine is that it prohibits calling `leap.request()` at module load time. When a coroutine calls `leap.request()`, it must yield back to Lua's main thread -- but a `require()` module is forbidden from yielding.

Running on Lua's main thread means that (after potentially giving time slices to other ready coroutines) `fiber.lua` will request the response event from the viewer, and continue processing the loaded module without having to yield.
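
A hypothetical module sketch of what this enables (the pump name 'SomeAPI', the op 'getData', and the two-argument request() form shown here are illustrative assumptions, not documented in this log):

```lua
-- Because require()'d modules now run on Lua's main thread, a module can
-- block on a viewer request/response round trip while it loads.
local leap = require 'leap'

local mymodule = {}
mymodule.data = leap.request('SomeAPI', {op='getData'})   -- placeholder request
return mymodule
```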

2024-03-26  Merge branch 'release/luau-scripting' into luau-keystroke  (Mnikolenko Productengine)

2024-03-26  update scripts to use fiber.launch()  (Mnikolenko Productengine)

2024-03-25  util.lua claims functions are in alpha order - make it so.  (Nat Goodspeed)
Also streamline util.contains(), given table.find().
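
A sketch of the streamlined check (not necessarily the exact util.lua text): Luau's table.find() returns the index of a value, or nil, so contains() reduces to a nil test.

```lua
local function contains(t, value)
    return table.find(t, value) ~= nil
end

print(contains({'a', 'b', 'c'}, 'b'))   -- true
print(contains({'a', 'b', 'c'}, 'z'))   -- false
```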

2024-03-25  Add LL.check_stop() entry point and call it in fiber scheduler().  (Nat Goodspeed)
fiber.lua's scheduler() is greedy, in the sense that it wants to run every ready Lua fiber before retrieving the next incoming event from the viewer (and possibly blocking for some real time before it becomes available). But check for viewer shutdown before resuming any suspended-but-ready Lua fiber.
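
A schematic sketch of that check (LL.check_stop() is the entry point this commit adds; the surrounding loop and the exact shutdown behavior are illustrative assumptions, not the real scheduler()):

```lua
-- Before resuming each ready fiber, ask the viewer whether it is shutting
-- down; LL.check_stop() is assumed to error out and unwind the script if so.
local function resume_ready(ready_fibers)
    for _, co in ipairs(ready_fibers) do
        LL.check_stop()
        coroutine.resume(co)
    end
end
```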

2024-03-25  Add LL. prefix to viewer entry points, fix existing references.  (Nat Goodspeed)

2024-03-25  Update test scripts to call leap.request() from main thread  (Mnikolenko Productengine)

2024-03-25  Merge branch 'release/luau-scripting' into lua-keystroke  (Maxim Nikolenko)

2024-03-25  Add keystroke event support and allow adding text lines to the line editor  (Mnikolenko Productengine)

2024-03-23  Merge branch 'release/luau-scripting' of secondlife/viewer into lua-fiber  (Nat Goodspeed)

2024-03-24  Introduce LLStreamListener: bundle LLEventStream+LLTempBoundListener.  (Nat Goodspeed)
This is a very common pattern, especially in test code, but elsewhere in the viewer too. Use it in llluamanager_test.cpp.

2024-03-23  Make leap.request() work even from Lua's main thread.  (Nat Goodspeed)
Recast fiber.yield() as internal function scheduler(). Move fiber.run() after it so it can call scheduler() as a local function. Add new fiber.yield() that also calls scheduler(); the added value of this new fiber.yield() over plain scheduler() is that if scheduler() returns before the caller is ready (because the configured set_idle() function returned non-nil), it produces an explicit error rather than returning to its caller. So the caller can assume that when fiber.yield() returns normally, the calling fiber is ready.

This allows any fiber, including the main thread, to call fiber.yield() or fiber.wait(). This supports using leap.request(), which posts a request and then waits on a WaitForReqid, which calls ErrorQueue:Dequeue(), which calls fiber.wait().

WaitQueue:_wake_waiters() must call fiber.status() instead of coroutine.status() so it understands the special token 'main'.

Add a new llluamanager_test.cpp test to exercise calling leap.request() from Lua's main thread.

2024-03-22  Fix a couple bugs in fiber.lua machinery.  (Nat Goodspeed)
This fixes a hang if the Lua script explicitly calls fiber.run() before LuaState::expr()'s implicit fiber.run() call.

Make fiber.run() remove the calling fiber from the ready list to avoid an infinite loop when all other fibers have terminated: "You're ready!" "Okay, yield()." "You're ready again!" ... But don't claim it's waiting, either, because then when all other fibers have terminated, we'd call idle() in the vain hope that something would make that one last fiber ready.

WaitQueue:_wake_waiters() needs to wake waiting fibers if the queue's not empty OR it's been closed. Introduce leap.WaitFor:close() to close the queue gracefully so that a looping waiter can terminate, instead of using WaitFor:exception(), which stops the whole script once it propagates. Make leap's cleanup() function call close().

Streamline fiber.get_name() by using 'or' instead of if ... then. Streamline fiber.status() and fiber.set_waiting() by using table.find() instead of a loop.
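
A pattern sketch of the close() behavior described above (the wait() method name on the waiter side is assumed; only close() itself is documented by this commit):

```lua
-- A looping waiter drains events until the closed queue hands back nil,
-- instead of being torn down by WaitFor:exception().
local function drain(waitfor)
    while true do
        local event = waitfor:wait()     -- assumed dequeue-style method
        if event == nil then
            break                        -- queue closed: terminate gracefully
        end
        print(event)
    end
end
```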

2024-03-21  Accept an array for "add_list_item" and change EVENT_LIST type  (Mnikolenko Productengine)

2024-03-21  Switch to LLDispatchListener  (Mnikolenko Productengine)

2024-03-21  WIP: Add fiber.lua module and use in leap.lua and WaitQueue.lua.  (Nat Goodspeed)
fiber.lua goes beyond coro.lua in that it distinguishes ready suspended coroutines from waiting suspended coroutines, and presents a rudimentary scheduler in fiber.yield(). yield() can determine that when all coroutines are waiting, it's time to retrieve the next incoming event from the viewer. Moreover, it can detect when all coroutines have completed and exit without being explicitly told.

fiber.launch() associates a name with each fiber for debugging purposes.
fiber.get_name() retrieves the name of the specified fiber, or the running fiber.
fiber.status() is like coroutine.status(), but can return 'ready' or 'waiting' instead of 'suspended'.
fiber.yield() leaves the calling fiber ready, but lets other ready fibers run.
fiber.wait() suspends the calling fiber and lets other ready fibers run.
fiber.wake(), called from some other coroutine, returns the passed fiber to ready status for a future call to fiber.yield().
fiber.run() drives the scheduler to run all fibers to completion.

If, on completion of the subject Lua script, LuaState::expr() detects that the script loaded fiber.lua, it calls fiber.run() to finish running any dangling fibers. This lets a script make calls to fiber.launch() and then just fall off the end, leaving the implicit fiber.run() call to run them all. fiber.lua is designed to allow the main thread, as well as explicitly launched coroutines, to make leap.request() calls. This part still needs debugging.

The leap.lua module now configures a fiber.set_idle() function that honors leap.done(), but calls get_event_next() and dispatches the next incoming event. leap.request() and generate() now leave the reqid stamp in the response. This lets a caller handle subsequent events with the same reqid, e.g. for LLLuaFloater. Remove leap.process(): it has been superseded by fiber.run(). Remove leap.WaitFor:iterate(): unfortunately that would run afoul of the Luau bug that prevents suspending the calling coroutine within a generic 'for' iterator function. Make leap.lua use weak tables to track WaitFor objects.

Make WaitQueue:Dequeue() call fiber.wait() to suspend its caller when the queue is empty, and Enqueue() call fiber.wake() to set it ready again when a new item is pushed.

Make llluamanager_test.cpp's leap test script use the fiber module to launch coroutines, instead of the coro module. Fix a bug in which its drain() function was inadvertently setting and testing the global 'item' variable instead of one local to the function. Since some other modules had the same bug, it was getting confused.

Also add printf.lua, providing a printf() function. printf() is short for print(string.format()), but it can also print tables: anything not a number or string is formatted using the inspect() function.

Clean up some LL_DEBUGS() output left over from debugging lua_tollsd().
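
A usage sketch of the launch-and-fall-off-the-end behavior described above (the argument order to fiber.launch() is assumed to be name-then-function):

```lua
local fiber = require 'fiber'

fiber.launch('greeter', function()
    print('hello from ' .. fiber.get_name())
end)
-- Just fall off the end: LuaState::expr() notices that fiber.lua was loaded
-- and makes the implicit fiber.run() call that drives this fiber to completion.
```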

2024-03-20  LLLuaFloater code clean up  (Mnikolenko Productengine)

2024-03-19  Search for the floater XML file in the lib directory if the given path is not a full path; add test Lua floater scripts  (Mnikolenko Productengine)

2024-03-14  Add preliminary Lua viewer API modules, with test scripts.  (Nat Goodspeed)

2024-03-14  Fix a bug in leap.generate().  (Nat Goodspeed)
We weren't passing the WaitForReqid instance to WaitForReqid:wait(). Also remove 'reqid' from responses returned by leap.request() and generate().

2024-03-13  Add tests for leap.request(). Use new coro.lua module.  (Nat Goodspeed)
The request() test ensures that the response for a given reqid is routed to the correct coroutine even when responses arrive out of order.

2024-03-13  util.join() is unnecessary: Luau provides table.concat().  (Nat Goodspeed)

2024-03-13  Fix minor bugs. Sprinkle in commented-out diagnostic output.  (Nat Goodspeed)

2024-03-13  Introduce a resume() wrapper to surface coroutine errors.  (Nat Goodspeed)

2024-03-13  Make a coro.resume() wrapper and use in coro.launch(), coro.yield().  (Nat Goodspeed)
coro.resume() checks the ok boolean returned by coroutine.resume() and, if not ok, propagates the error. This avoids coroutine errors getting swallowed.
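
A minimal sketch of the wrapper described above (simplified to a single return value; not necessarily the exact coro.lua text):

```lua
local function resume(co, ...)
    local ok, result = coroutine.resume(co, ...)
    if not ok then
        error(result)    -- propagate instead of silently swallowing the error
    end
    return result
end
```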

2024-03-11  Add coro.lua to aggregate created coroutines.  (Nat Goodspeed)

2024-03-11  Lua already has a conventional cheap test for empty table.  (Nat Goodspeed)
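
For reference, the conventional test referred to is checking next(t) == nil: next() returns nil only when the table has no entries in either its array or hash part.

```lua
local function empty(t)
    return next(t) == nil
end

print(empty({}))         -- true
print(empty({x = 1}))    -- false
```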

2024-03-11  Add llluamanager_test test exercising leap.WaitFor.  (Nat Goodspeed)

2024-03-11  Polish up leap.lua to make it pass tests.  (Nat Goodspeed)
Add usage comments at the top. Add leap.done() function. Make leap.process() honor leap.done(), also recognize an incoming nil from the viewer to mean it's all done. Support leap.WaitFor with nil priority to mean "don't self-enable." This obviates leap.WaitForReqid:enable() and disable() overrides that do nothing. Add diagnostic logging.

2024-03-11  Make WaitQueue:_wake_waiters() skip dead coroutines.  (Nat Goodspeed)
That is, skip coroutines that have gone dead since they decided to wait on Dequeue().
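
A sketch of the idea (not the WaitQueue source): when waking waiters, discard any coroutine that has finished or errored since it queued itself.

```lua
local function next_live_waiter(waiters)
    while #waiters > 0 do
        local co = table.remove(waiters, 1)
        if coroutine.status(co) ~= 'dead' then
            return co
        end
    end
    return nil    -- no live waiters left
end
```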

2024-03-08  Merge 'release/luau-scripting' into lua-leap for Emoji release.  (Nat Goodspeed)

2024-03-08  Merge branch 'main' into release/luau-scripting for Emoji release.  (Nat Goodspeed)

2024-03-08  Enhance llluamanager_test.cpp.  (Nat Goodspeed)
Sketch in an initial test that requires one of our bundled Lua modules. Each time we run Lua, report any error returned by the Lua engine. Use llcoro::suspendUntilEventOn(LLEventMailDrop) as shorthand for initializing an explicit LLTempBoundListener with a listen() call with a lambda.

2024-03-08  Allow build-time Lua tests to require() bundled Lua modules.  (Nat Goodspeed)

2024-03-07  Finish adding leap.WaitFor and WaitForReqid. Untested.  (Nat Goodspeed)

2024-03-07  Finish WaitQueue, ErrorQueue; add util.count(), join(); extend qtest.  (Nat Goodspeed)
For WaitQueue, nail down the mechanism for declaring a subclass and for calling a base-class method from a subclass override. Break out new _wake_waiters() method from Enqueue(): we need to do the same from close(), in case there are waiting consumers. Also, in Lua, 0 is not false.

Instead of bundling a normal/error flag with every queued value, make ErrorQueue overload its _closed attribute. Once you call ErrorQueue:Error(), every subsequent Dequeue() call by any consumer will re-raise the same error.

util.count() literally counts entries in a table, since #t is documented to be unreliable. (If you create a list with 5 entries and delete the middle one, #t might return 2 or it might return 5, but it won't return 4.)

util.join() fixes a curious omission from Luau's string library: like Python's str.join(), it concatenates all the strings from a list with an optional separator. We assume that incrementally building a list of strings and then doing a single allocation for the desired result string is cheaper than reallocating each of a sequence of partial concatenated results.

Add qtest test that posts individual items to a WaitQueue, waking waiting consumers to retrieve the next available result. Add test proving that calling ErrorQueue:Error() propagates the error to all consumers.
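
A possible shape for util.count(), plus the build-then-join-once usage pattern that the join() rationale describes (signatures assumed; a later commit listed above, dated 2024-03-13, drops util.join() in favor of Luau's table.concat()):

```lua
-- counts hash and array entries alike, unlike the #t length operator
local function count(t)
    local n = 0
    for _ in pairs(t) do
        n = n + 1
    end
    return n
end

-- caller-side pattern: accumulate pieces, then concatenate once
local pieces = {}
for i = 1, 3 do
    table.insert(pieces, tostring(i))
end
print(table.concat(pieces, ', '))    -- "1, 2, 3"
print(count({a = 1, b = 2, 3}))      -- 3
```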