Make LuaListener listen for "LLApp" viewer shutdown events. On receiving such
an event, it closes its queue. Then the C++ coroutine calling getNext() wakes up with an
LLThreadSafeQueue exception, and calls LLCoros::checkStop() to throw one of
the exceptions recognized by LLCoros::toplevel().
Add an llluamanager_test.cpp test to verify this behavior.
|
Don't use "debug" as the name of a function to conditionally write debug
messages: "debug" is a Luau built-in library, and assigning that name locally
would shadow the builtin. Use "dbg" instead.
Recast fiber.print_all() as fiber.format_all() that returns a string; then
print_all() is simply print(format_all()). This refactoring allows us to use
dbg(format_all()) as well.
Add a couple new dbg() messages at fiber state changes.
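As a minimal illustration of the renaming (a sketch, not the actual fiber.lua
source; the DEBUG flag and message text are assumptions):

    local DEBUG = false

    -- dbg(), not debug(): a local named "debug" would shadow Luau's built-in library
    local function dbg(...)
        if DEBUG then
            print(...)
        end
    end

    local fiber = {}

    -- format_all() builds and returns the descriptive string...
    function fiber.format_all()
        return 'fibers: (details omitted in this sketch)'
    end

    -- ...so print_all() is simply print(format_all()),
    function fiber.print_all()
        print(fiber.format_all())
    end

    -- and conditional debug output can reuse the same formatter
    dbg(fiber.format_all())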
|
The problem with running a `require()` module on a Lua coroutine is that it
prohibits calling `leap.request()` at module load time. When a coroutine calls
`leap.request()`, it must yield back to Lua's main thread -- but a `require()`
module is forbidden from yielding.
Running on Lua's main thread means that (after potentially giving time slices
to other ready coroutines) `fiber.lua` will request the response event from
the viewer, and continue processing the loaded module without having to yield.
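For illustration, a require()d module can therefore do something like the
following at load time (the pump name, payload, and two-argument
leap.request(pump, data) form are placeholders here, not a documented contract):

    -- hypothetical module loaded via require(); runs on Lua's main thread
    local leap = require 'leap'

    -- No yield is needed: fiber.lua fetches the response event from the viewer
    -- and this chunk keeps running, so require()'s no-yield rule is satisfied.
    local response = leap.request('PlaceholderPump', {op='getConfig'})

    return { config = response }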
|
# Conflicts:
# .github/workflows/build.yaml
|
following promotion of secondlife/viewer #650
|
from below
|
Also streamline util.contains(), given table.find().
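Presumably the streamlined form reduces to something like this (sketch only):

    local util = {}

    -- table.find() returns the index of value in list, or nil if absent
    function util.contains(list, value)
        return table.find(list, value) ~= nil
    end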
|
fiber.lua's scheduler() is greedy, in the sense that it wants to run every
ready Lua fiber before retrieving the next incoming event from the viewer (and
possibly blocking for some real time before it becomes available). But check
for viewer shutdown before resuming any suspended-but-ready Lua fiber.
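A sketch of that greedy loop, with stub helpers standing in for fiber.lua's
real internals (the ready list handling, shutdown test, and event retrieval
are all illustrative names here):

    local ready = {}                                      -- ready coroutines
    local function shutdown_requested() return false end  -- stub
    local function get_next_event() return nil end        -- stub

    local function scheduler()
        -- greedy: resume every ready fiber before asking the viewer for the
        -- next event (which may block for real time)...
        while #ready > 0 do
            -- ...but check for viewer shutdown before resuming anything
            if shutdown_requested() then
                error('viewer is shutting down')
            end
            local co = table.remove(ready, 1)
            coroutine.resume(co)
        end
        -- only when nothing is ready, retrieve the next incoming viewer event
        local event = get_next_event()
        -- ... dispatch 'event' to whichever fiber is waiting for it ...
    end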
|
This is a very common pattern, especially in test code, but it appears
elsewhere in the viewer too.
Use it in llluamanager_test.cpp.
|
Recast fiber.yield() as internal function scheduler().
Move fiber.run() after it so it can call scheduler() as a local function.
Add new fiber.yield() that also calls scheduler(); the added value of this new
fiber.yield() over plain scheduler() is that if scheduler() returns before the
caller is ready (because the configured set_idle() function returned non-nil),
it produces an explicit error rather than returning to its caller. So the
caller can assume that when fiber.yield() returns normally, the calling fiber
is ready.
This allows any fiber, including the main thread, to call fiber.yield() or
fiber.wait(). This supports using leap.request(), which posts a request and
then waits on a WaitForReqid, which calls ErrorQueue:Dequeue(), which calls
fiber.wait().
WaitQueue:_wake_waiters() must call fiber.status() instead of
coroutine.status() so it understands the special token 'main'.
Add a new llluamanager_test.cpp test to exercise calling leap.request() from
Lua's main thread.
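The relationship between the two, roughly (a control-flow sketch with stubbed
internals, not the real fiber.lua code):

    local fiber = {}

    local function scheduler()
        -- run other ready fibers; may call the configured set_idle() function
        -- and return early if that function returns non-nil
    end
    local function caller_is_ready() return true end   -- stub for the ready-list test

    function fiber.yield()
        scheduler()
        if not caller_is_ready() then
            -- scheduler() gave up before this fiber became ready again: raise
            -- an explicit error rather than returning to a caller that assumes
            -- it is ready
            error('fiber.yield() returned before caller became ready')
        end
        -- normal return: the calling fiber is ready
    end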
|
This fixes a hang if the Lua script explicitly calls fiber.run() before
LuaState::expr()'s implicit fiber.run() call.
Make fiber.run() remove the calling fiber from the ready list to avoid an
infinite loop when all other fibers have terminated: "You're ready!" "Okay,
yield()." "You're ready again!" ... But don't claim it's waiting, either,
because then when all other fibers have terminated, we'd call idle() in the
vain hope that something would make that one last fiber ready.
WaitQueue:_wake_waiters() needs to wake waiting fibers if the queue's not
empty OR it's been closed.
Introduce leap.WaitFor:close() to close the queue gracefully so that a looping
waiter can terminate, instead of using WaitFor:exception(), which stops the
whole script once it propagates. Make leap's cleanup() function call close().
Streamline fiber.get_name() by using 'or' instead of if ... then.
Streamline fiber.status() and fiber.set_waiting() by using table.find()
instead of a loop.
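The wake condition, roughly (the _queue and _waiters field names are assumed
for this sketch; fiber.wake() is the call described elsewhere in this log):

    local fiber = require 'fiber'
    local WaitQueue = {}   -- stand-in for the real class table

    -- wake waiting fibers when there is something to consume OR the queue has
    -- been closed, so waiters can notice the close instead of sleeping forever
    function WaitQueue:_wake_waiters()
        if #self._queue ~= 0 or self._closed then
            for _, waiter in ipairs(self._waiters) do
                fiber.wake(waiter)
            end
            self._waiters = {}
        end
    end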
|
fiber.lua goes beyond coro.lua in that it distinguishes ready suspended
coroutines from waiting suspended coroutines, and presents a rudimentary
scheduler in fiber.yield(). yield() can determine that when all coroutines are
waiting, it's time to retrieve the next incoming event from the viewer.
Moreover, it can detect when all coroutines have completed and exit without
being explicitly told.
fiber.launch() associates a name with each fiber for debugging purposes.
fiber.get_name() retrieves the name of the specified fiber, or the running fiber.
fiber.status() is like coroutine.status(), but can return 'ready' or 'waiting'
instead of 'suspended'.
fiber.yield() leaves the calling fiber ready, but lets other ready fibers run.
fiber.wait() suspends the calling fiber and lets other ready fibers run.
fiber.wake(), called from some other coroutine, returns the passed fiber to
ready status for a future call to fiber.yield().
fiber.run() drives the scheduler to run all fibers to completion.
If, on completion of the subject Lua script, LuaState::expr() detects that the
script loaded fiber.lua, it calls fiber.run() to finish running any dangling
fibers. This lets a script make calls to fiber.launch() and then just fall off
the end, leaving the implicit fiber.run() call to run them all.
fiber.lua is designed to allow the main thread, as well as explicitly launched
coroutines, to make leap.request() calls. This part still needs debugging.
The leap.lua module now configures a fiber.set_idle() function that honors
leap.done() but otherwise calls get_event_next() and dispatches the next incoming event.
leap.request() and generate() now leave the reqid stamp in the response. This
lets a caller handle subsequent events with the same reqid, e.g. for
LLLuaFloater.
Remove leap.process(): it has been superseded by fiber.run().
Remove leap.WaitFor:iterate(): unfortunately that would run afoul of the Luau
bug that prevents suspending the calling coroutine within a generic 'for'
iterator function.
Make leap.lua use weak tables to track WaitFor objects.
Make WaitQueue:Dequeue() call fiber.wait() to suspend its caller when the queue
is empty, and Enqueue() call fiber.wake() to set it ready again when a new
item is pushed.
Make llluamanager_test.cpp's leap test script use the fiber module to launch
coroutines, instead of the coro module. Fix a bug in which its drain()
function was inadvertently setting and testing the global 'item' variable
instead of one local to the function. Since some other modules had the same
bug, it was getting confused.
Also add printf.lua, providing a printf() function. printf() is short for
print(string.format()), but it can also print tables: anything not a number or
string is formatted using the inspect() function.
Clean up some LL_DEBUGS() output left over from debugging lua_tollsd().
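A usage sketch of the API described above; the pump name, payload, and exact
fiber.launch() signature are assumptions for illustration:

    local fiber  = require 'fiber'
    local leap   = require 'leap'
    local printf = require 'printf'

    fiber.launch('requester', function()
        -- suspends this fiber (via fiber.wait()) until the response arrives
        local reply = leap.request('PlaceholderPump', {op='query'})
        printf('requester got:')
        printf(reply)                    -- printf() formats tables with inspect()
    end)

    fiber.launch('greeter', function()
        printf('hello from %s', fiber.get_name())
    end)

    -- Falling off the end is fine: LuaState::expr() calls fiber.run() to finish
    -- any dangling fibers. An explicit fiber.run() call here would also work.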
|
We weren't passing the WaitForReqid instance to WaitForReqid:wait().
Also remove 'reqid' from responses returned by leap.request() and generate().
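The bug class, in generic form (not the actual leap.lua code): a dot call omits
the implicit 'self' argument that a colon call supplies.

    local obj = { greeting = 'hi' }
    function obj:wait() return self.greeting end

    print(obj:wait())        -- 'hi': obj is passed as self
    print(pcall(obj.wait))   -- false plus an error: self is nil without the instance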
|
request() test ensures that the response for a given reqid is routed to the
correct coroutine even when responses arrive out of order.
|
coro.resume() checks the ok boolean returned by coroutine.resume() and, if not
ok, propagates the error. This avoids coroutine errors getting swallowed.
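Roughly (a sketch of the wrapper, not the verbatim coro.lua source):

    local coro = {}

    function coro.resume(co, ...)
        -- coroutine.resume() returns ok followed by the yielded/returned
        -- values, or ok=false followed by the error object
        local results = { coroutine.resume(co, ...) }
        if not results[1] then
            -- propagate instead of swallowing the coroutine's error
            error(results[2])
        end
        return table.unpack(results, 2)
    end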
|
Add usage comments at the top.
Add leap.done() function.
Make leap.process() honor leap.done(), also recognize an incoming nil from the
viewer to mean it's all done.
Support leap.WaitFor with nil priority to mean "don't self-enable." This
obviates leap.WaitForReqid:enable() and disable() overrides that do nothing.
Add diagnostic logging.
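The loop's termination logic, roughly (get_event_next() and dispatch() are
stand-ins for leap's internals in this sketch):

    local leap = require 'leap'

    local function process(get_event_next, dispatch)
        while not leap.done() do
            local event = get_event_next()
            if event == nil then
                break            -- an incoming nil from the viewer means "all done"
            end
            dispatch(event)      -- hand the event to the interested WaitFor
        end
    end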
|
That is, skip coroutines that have gone dead since they decided to wait on
Dequeue().
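That check presumably amounts to something like (sketch; the class and field
names here are illustrative):

    local WaitQueue = {}   -- stand-in for the real class table

    -- wake the next waiter that is still alive, skipping coroutines that have
    -- gone dead since they decided to wait on Dequeue()
    function WaitQueue:_wake_one()
        while #self._waiters > 0 do
            local waiter = table.remove(self._waiters, 1)
            if coroutine.status(waiter) ~= 'dead' then
                coroutine.resume(waiter)
                return
            end
        end
    end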
|
Sketch in an initial test that requires one of our bundled Lua modules.
Each time we run Lua, report any error returned by the Lua engine.
Use llcoro::suspendUntilEventOn(LLEventMailDrop) as shorthand for initializing
an explicit LLTempBoundListener with a listen() call and a lambda.
|
For WaitQueue, nail down the mechanism for declaring a subclass and for
calling a base-class method from a subclass override. Break out new
_wake_waiters() method from Enqueue(): we need to do the same from close(), in
case there are waiting consumers. Also, in Lua, 0 is not false.
Instead of bundling a normal/error flag with every queued value, make
ErrorQueue overload its _closed attribute. Once you call ErrorQueue:Error(),
every subsequent Dequeue() call by any consumer will re-raise the same error.
util.count() literally counts entries in a table, since #t is documented to be
unreliable. (If you create a list with 5 entries and delete the middle one, #t
might return 2 or it might return 5, but it won't return 4.)
util.join() fixes a curious omission from Luau's string library: like Python's
str.join(), it concatenates all the strings from a list with an optional
separator. We assume that incrementally building a list of strings and then
doing a single allocation for the desired result string is cheaper than
reallocating each of a sequence of partial concatenated results.
Add qtest test that posts individual items to a WaitQueue, waking waiting
consumers to retrieve the next available result. Add test proving that calling
ErrorQueue:Error() propagates the error to all consumers.
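Sketches of the two utilities, written to the behavior described above (not
the verbatim module source):

    local util = {}

    -- count every entry by iterating, since #t is unreliable for tables with holes
    function util.count(t)
        local count = 0
        for _ in pairs(t) do
            count = count + 1
        end
        return count
    end

    -- like Python's str.join(): build the list incrementally, then concatenate
    -- once; table.concat() performs that single final allocation
    function util.join(list, sep)
        return table.concat(list, sep or '')
    end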
|
Also qtest.lua to exercise the queue classes and inspect.lua (from
https://github.com/kikito/inspect.lua) for debugging.
|
Under a debug build LL_ERRS will show a message as well, but a release build
won't show anything and will quit silently, so show a notification when applicable.
|