Age | Commit message | Author |
|
This is a very common pattern, especially in test code, but it appears
elsewhere in the viewer too.
Use it in llluamanager_test.cpp.
|
Recast fiber.yield() as the internal function scheduler().
Move fiber.run() after it so it can call scheduler() as a local function.
Add a new fiber.yield() that also calls scheduler(); its added value over plain
scheduler() is that if scheduler() returns before the caller is ready (because
the configured set_idle() function returned non-nil), it raises an explicit
error rather than returning to its caller. So the caller can assume that when
fiber.yield() returns normally, the calling fiber is ready.
This allows any fiber, including the main thread, to call fiber.yield() or
fiber.wait(). This supports using leap.request(), which posts a request and
then waits on a WaitForReqid, which calls ErrorQueue:Dequeue(), which calls
fiber.wait().
WaitQueue:_wake_waiters() must call fiber.status() instead of
coroutine.status() so it understands the special token 'main'.
Add a new llluamanager_test.cpp test to exercise calling leap.request() from
Lua's main thread.
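A minimal sketch of the yield()-over-scheduler() relationship described above; apart from fiber.yield() and scheduler(), the helper names are placeholders, not the module's real code.

```lua
-- Minimal sketch, not the real fiber.lua: fiber.yield() wraps the internal
-- scheduler() and turns "scheduler gave up early" into an explicit error.
local fiber = {}

-- Placeholder for the real readiness check against the module's ready list.
local function caller_is_ready()
    return true
end

-- Placeholder: run other ready fibers, calling the configured set_idle()
-- function when everything is waiting; a non-nil return makes it give up early.
local function scheduler()
end

function fiber.yield()
    scheduler()
    if not caller_is_ready() then
        -- scheduler() returned before this fiber became ready, so fail loudly
        -- rather than letting the caller assume it can proceed.
        error('fiber.yield(): scheduler() returned before caller became ready')
    end
    -- A normal return means the calling fiber is ready.
end

return fiber
```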
|
Initial implementation of LLLuaFloater
|
This fixes a hang if the Lua script explicitly calls fiber.run() before
LuaState::expr()'s implicit fiber.run() call.
Make fiber.run() remove the calling fiber from the ready list to avoid an
infinite loop when all other fibers have terminated: "You're ready!" "Okay,
yield()." "You're ready again!" ... But don't claim it's waiting, either,
because then when all other fibers have terminated, we'd call idle() in the
vain hope that something would make that one last fiber ready.
WaitQueue:_wake_waiters() needs to wake waiting fibers if the queue's not
empty OR it's been closed.
Introduce leap.WaitFor:close() to close the queue gracefully so that a looping
waiter can terminate, instead of using WaitFor:exception(), which stops the
whole script once it propagates. Make leap's cleanup() function call close().
Streamline fiber.get_name() by using 'or' instead of if ... then.
Streamline fiber.status() and fiber.set_waiting() by using table.find()
instead of a loop.
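The two streamlinings mentioned above look roughly like this; the names, ready, and waiting tables are assumptions standing in for the module's real bookkeeping, and table.find() is the Luau extension the commit refers to.

```lua
-- Sketch of the streamlined helpers (bookkeeping table names are assumed).
local names, ready, waiting = {}, {}, {}

local function get_name(co)
    -- 'or' replaces the old if ... then fallback chain.
    return names[co or coroutine.running()] or 'unknown'
end

local function status(co)
    co = co or coroutine.running()   -- the real module special-cases the main thread
    local s = coroutine.status(co)
    if s == 'suspended' then
        -- table.find() replaces an explicit search loop.
        if table.find(ready, co) then
            return 'ready'
        elseif table.find(waiting, co) then
            return 'waiting'
        end
    end
    return s
end
```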
|
fiber.lua goes beyond coro.lua in that it distinguishes ready suspended
coroutines from waiting suspended coroutines, and presents a rudimentary
scheduler in fiber.yield(). yield() can determine that when all coroutines are
waiting, it's time to retrieve the next incoming event from the viewer.
Moreover, it can detect when all coroutines have completed and exit without
being explicitly told.
fiber.launch() associates a name with each fiber for debugging purposes.
fiber.get_name() retrieves the name of the specified fiber, or the running fiber.
fiber.status() is like coroutine.status(), but can return 'ready' or 'waiting'
instead of 'suspended'.
fiber.yield() leaves the calling fiber ready, but lets other ready fibers run.
fiber.wait() suspends the calling fiber and lets other ready fibers run.
fiber.wake(), called from some other coroutine, returns the passed fiber to
ready status for a future call to fiber.yield().
fiber.run() drives the scheduler to run all fibers to completion.
If, on completion of the subject Lua script, LuaState::expr() detects that the
script loaded fiber.lua, it calls fiber.run() to finish running any dangling
fibers. This lets a script make calls to fiber.launch() and then just fall off
the end, leaving the implicit fiber.run() call to run them all.
fiber.lua is designed to allow the main thread, as well as explicitly launched
coroutines, to make leap.request() calls. This part still needs debugging.
The leap.lua module now configures a fiber.set_idle() function that honors
leap.done(), but otherwise calls get_event_next() and dispatches the next
incoming event.
leap.request() and generate() now leave the reqid stamp in the response. This
lets a caller handle subsequent events with the same reqid, e.g. for
LLLuaFloater.
Remove leap.process(): it has been superseded by fiber.run().
Remove leap.WaitFor:iterate(): unfortunately that would run afoul of the Luau
bug that prevents suspending the calling coroutine within a generic 'for'
iterator function.
Make leap.lua use weak tables to track WaitFor objects.
Make WaitQueue:Dequeue() call fiber.wait() to suspend its caller when the queue
is empty, and Enqueue() call fiber.wake() to set it ready again when a new
item is pushed.
Make llluamanager_test.cpp's leap test script use the fiber module to launch
coroutines, instead of the coro module. Fix a bug in which its drain()
function was inadvertently setting and testing the global 'item' variable
instead of one local to the function. Since some other modules had the same
bug, they were all sharing that global and confusing each other.
Also add printf.lua, providing a printf() function. printf() is short for
print(string.format()), but it can also print tables: anything not a number or
string is formatted using the inspect() function.
Clean up some LL_DEBUGS() output left over from debugging lua_tollsd().
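A hypothetical usage example of the fiber API listed above, assuming the module is reachable via require('fiber'); the fiber names and loop bodies are illustrative only.

```lua
-- Hypothetical usage of the fiber API described above.
local fiber = require('fiber')

fiber.launch('ping', function()
    for i = 1, 3 do
        print('ping', i)
        fiber.yield()          -- stay ready, but give other ready fibers a turn
    end
end)

fiber.launch('pong', function()
    for i = 1, 3 do
        print('pong', i)
        fiber.yield()
    end
end)

-- A script may simply fall off the end and rely on LuaState::expr()'s implicit
-- fiber.run(); calling it explicitly here is equivalent.
fiber.run()
```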
|
Add leap.lua module to mediate LEAP request/response viewer interactions.
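A hedged sketch of what using the module might look like; the pump name and request payload are placeholders, and the exact leap.request() signature is an assumption based on the descriptions elsewhere on this page.

```lua
-- Hypothetical leap.request() call: post a request on a viewer event pump and
-- block the calling coroutine until the reply stamped with the matching reqid arrives.
local leap = require('leap')

local reply = leap.request('SomeEventAPI', {op = 'getStatus'})  -- names are placeholders
print(reply)
```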
|
We weren't passing the WaitForReqid instance to WaitForReqid:wait().
Also remove 'reqid' from responses returned by leap.request() and generate().
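The class of bug being fixed is the usual Lua dot-versus-colon pitfall; this self-contained illustration uses a stand-in class rather than the real WaitForReqid.

```lua
-- Stand-in class to show why a '.' call drops the instance argument.
local Waiter = {}
Waiter.__index = Waiter

function Waiter.new(reqid)
    return setmetatable({reqid = reqid}, Waiter)
end

function Waiter:wait()
    return self.reqid          -- 'self' must be the instance
end

local w = Waiter.new(17)
print(w:wait())                              --> 17: ':' passes w as 'self'
print(pcall(function() return w.wait() end)) --> false, error: 'self' is nil
```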
|
The request() test ensures that the response for a given reqid is routed to the
correct coroutine even when responses arrive out of order.
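A minimal sketch of reqid-keyed routing, the property this test verifies; the table layout is assumed and is not the real leap/WaitForReqid internals.

```lua
-- Replies are matched to waiters by reqid, so arrival order does not matter.
local pending = {}

local function register(reqid)
    pending[reqid] = {}              -- stand-in for a per-request WaitForReqid queue
    return pending[reqid]
end

local function route(reply)
    local queue = pending[reply.reqid]
    if queue then
        table.insert(queue, reply)   -- only the matching waiter sees this reply
    end
end

register(1); register(2)
route({reqid = 2, data = 'second request, answered first'})
route({reqid = 1, data = 'first request, answered second'})
print(pending[1][1].data)   --> first request, answered second
print(pending[2][1].data)   --> second request, answered first
```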
|
coro.resume() checks the ok boolean returned by coroutine.resume() and, if not
ok, propagates the error. This avoids coroutine errors getting swallowed.
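The described check amounts to a thin wrapper over coroutine.resume(); this sketch assumes the module layout and is not the actual coro.lua.

```lua
-- Sketch of a resume() wrapper that re-raises coroutine errors instead of
-- swallowing them.
local coro = {}

function coro.resume(co, ...)
    -- coroutine.resume() returns ok plus either the results or the error value.
    local results = { coroutine.resume(co, ...) }
    local ok = table.remove(results, 1)
    if not ok then
        error(results[1], 0)     -- propagate the coroutine's error to the caller
    end
    return table.unpack(results)
end

return coro
```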
|
Add usage comments at the top.
Add leap.done() function.
Make leap.process() honor leap.done(), and also recognize an incoming nil from
the viewer as meaning it's all done.
Support leap.WaitFor with nil priority to mean "don't self-enable." This
obviates the leap.WaitForReqid:enable() and disable() overrides that did
nothing.
Add diagnostic logging.
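The loop described above behaves roughly like this sketch; the done flag and dispatch step are placeholders, while get_event_next() is the fetch call mentioned elsewhere on this page.

```lua
-- Sketch of a process loop that honors done() and treats nil as shutdown.
local done = false

local function set_done()            -- stand-in for leap.done()
    done = true
end

local function process(get_event_next, dispatch)
    while not done do
        local event = get_event_next()
        if event == nil then         -- nil from the viewer means it's all done
            break
        end
        dispatch(event)              -- hand the event to whichever WaitFor wants it
    end
end
```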
|
That is, skip coroutines that have gone dead since they decided to wait on
Dequeue().
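In Lua terms the guard looks like this; the waiters list and the dequeued item are assumptions about the surrounding WaitQueue code.

```lua
-- Skip coroutines that died after deciding to wait on Dequeue().
local function wake_waiters(waiters, item)
    for _, waiter in ipairs(waiters) do
        if coroutine.status(waiter) ~= 'dead' then
            coroutine.resume(waiter, item)
        end
    end
end
```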
|
Sketch in an initial test that requires one of our bundled Lua modules.
Each time we run Lua, report any error returned by the Lua engine.
Use llcoro::suspendUntilEventOn(LLEventMailDrop) as shorthand for initializing
an explicit LLTempBoundListener with a listen() call that takes a lambda.
|
This helps to explain the lengthy delay when running autobuild configure in a
new developer work area.
|
Make signing and symbol posting jobs conditional on secrets.
|
For WaitQueue, nail down the mechanism for declaring a subclass and for
calling a base-class method from a subclass override. Break out a new
_wake_waiters() method from Enqueue(): we need to do the same from close(), in
case there are waiting consumers. Also, in Lua, 0 is not false.
Instead of bundling a normal/error flag with every queued value, make
ErrorQueue overload its _closed attribute. Once you call ErrorQueue:Error(),
every subsequent Dequeue() call by any consumer will re-raise the same error.
util.count() literally counts entries in a table, since #t is documented to be
unreliable. (If you create a list with 5 entries and delete the middle one, #t
might return 2 or it might return 5, but it won't return 4.)
util.join() fixes a curious omission from Luau's string library: like Python's
str.join(), it concatenates all the strings from a list with an optional
separator. We assume that incrementally building a list of strings and then
doing a single allocation for the desired result string is cheaper than
reallocating each of a sequence of partial concatenated results.
Add a qtest test that posts individual items to a WaitQueue, waking waiting
consumers to retrieve the next available result. Add a test proving that
calling ErrorQueue:Error() propagates the error to all consumers.
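The two utilities described above can be sketched as follows; the module layout is assumed, and util.join() is shown delegating to table.concat(), which already performs the single final allocation.

```lua
-- Sketches of the helpers described above (module layout assumed).
local util = {}

-- Literally count entries, since #t is unreliable once a list has holes.
function util.count(t)
    local n = 0
    for _ in pairs(t) do
        n = n + 1
    end
    return n
end

-- Join a list of strings with an optional separator, like Python's str.join().
function util.join(strings, sep)
    return table.concat(strings, sep or '')
end

return util
```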
|
Also add qtest.lua to exercise the queue classes, and inspect.lua (from
https://github.com/kikito/inspect.lua) for debugging.
|
This is an unusual use case in which lua_tollsd() is called by C++ code
without the Lua runtime farther up the call stack.
|
The build step no longer needs these variables at all: they're used in a
subsequent workflow job.
|
From https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#using-secrets-in-a-workflow :
"Secrets cannot be directly referenced in if: conditionals. Instead, consider
setting secrets as job-level environment variables, then referencing the
environment variables to conditionally run steps in the job."
|
The previous construct produced:
Unrecognized named-value: 'secrets'. Located at position 1 within expression:
secrets.AZURE_KEY_VAULT_URI && ...
|
Specifically, when secrets aren't available (e.g. for external PRs), skip the
affected steps.
|
Mark issues as stale but do not close them.
|
following promotion of secondlife/viewer #673
|
Refactor `require()` to make it easier to reason about Lua stack usage.
|
Add Queue.lua from roblox.com documentation.
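For reference, a FIFO queue along those lines looks roughly like this; it is a hedged sketch, not necessarily the exact code taken from the Roblox documentation, and IsEmpty() is added here alongside the Enqueue()/Dequeue() names used elsewhere on this page.

```lua
-- Minimal FIFO queue sketch with an Enqueue/Dequeue/IsEmpty surface.
local Queue = {}
Queue.__index = Queue

function Queue.new()
    return setmetatable({_first = 1, _last = 0, _items = {}}, Queue)
end

function Queue:IsEmpty()
    return self._first > self._last
end

function Queue:Enqueue(value)
    self._last = self._last + 1
    self._items[self._last] = value
end

function Queue:Dequeue()
    if self:IsEmpty() then
        return nil
    end
    local value = self._items[self._first]
    self._items[self._first] = nil      -- release the slot for collection
    self._first = self._first + 1
    return value
end

return Queue
```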
|