Age | Commit message | Author |
|
# Conflicts:
# autobuild.xml
# indra/llui/llfolderviewmodel.h
# indra/newview/lltexturecache.cpp
# indra/newview/llviewermenu.h
# indra/newview/skins/default/xui/en/menu_wearable_list_item.xml
|
Convoluted due to multiple workarounds. Might be a good idea to spend some time refactoring this, but for now just throttled the checks.
|
# Conflicts:
# indra/newview/llgroupmgr.cpp
|
Coprocedure should stop on 'stop' exception
|
|
# Conflicts:
# indra/newview/llimprocessing.cpp
# indra/newview/llviewerjoystick.cpp
# indra/newview/llviewermenufile.cpp
|
# Conflicts:
# indra/cmake/DirectX.cmake
# indra/newview/llviewerparcelmedia.cpp
# indra/newview/viewer_manifest.py
|
to integer overflow:
    const LLExtStat LL_EXSTAT_RES_RESULT = 2L<<30;
    const LLExtStat LL_EXSTAT_VFS_RESULT = 3L<<30;
This shifts into the sign bit, and clang gets (rightfully) upset about it.
LLExtStat needs to be at least of type U32 to remedy this problem, but
while at it, it makes sense to turn it into what it is: an enum. Turning
it into a class enum has the added benefit that we get type safety mostly
for free.
That incidentally turned up a problem right away:
a call to removeAndCallbackPendingDownloads had status and extstatus
reversed and was thus wrong.
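As an illustration, a minimal sketch of the class-enum approach (the
enumerator names are shortened forms of the constants above; everything else
is assumed, not the actual header):

    enum class LLExtStat : U32   // U32 underlying type: 3 << 30 no longer hits the sign bit
    {
        NONE       = 0,
        RES_RESULT = U32(2) << 30,
        VFS_RESULT = U32(3) << 30
    };

With a scoped enum, a call like removeAndCallbackPendingDownloads(status,
extstatus) with the two arguments swapped no longer converts implicitly, so
it fails to compile instead of silently doing the wrong thing.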
|
|
# Conflicts:
# indra/llcommon/llerror.cpp
# indra/newview/llappviewerwin32.cpp
# indra/newview/llimprocessing.cpp
# indra/newview/llviewerjoystick.cpp
|
Enable the body of the existing ll_debug_socket() function (on Mac as well as
Linux), but use the tag "Socket" so its log messages can be turned on without
emitting *all* debug messages.
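Purely illustrative (the function signature and message text here are
assumptions), the shape of such a tagged debug call:

    void ll_debug_socket(const char* msg, apr_socket_t* apr_sock)
    {
        // silent unless the "Socket" tag is enabled in the log settings
        LL_DEBUGS("Socket") << msg << " on socket " << (void*)apr_sock << LL_ENDL;
    }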
|
Making coproc scoped to the for loop makes sure the destructor gets
called every loop iteration. Keeping its scope outside the for loop
means the pointer stays valid until the next assignment, which happens
inside pop_wait_for when it gets assigned a new value.
Triggering the dtor inside pop_wait_for can lead to a deadlock when,
inside the dtor, a coroutine tries to call enqueueCoprocedure (this
happens). enqueueCoprocedure will then try to grab the lock for
try_push, but this lock is still held by pop_wait_for.
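A sketch of the scoping change (the loop shape, timeout and channel API are
assumed):

    for (;;)
    {
        QueuedCoproc::ptr_t coproc;   // scoped to this iteration
        if (mPendingCoprocs.pop_wait_for(coproc, std::chrono::seconds(10))
                != boost::fibers::channel_op_status::success)
        {
            continue;
        }
        // ... invoke the coprocedure ...
    }   // coproc (and with it the previous coprocedure) is released here,
        // after pop_wait_for has dropped its lock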
|
This reverts commit bf8aea5059f127dcce2fdf613d62c253bb3fa8fd.
Try boost::fibers::buffered_channel again with Boost 1.72.
|
|
The observed crash was due to sharing a stateful global resource (the global
LLMessageSystem instance) between different tasks. Specifically, a coroutine
sets its mMessageReader one way, expecting that value to persist until it's
done with message parsing, but another coroutine sneaks in at a suspension
point and sets it differently.
Introduce LockMessageReader and LockMessageChecker classes, which must be
instantiated by a consumer of the resource. The constructor of each locks a
coroutine-aware mutex, so that for the lifetime of the lock object no other
coroutine can instantiate another.
Refactor the code so that LLMessageSystem::mMessageReader can only be modified
by LockMessageReader, not by direct assignment. mMessageReader is now an
instance of LLMessageReaderPointer, which supports dereferencing and
comparison but not assignment. Only LockMessageReader can change its value.
LockMessageReader addresses the use case in which the specific mMessageReader
value need only persist for the duration of a single method call. Add an
instance in LLMessageHandlerBridge::post().
LockMessageChecker is a subclass of LockMessageReader: both lock the same
mutex. LockMessageChecker addresses the use case in which the specific
mMessageReader value must persist across multiple method calls. Modify the
methods in question to require a LockMessageChecker instance. Provide
LockMessageChecker forwarding methods to facilitate calling the underlying
LLMessageSystem methods via the LockMessageChecker instance.
Add LockMessageChecker instances to LLAppViewer::idleNetwork(), a couple cases
in idle_startup() and LLMessageSystem::establishBidirectionalTrust().
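A rough sketch of the locking pattern (only the class names come from the
text; the mutex type, members and setter mechanics are assumptions):

    class LockMessageReader
    {
    public:
        LockMessageReader(LLMessageReaderPointer& ptr, LLMessageReader* newReader)
        :   mLock(sReaderMutex),      // coroutine-aware mutex, e.g. boost::fibers::mutex
            mPtr(ptr)
        {
            mPtr.mRef = newReader;    // LockMessageReader is the only friend allowed to assign
        }
        ~LockMessageReader()
        {
            mPtr.mRef = nullptr;      // cleared when the lock object goes out of scope
        }
    private:
        std::unique_lock<boost::fibers::mutex> mLock;
        LLMessageReaderPointer& mPtr;
        static boost::fibers::mutex sReaderMutex;
    };

LockMessageChecker then derives from LockMessageReader (sharing the same
mutex) and adds the forwarding methods for the multi-call use case.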
|
# Conflicts:
# indra/newview/llinventorybridge.cpp
# indra/newview/llinventorypanel.cpp
# indra/newview/lltexturectrl.cpp
# indra/newview/skins/default/xui/de/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/es/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/fr/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/it/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/ja/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/pt/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/ru/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/tr/floater_texture_ctrl.xml
# indra/newview/skins/default/xui/zh/floater_texture_ctrl.xml
|
|
# Conflicts:
# indra/newview/pipeline.cpp
|
|
The tactic of pushing an empty QueuedCoproc::ptr_t to signal coprocedure close
only works for LLCoprocedurePools with a single coprocedure (e.g. "Upload" and
"AIS"). Only one coprocedureInvokerCoro() coroutine will pop that empty
pointer and shut down properly -- the rest will continue waiting indefinitely.
Rather than pushing some number of empty pointers, hopefully enough to notify
all consumer coroutines, close() the queue. That will notify as many consumers
as there may be.
That means catching LLThreadSafeQueueInterrupt from popBack(), instead of
detecting an empty pointer.
Also, if a queued coprocedure throws an exception, coprocedureInvokerCoro()
logs it as before -- but instead of rethrowing it, the coroutine now loops
back to wait for more work. Otherwise, the number of coroutines servicing the
queue dwindles.
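A sketch of the consumer loop after this change (structure assumed; popBack(),
LLThreadSafeQueueInterrupt and the "CoProcMgr" tag come from these commits):

    for (;;)
    {
        QueuedCoproc::ptr_t coproc;
        try
        {
            coproc = pendingCoprocs->popBack();   // blocks until work arrives or close()
        }
        catch (const LLThreadSafeQueueInterrupt&)
        {
            break;   // queue closed: every waiting coroutine sees this, not just one
        }
        try
        {
            // ... invoke the coprocedure ...
        }
        catch (const std::exception& e)
        {
            LL_WARNS("CoProcMgr") << "coprocedure threw: " << e.what() << LL_ENDL;
            // loop back for more work instead of rethrowing, so the pool keeps
            // its full complement of servicing coroutines
        }
    }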
|
|
The new LLCoros::Stop exception is intended to terminate long-lived coroutines
-- not interrupt mainstream shutdown processing. Only throw it on an
explicitly-launched coroutine.
Make LLCoros::getName() (used by the above test) static. As with other LLCoros
methods, it might be called after the LLCoros LLSingleton instance has been
deleted. Requiring the caller to call instance() implies a possible need to
also call wasDeleted(). Encapsulate that nuance into a static method instead.
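A sketch of what the static accessor might look like (the non-static helper
name is hypothetical):

    // static
    std::string LLCoros::getName()
    {
        if (wasDeleted())
        {
            return std::string();   // LLCoros singleton already destroyed
        }
        return instance().getNameImpl();   // hypothetical internal helper
    }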
|
|
The new close(void) method simply acquires the logic from
~LLCoprocedureManager() (which now calls close()). It's useful, even if only
in test programs, to be able to shut down all existing LLCoprocedurePools
without having to name them individually -- and without having to destroy the
LLCoprocedureManager singleton instance. Deleting an LLSingleton should be
done only once per process, whereas test programs want to reset the
LLCoprocedureManager after each test.
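A sketch of the new method (the container name is an assumption):

    void LLCoprocedureManager::close()
    {
        // shut down every existing pool without touching the singleton itself
        for (auto& poolEntry : mPoolMap)   // name -> pool pointer
        {
            poolEntry.second->close();
        }
    }

    LLCoprocedureManager::~LLCoprocedureManager()
    {
        close();   // the destructor now just reuses the same logic
    }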
|
|
We've observed buffered_channel::try_push() hanging, which seems very odd. Try
our own LLThreadSafeQueue instead.
|
|
Reinstate LLCoprocedureManager::countPending() and count() methods. These were
removed because boost::fibers::buffered_channel has no size() method, but
since all users run within a single thread, it works to increment and
decrement a simple counter.
Add count information and max queue size to log messages.
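A sketch of the counter (member name assumed); no locking is needed because
every increment and decrement happens on the same thread:

    // in enqueueCoprocedure(), right after the new item is pushed:
    ++mPending;

    // in coprocedureInvokerCoro(), right after an item is popped:
    --mPending;

    size_t LLCoprocedurePool::countPending() const
    {
        return mPending;
    }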
|
|
Since the consuming coroutine LLCoprocedurePool::coprocedureInvokerCoro() has
been observed to outlive the LLCoprocedurePool instance that owns the
CoprocQueue_t, closing that queue isn't enough to keep the coroutine from
crashing at shutdown: accessing a deleted CoprocQueue_t is fatal whether or
not it's been closed.
Make LLCoprocedurePool store a shared_ptr to a heap CoprocQueue_t instance,
and pass that shared_ptr by value to consuming coroutines. That way the
CoprocQueue_t instance is guaranteed to live as long as the last interested
party.
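A sketch of the ownership change (the smart-pointer flavor, member names and
launch call are assumptions):

    typedef std::shared_ptr<CoprocQueue_t> CoprocQueuePtr;

    // the pool owns one reference...
    CoprocQueuePtr mPendingCoprocs{ std::make_shared<CoprocQueue_t>(DEFAULT_QUEUE_SIZE) };

    // ...and each consumer coroutine captures another by value, so the queue
    // stays alive as long as any of them is still running:
    LLCoros::instance().launch(mPoolName,
        [this, pendingCoprocs = mPendingCoprocs]()
        {
            coprocedureInvokerCoro(pendingCoprocs);
        });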
|
By the time "LLApp" listeners are notified that the app is quitting, the
mainloop is no longer running. Even though those listeners do things like
close work queues and inject exceptions into pending promises, any coroutines
waiting on those resources must regain control before they can notice and shut
down properly. Add a final "LLApp" listener that resumes ready coroutines a
few more times.
Make sure every other "LLApp" listener is positioned before that new one.
|
|
Add LLCoros::TempStatus instances around known suspension points so
printActiveCoroutines() can report what each suspended coroutine is waiting
for.
Similarly, sprinkle checkStop() calls at known suspension points.
Make LLApp::setStatus() post an event to a new LLEventPump "LLApp" with a
string corresponding to the status value being set, but only until
~LLEventPumps() -- since setStatus() also gets called very late in the
application's lifetime.
Make postAndSuspendSetup() (used by postAndSuspend(), suspendUntilEventOn(),
postAndSuspendWithTimeout(), suspendUntilEventOnWithTimeout()) add a listener
on the new "LLApp" LLEventPump that pushes the new LLCoros::Stopping exception
to the coroutine waiting on the LLCoros::Promise. Make it return the new
LLBoundListener along with the previous one.
Accordingly, make postAndSuspend() and postAndSuspendWithTimeout() store the
new LLBoundListener returned by postAndSuspendSetup() in a LLTempBoundListener
(as with the previous one) so it will automatically disconnect once the wait
is over.
Make each LLCoprocedurePool instance listen on "LLApp" with a listener that
closes the queue on which new work items are dispatched. Closing the queue
causes the waiting dispatch coroutine to terminate. Store the connection in an
LLTempBoundListener on the LLCoprocedurePool so it will disconnect
automatically on destruction.
Refactor the loop in coprocedureInvokerCoro() to instantiate TempStatus around
the suspending call.
Change a couple spammy LL_INFOS() calls to LL_DEBUGS(). Give all logging calls
in that module a "CoProcMgr" tag to make it straightforward to re-enable the
LL_DEBUGS() calls as desired.
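A sketch of the pool's shutdown hook (the event payload format and the lambda
body are assumptions; "LLApp", LLTempBoundListener and closing the queue are
from the text):

    // in the LLCoprocedurePool constructor; mStatusListener is an
    // LLTempBoundListener, so the connection drops automatically when the
    // pool is destroyed
    mStatusListener = LLEventPumps::instance().obtain("LLApp").listen(
        mPoolName,
        [this](const LLSD& status)
        {
            if (status.asString() != "running")   // e.g. "quitting", "stopped"
            {
                mPendingCoprocs.close();   // terminates the waiting dispatch coroutine
            }
            return false;   // let other "LLApp" listeners see the event too
        });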
|
|
Use new Sync class to make the driving logic wait for the coprocedure to run.
|
Using boost::fibers::unbuffered_channel can block the main thread when calling mPendingCoprocs.push (LLCoprocedurePool::enqueueCoprocedure).
From the documentation:
- If a fiber attempts to send a value through an unbuffered channel and no fiber is waiting to receive the value, the channel will block the sending fiber.
This can happen if LLCoprocedurePool::coprocedureInvokerCoro is running a coroutine and that coroutine calls yield, resuming the viewer's main loop. If, inside
the main loop, someone now calls LLCoprocedurePool::enqueueCoprocedure, push will block, as no fiber is waiting to receive a value right now.
The wait would be in LLCoprocedurePool::coprocedureInvokerCoro at the start of the while loop, but we have not reached that again yet, as LLCoprocedurePool::coprocedureInvokerCoro
yielded before reaching pop_wait_for.
The result is a deadlock.
boost::fibers::buffered_channel will not block as long as there is space in the channel. A size of 4096 (DEFAULT_QUEUE_SIZE) should be plenty for this.
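A sketch of the resulting setup (the typedef and member names follow these
commits; the capacity must be a power of two for boost::fibers::buffered_channel,
which 4096 is):

    #include <boost/fiber/buffered_channel.hpp>

    static const size_t DEFAULT_QUEUE_SIZE = 4096;
    typedef boost::fibers::buffered_channel<QueuedCoproc::ptr_t> CoprocQueue_t;

    CoprocQueue_t mPendingCoprocs{ DEFAULT_QUEUE_SIZE };

    // enqueueCoprocedure() no longer blocks the main thread: push() returns
    // immediately while fewer than DEFAULT_QUEUE_SIZE items are queued.
    mPendingCoprocs.push(coproc);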
|