author    | Nat Goodspeed <nat@lindenlab.com> | 2024-05-07 16:13:03 -0400
committer | Nat Goodspeed <nat@lindenlab.com> | 2024-05-07 16:13:03 -0400
commit    | d4f384b4ec55758a7f2ca4338894ea6cacc98eec (patch)
tree      | 0d9f42611023e7289af64415f0e1580e76692303 /indra
parent    | c89d51c60a6f42a5e279e2c9e06adcf1f13822c0 (diff)
Refactor LLLater -> LL::Timers to accommodate nonpositive repeats.
In the previous design, the tick() method ran each task exactly once.
doPeriodically() was implemented by posting a functor that would, after
calling the specified callable, repost itself at (timestamp + interval).
The trouble with that design is that it required (interval > 0): with a
nonpositive interval, the reposted task would be ready again immediately, so
tick() would keep looping over any timer with a nonpositive repetition
interval without ever returning.
To avoid that, doPeriodically() contained an llassert(interval > 0).
Unfortunately the viewer failed that constraint immediately at login, leading
to the suspicion that eliminating every such usage might require a protracted
search.
Lifting that restriction required a redesign. Now the priority queue stores a
callable returning bool, and the tick() method itself contains the logic to
repost a recurring task -- but defers doing so until after it stops looping
over ready tasks, ensuring that a task with a nonpositive interval will at
least wait until the following tick() call.
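In outline, the new tick() defers rescheduling like this (an illustrative
sketch of the loop structure only, with simplified names; the actual
implementation is in llcallbacklist.cpp in the diff below):

    // sketch: run ready tasks, but repost recurring ones only after the loop
    std::vector<task_t> deferred;
    while (! queue.empty() && queue.top().mTime <= now)
    {
        task_t task{ queue.top() };
        queue.pop();                      // always remove the ready entry
        if (! task.mFunc())               // false means "call me again"
        {
            task.mTime += task.mInterval; // may be <= 0 -- still safe, because
            deferred.push_back(task);     // we repost only after the loop ends
        }
    }
    for (const auto& task : deferred)
        queue.push(task);                 // earliest next run: the next tick()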
This simplifies not only doPeriodically(), but also doAtTime(). The previous
split of doAtTime() into doAtTime1() and doAtTime2() was only to accommodate
the needs of the Periodic functor class. Ditch Periodic.
Per feedback from NickyD, rename doAtTime() to scheduleAt(), which wraps its
passed nullary callable into a callable that unconditionally returns true (so
tick() will run it only once).
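The wrapping is a one-line lambda (a sketch matching the private once()
helper added to llcallbacklist.h in the diff below):

    // wrap a nullary callable so tick() runs it exactly once
    bool_func_t once(nullary_func_t callable)
    {
        return [callable]
        {
            callable();
            return true;    // true tells tick() not to reschedule
        };
    }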
Rename the doAfterInterval() method to scheduleAfter(), which similarly wraps
its nullary callable. However, the legacy doAfterInterval() free function
remains. scheduleAfter() also loses its llassert(seconds > 0).
Rename the doPeriodically() method to scheduleRepeating(). However, the legacy
doPeriodically() free function remains.
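For callers, the change is only the method name; a bool-returning task still
ends its own series by returning true. A hypothetical caller, assuming the
signatures declared in llcallbacklist.h below:

    // hypothetical usage: run five times, one second apart, then stop
    auto counter = std::make_shared<int>(0);
    LL::Timers::handle_t handle =
        LL::Timers::instance().scheduleRepeating(
            [counter]{ return ++*counter >= 5; },  // returning true stops the calls
            1.0f);
    // the legacy free function is equivalent:
    // doPeriodically([counter]{ return ++*counter >= 5; }, 1.0f);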
Add internal scheduleAtRepeating(), whose role is to accept both a specific
timestamp and a repetition interval (which might be ignored, depending on the
callable). scheduleAtRepeating() now contains the real logic to add a task.
Rename getRemaining() to timeUntilCall(), hopefully resolving the question of
"remaining what?"
Expand the std::pair metadata stored in Timers's auxiliary unordered_map to a
Metadata struct containing the repetition interval plus two bools to mediate
deferred cancel() processing. Rename HandleMap to MetaMap, mHandles to mMeta.
Defend against the case in which cancel(handle) is reached during the call to
that handle's callable. Metadata::mRunning is set for the duration of that
call. When cancel() sees mRunning, instead of immediately deleting the map
entries, it sets mCancel. Upon return from the task's callable, tick() notices
mCancel and behaves as if the callable had returned true, stopping the series
of calls.
To guarantee that mRunning doesn't inadvertently remain set even in the case
of an exception, introduce local RAII class TempSet whose constructor accepts
a non-const variable reference and a desired value. The constructor captures
the current value and sets the desired value; the destructor restores the
previous value.
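Its use in tick() is a scoped guard around the callback (excerpted in spirit
from the diff below):

    {
        // mark this entry as running for exactly the duration of the call
        TempSet running(meta->second.mRunning, true);
        done = top.mFunc();  // even if this throws, ~TempSet() restores mRunning
    } // mRunning cleared here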
Defend against an exception thrown by a task's callable, and stop calling that
task. Use LOG_UNHANDLED_EXCEPTION() to report it.
Diffstat (limited to 'indra')
-rw-r--r-- | indra/llcommon/llcallbacklist.cpp | 266
-rw-r--r-- | indra/llcommon/llcallbacklist.h   |  94
-rw-r--r-- | indra/llcommon/lleventfilter.cpp  |  10
-rw-r--r-- | indra/llcommon/lleventfilter.h    |   4
-rw-r--r-- | indra/llcommon/lleventtimer.cpp   |   8
-rw-r--r-- | indra/llcommon/lleventtimer.h     |   2
6 files changed, 233 insertions, 151 deletions
diff --git a/indra/llcommon/llcallbacklist.cpp b/indra/llcommon/llcallbacklist.cpp index 992c83b4d2..52e9860e02 100644 --- a/indra/llcommon/llcallbacklist.cpp +++ b/indra/llcommon/llcallbacklist.cpp @@ -25,6 +25,8 @@ */ #include "llcallbacklist.h" +#include "llexception.h" +#include <vector> // // Member functions @@ -126,116 +128,83 @@ LLCallbackList::handle_t LLCallbackList::doOnIdleRepeating( const bool_func_t& f } /***************************************************************************** -* LLLater +* LL::Timers *****************************************************************************/ -LLLater::LLLater() {} - -LLLater::HandleMap::iterator LLLater::doAtTime1(LLDate::timestamp time) +namespace LL { - // Pick token FIRST to store a self-reference in mQueue's managed node as - // well as in mHandles. Pre-increment to distinguish 0 from any live - // handle_t. - token_t token{ ++mToken }; - // For the moment, store a default-constructed mQueue handle -- - // doAtTime2() will fill in. - auto [iter, inserted]{ mHandles.emplace( - token, - HandleMap::mapped_type{ queue_t::handle_type(), time }) }; - llassert(inserted); - return iter; -} -LLLater::handle_t LLLater::doAtTime2(nullary_func_t callable, HandleMap::iterator iter) -{ - bool first{ mQueue.empty() }; - // HandleMap::iterator references (token, (handle, time)) pair - auto handle{ mQueue.emplace(callable, iter->first, iter->second.second) }; - // Now that we have an mQueue handle_type, store it in mHandles entry. - iter->second.first = handle; - if (first && ! mLive.connected()) - { - // If this is our first entry, register for regular callbacks. - mLive = LLCallbackList::instance().doOnIdleRepeating([this]{ return tick(); }); - } - // Make an LLLater::handle_t from token. - return { iter->first }; -} +Timers::Timers() {} // Call a given callable once at specified timestamp. -LLLater::handle_t LLLater::doAtTime(nullary_func_t callable, LLDate::timestamp time) +Timers::handle_t Timers::scheduleAt(nullary_func_t callable, LLDate::timestamp time) { - return doAtTime2(callable, doAtTime1(time)); + // tick() assumes you want to run periodically until you return true. + // Schedule a task that returns true after a single call. + return scheduleAtRepeating(once(callable), time, 0); } // Call a given callable once after specified interval. -LLLater::handle_t LLLater::doAfterInterval(nullary_func_t callable, F32 seconds) +Timers::handle_t Timers::scheduleAfter(nullary_func_t callable, F32 seconds) { - // Passing 0 is a slightly more expensive way of calling - // LLCallbackList::doOnIdleOneTime(). Are we sure the caller is correct? - // (If there's a valid use case, remove the llassert() and carry on.) - llassert(seconds > 0); - return doAtTime(callable, LLDate::now().secondsSinceEpoch() + seconds); + return scheduleRepeating(once(callable), seconds); } -// For doPeriodically(), we need a struct rather than a lambda because a -// struct, unlike a lambda, has access to 'this'. -struct LLLater::Periodic +// Call a given callable every specified number of seconds, until it returns true. 
+Timers::handle_t Timers::scheduleRepeating(bool_func_t callable, F32 seconds) { - LLLater* mLater; - HandleMap::iterator mHandleEntry; - bool_func_t mCallable; - F32 mSeconds; + return scheduleAtRepeating(callable, now() + seconds, seconds); +} - void operator()() +Timers::handle_t Timers::scheduleAtRepeating(bool_func_t callable, + LLDate::timestamp time, F32 interval) +{ + // Pick token FIRST to store a self-reference in mQueue's managed node as + // well as in mMeta. Pre-increment to distinguish 0 from any live + // handle_t. + token_t token{ ++mToken }; + // For the moment, store a default-constructed mQueue handle -- + // we'll fill in later. + auto [iter, inserted] = mMeta.emplace(token, + Metadata{ queue_t::handle_type(), time, interval }); + // It's important that our token is unique. + llassert(inserted); + + // Remember whether this is the first entry in mQueue + bool first{ mQueue.empty() }; + auto handle{ mQueue.emplace(callable, token, time) }; + // Now that we have an mQueue handle_type, store it in mMeta entry. + iter->second.mHandle = handle; + if (first && ! mLive.connected()) { - if (! mCallable()) - { - // Returning false means please schedule another call. - // Don't call doAfterInterval(), which rereads LLDate::now(), - // since that would defer by however long it took us to wake - // up and notice plus however long callable() took to run. - // Bump the time in our mHandles entry so getRemaining() can see. - // HandleMap::iterator references (token, (handle, time)) pair. - mHandleEntry->second.second += mSeconds; - mLater->doAtTime2(*this, mHandleEntry); - } + // If this is our first entry, register for regular callbacks. + mLive = LLCallbackList::instance().doOnIdleRepeating([this]{ return tick(); }); } -}; - -// Call a given callable every specified number of seconds, until it returns true. -LLLater::handle_t LLLater::doPeriodically(bool_func_t callable, F32 seconds) -{ - // Passing seconds <= 0 will produce an infinite loop. - llassert(seconds > 0); - auto iter{ doAtTime1(LLDate::now().secondsSinceEpoch() + seconds) }; - // The whole reason we split doAtTime() into doAtTime1() and doAtTime2() - // is to be able to bind the mHandles entry into Periodic. - return doAtTime2(Periodic{ this, iter, callable, seconds }, iter); + // Make an Timers::handle_t from token. + return { token }; } -bool LLLater::isRunning(handle_t timer) const +bool Timers::isRunning(handle_t timer) const { // A default-constructed timer isn't running. - // A timer we don't find in mHandles has fired or been canceled. - return timer && mHandles.find(timer.token) != mHandles.end(); + // A timer we don't find in mMeta has fired or been canceled. + return timer && mMeta.find(timer.token) != mMeta.end(); } -F32 LLLater::getRemaining(handle_t timer) const +F32 Timers::timeUntilCall(handle_t timer) const { - auto found{ mHandles.find(timer.token) }; - if (found == mHandles.end()) + MetaMap::const_iterator found; + if ((! timer) || (found = mMeta.find(timer.token)) == mMeta.end()) { return 0.f; } else { - // HandleMap::iterator references (token, (handle, time)) pair - return found->second.second - LLDate::now().secondsSinceEpoch(); + return found->second.mTime - now(); } } -// Cancel a future timer set by doAtTime(), doAfterInterval(), doPeriodically() -bool LLLater::cancel(handle_t& timer) +// Cancel a future timer set by scheduleAt(), scheduleAfter(), scheduleRepeating() +bool Timers::cancel(handle_t& timer) { // For exception safety, capture and clear timer before canceling. 
// Once we've canceled this handle, don't retain the live handle. @@ -244,7 +213,7 @@ bool LLLater::cancel(handle_t& timer) return cancel(ctimer); } -bool LLLater::cancel(const handle_t& timer) +bool Timers::cancel(const handle_t& timer) { if (! timer) { @@ -257,27 +226,38 @@ bool LLLater::cancel(const handle_t& timer) // Nor do we find any documented way to ask whether a given handle still // tracks a valid heap node. That's why we capture all returned handles in - // mHandles and validate against that collection. What about the pop() + // mMeta and validate against that collection. What about the pop() // call in tick()? How to map from the top() value back to the // corresponding handle_t? That's why we store func_at::mToken. // fibonacci_heap provides a pair of begin()/end() methods to iterate over // all nodes (NOT in heap order), plus a function to convert from such - // iterators to handles. Without mHandles, that would be our only chance + // iterators to handles. Without mMeta, that would be our only chance // to validate. - auto found{ mHandles.find(timer.token) }; - if (found == mHandles.end()) + auto found{ mMeta.find(timer.token) }; + if (found == mMeta.end()) { // we don't recognize this handle -- maybe the timer has already // fired, maybe it was previously canceled. return false; } - // HandleMap::iterator references (token, (handle, time)) pair. + // Funny case: what if the callback directly or indirectly reaches a + // cancel() call for its own handle? + if (found->second.mRunning) + { + // tick() has special logic to defer the actual deletion until the + // callback has returned + found->second.mCancel = true; + // this handle does in fact reference a live timer, + // which we're going to cancel when we get a chance + return true; + } + // Erase from mQueue the handle_type referenced by timer.token. - mQueue.erase(found->second.first); - // before erasing timer.token from mHandles - mHandles.erase(found); + mQueue.erase(found->second.mHandle); + // before erasing the mMeta entry + mMeta.erase(found); if (mQueue.empty()) { // If that was the last active timer, unregister for callbacks. @@ -289,7 +269,33 @@ bool LLLater::cancel(const handle_t& timer) return true; } -bool LLLater::tick() +// RAII class to set specified variable to specified value +// only for the duration of containing scope +template <typename VAR, typename VALUE> +class TempSet +{ +public: + TempSet(VAR& var, const VALUE& value): + mVar(var), + mOldValue(mVar) + { + mVar = value; + } + + TempSet(const TempSet&) = delete; + TempSet& operator=(const TempSet&) = delete; + + ~TempSet() + { + mVar = mOldValue; + } + +private: + VAR& mVar; + VALUE mOldValue; +}; + +bool Timers::tick() { // Fetch current time only on entry, even though running some mQueue task // may take long enough that the next one after would become ready. We're @@ -297,34 +303,84 @@ bool LLLater::tick() // starve it if we have a sequence of tasks that take nontrivial time. auto now{ LLDate::now().secondsSinceEpoch() }; auto cutoff{ now + TIMESLICE }; + + // Capture tasks we've processed but that want to be rescheduled. + // Defer rescheduling them immediately to avoid getting stuck looping over + // a recurring task with a nonpositive interval. + std::vector<std::pair<MetaMap::iterator, func_at>> deferred; + while (! 
mQueue.empty()) { auto& top{ mQueue.top() }; if (top.mTime > now) { // we've hit an entry that's still in the future: - // done with this tick(), but schedule another call - return false; + // done with this tick() + break; } if (LLDate::now().secondsSinceEpoch() > cutoff) { // we still have ready tasks, but we've already eaten too much - // time this tick() -- defer until next tick() -- call again - return false; + // time this tick() -- defer until next tick() + break; } - // Found a ready task. Hate to copy stuff, but -- what if the task - // indirectly ends up trying to cancel a handle referencing its own - // node in mQueue? If the task has any state, that would be Bad. Copy - // the node before running it. - auto current{ top }; - // remove the mHandles entry referencing this task - mHandles.erase(current.mToken); - // before removing the mQueue task entry itself + // Found a ready task. Look up its corresponding mMeta entry. + auto meta{ mMeta.find(top.mToken) }; + llassert(meta != mMeta.end()); + bool done; + { + // Mark our mMeta entry so we don't cancel this timer while its + // callback is running, but unmark it even in case of exception. + TempSet running(meta->second.mRunning, true); + // run the callback and capture its desire to end repetition + try + { + done = top.mFunc(); + } + catch (...) + { + // Don't crash if a timer callable throws. + // But don't continue calling that callable, either. + done = true; + LOG_UNHANDLED_EXCEPTION("LL::Timers"); + } + } // clear mRunning + + // If mFunc() returned true (all done, stop calling me) or + // meta->mCancel (somebody tried to cancel this timer during the + // callback call), then we're done: clean up both entries. + if (done || meta->second.mCancel) + { + // remove the mMeta entry referencing this task + mMeta.erase(meta); + } + else + { + // mFunc returned false, and nobody asked to cancel: + // continue calling this task at a future time. + meta->second.mTime += meta->second.mInterval; + // capture this task to reschedule once we break loop + deferred.push_back({meta, top}); + // update func_at's mTime to match meta's + deferred.back().second.mTime = meta->second.mTime; + } + // Remove the mQueue entry regardless, or we risk stalling the + // queue right here if we have a nonpositive interval. mQueue.pop(); - // okay, NOW run - current.mFunc(); } - // queue is empty: stop callbacks - return true; + + // Now reschedule any tasks that need to be rescheduled. + for (const auto& [meta, task] : deferred) + { + auto handle{ mQueue.push(task) }; + // track this new mQueue handle_type + meta->second.mHandle = handle; + } + + // If, after all the twiddling above, our queue ended up empty, + // stop calling every tick. 
+ return mQueue.empty(); } + +} // namespace LL diff --git a/indra/llcommon/llcallbacklist.h b/indra/llcommon/llcallbacklist.h index 3ff1aad04e..f9b15867ef 100644 --- a/indra/llcommon/llcallbacklist.h +++ b/indra/llcommon/llcallbacklist.h @@ -101,11 +101,14 @@ LLCallbackList::handle_t doOnIdleRepeating(bool_func_t callable) } /***************************************************************************** -* LLLater: callbacks at some future time +* LL::Timers: callbacks at some future time *****************************************************************************/ -class LLLater: public LLSingleton<LLLater> +namespace LL { - LLSINGLETON(LLLater); + +class Timers: public LLSingleton<Timers> +{ + LLSINGLETON(Timers); using token_t = U32; @@ -113,11 +116,14 @@ class LLLater: public LLSingleton<LLLater> // a tuple, because we need to define the comparison operator. struct func_at { - nullary_func_t mFunc; + // callback to run when this timer fires + bool_func_t mFunc; + // key to look up metadata in mHandles token_t mToken; + // time at which this timer is supposed to fire LLDate::timestamp mTime; - func_at(const nullary_func_t& func, token_t token, LLDate::timestamp tm): + func_at(const bool_func_t& func, token_t token, LLDate::timestamp tm): mFunc(func), mToken(token), mTime(tm) @@ -146,7 +152,7 @@ public: class handle_t { private: - friend class LLLater; + friend class Timers; token_t token; public: handle_t(token_t token=0): token(token) {} @@ -156,33 +162,33 @@ public: }; // Call a given callable once at specified timestamp. - handle_t doAtTime(nullary_func_t callable, LLDate::timestamp time); + handle_t scheduleAt(nullary_func_t callable, LLDate::timestamp time); // Call a given callable once after specified interval. - handle_t doAfterInterval(nullary_func_t callable, F32 seconds); + handle_t scheduleAfter(nullary_func_t callable, F32 seconds); // Call a given callable every specified number of seconds, until it returns true. - handle_t doPeriodically(bool_func_t callable, F32 seconds); + handle_t scheduleRepeating(bool_func_t callable, F32 seconds); // test whether specified handle is still live bool isRunning(handle_t timer) const; // check remaining time - F32 getRemaining(handle_t timer) const; + F32 timeUntilCall(handle_t timer) const; - // Cancel a future timer set by doAtTime(), doAfterInterval(), doPeriodically(). - // Return true iff the handle corresponds to a live timer. + // Cancel a future timer set by scheduleAt(), scheduleAfter(), scheduleRepeating(). + // Return true if and only if the handle corresponds to a live timer. bool cancel(const handle_t& timer); // If we're canceling a non-const handle_t, also clear it so we need not // cancel again. bool cancel(handle_t& timer); - // Store a handle_t returned by doAtTime(), doAfterInterval() or - // doPeriodically() in a temp_handle_t to cancel() automatically on + // Store a handle_t returned by scheduleAt(), scheduleAfter() or + // scheduleRepeating() in a temp_handle_t to cancel() automatically on // destruction of the temp_handle_t. class temp_handle_t { public: - temp_handle_t() {} + temp_handle_t() = default; temp_handle_t(const handle_t& hdl): mHandle(hdl) {} temp_handle_t(const temp_handle_t&) = delete; temp_handle_t(temp_handle_t&&) = default; @@ -204,11 +210,11 @@ public: // temp_handle_t should be usable wherever handle_t is operator handle_t() const { return mHandle; } // If we're dealing with a non-const temp_handle_t, pass a reference - // to our handle_t member (e.g. to LLLater::cancel()). 
+ // to our handle_t member (e.g. to Timers::cancel()). operator handle_t&() { return mHandle; } // For those in the know, provide a cancel() method of our own that - // avoids LLLater::instance() lookup when mHandle isn't live. + // avoids Timers::instance() lookup when mHandle isn't live. bool cancel() { if (! mHandle) @@ -217,7 +223,7 @@ public: } else { - return LLLater::instance().cancel(mHandle); + return Timers::instance().cancel(mHandle); } } @@ -231,44 +237,64 @@ public: }; private: + handle_t scheduleAtRepeating(bool_func_t callable, LLDate::timestamp time, F32 interval); + LLDate::timestamp now() const { return LLDate::now().secondsSinceEpoch(); } + // wrap a nullary_func_t with a bool_func_t that will only execute once + bool_func_t once(nullary_func_t callable) + { + return [callable] + { + callable(); + return true; + }; + } bool tick(); // NOTE: We don't lock our data members because it doesn't make sense to - // register cross-thread callbacks. If we start wanting to use them on + // register cross-thread callbacks. If we start wanting to use Timers on // threads other than the main thread, it would make more sense to make // our data members thread_local than to lock them. // the heap aka priority queue queue_t mQueue; - // handles we've returned that haven't yet canceled - using HandleMap = std::unordered_map< - token_t, - std::pair<queue_t::handle_type, LLDate::timestamp>>; - HandleMap mHandles; + + // metadata about a given task + struct Metadata + { + // handle to mQueue entry + queue_t::handle_type mHandle; + // time at which this timer is supposed to fire + LLDate::timestamp mTime; + // interval at which this timer is supposed to fire repeatedly + F32 mInterval{ 0 }; + // mFunc is currently running: don't delete this entry + bool mRunning{ false }; + // cancel() was called while mFunc was running: deferred cancel + bool mCancel{ false }; + }; + + using MetaMap = std::unordered_map<token_t, Metadata>; + MetaMap mMeta; token_t mToken{ 0 }; // While mQueue is non-empty, register for regular callbacks. LLCallbackList::temp_handle_t mLive; - - struct Periodic; - - // internal implementation for doAtTime() - HandleMap::iterator doAtTime1(LLDate::timestamp time); - handle_t doAtTime2(nullary_func_t callable, HandleMap::iterator iter); }; +} // namespace LL + /*-------------------- legacy names in global namespace --------------------*/ // Call a given callable once after specified interval. inline -LLLater::handle_t doAfterInterval(nullary_func_t callable, F32 seconds) +LL::Timers::handle_t doAfterInterval(nullary_func_t callable, F32 seconds) { - return LLLater::instance().doAfterInterval(callable, seconds); + return LL::Timers::instance().scheduleAfter(callable, seconds); } // Call a given callable every specified number of seconds, until it returns true. 
inline -LLLater::handle_t doPeriodically(bool_func_t callable, F32 seconds) +LL::Timers::handle_t doPeriodically(bool_func_t callable, F32 seconds) { - return LLLater::instance().doPeriodically(callable, seconds); + return LL::Timers::instance().scheduleRepeating(callable, seconds); } #endif diff --git a/indra/llcommon/lleventfilter.cpp b/indra/llcommon/lleventfilter.cpp index e72ae7ad33..ad61e9298a 100644 --- a/indra/llcommon/lleventfilter.cpp +++ b/indra/llcommon/lleventfilter.cpp @@ -87,7 +87,7 @@ LLEventTimeout::LLEventTimeout(LLEventPump& source): void LLEventTimeout::actionAfter(F32 seconds, const Action& action) { - mTimer = LLLater::instance().doAfterInterval(action, seconds); + mTimer = LL::Timers::instance().scheduleAfter(action, seconds); } void LLEventTimeout::errorAfter(F32 seconds, const std::string& message) @@ -118,7 +118,7 @@ void LLEventTimeout::cancel() bool LLEventTimeout::running() const { - return LLLater::instance().isRunning(mTimer); + return LL::Timers::instance().isRunning(mTimer); } /***************************************************************************** @@ -277,17 +277,17 @@ F32 LLEventThrottle::getDelay() const void LLEventThrottle::alarmActionAfter(F32 interval, const LLEventTimeout::Action& action) { - mAlarm = LLLater::instance().doAfterInterval(action, interval); + mAlarm = LL::Timers::instance().scheduleAfter(action, interval); } bool LLEventThrottle::alarmRunning() const { - return LLLater::instance().isRunning(mAlarm); + return LL::Timers::instance().isRunning(mAlarm); } void LLEventThrottle::alarmCancel() { - LLLater::instance().cancel(mAlarm); + LL::Timers::instance().cancel(mAlarm); } void LLEventThrottle::timerSet(F32 interval) diff --git a/indra/llcommon/lleventfilter.h b/indra/llcommon/lleventfilter.h index 1deb6f0f4c..b39791c560 100644 --- a/indra/llcommon/lleventfilter.h +++ b/indra/llcommon/lleventfilter.h @@ -191,7 +191,7 @@ public: private: // Use a temp_handle_t so it's canceled on destruction. - LLLater::temp_handle_t mTimer; + LL::Timers::temp_handle_t mTimer; }; /** @@ -300,7 +300,7 @@ private: F32 mInterval; // use this to arrange a deferred flush() call - LLLater::handle_t mAlarm; + LL::Timers::handle_t mAlarm; }; /** diff --git a/indra/llcommon/lleventtimer.cpp b/indra/llcommon/lleventtimer.cpp index 0f8d1e636f..1d2da93683 100644 --- a/indra/llcommon/lleventtimer.cpp +++ b/indra/llcommon/lleventtimer.cpp @@ -49,20 +49,20 @@ LLEventTimer::~LLEventTimer() void LLEventTimer::start() { - mTimer = LLLater::instance().doPeriodically([this]{ return tick(); }, mPeriod); + mTimer = LL::Timers::instance().scheduleRepeating([this]{ return tick(); }, mPeriod); } void LLEventTimer::stop() { - LLLater::instance().cancel(mTimer); + LL::Timers::instance().cancel(mTimer); } bool LLEventTimer::isRunning() { - return LLLater::instance().isRunning(mTimer); + return LL::Timers::instance().isRunning(mTimer); } F32 LLEventTimer::getRemaining() { - return LLLater::instance().getRemaining(mTimer); + return LL::Timers::instance().timeUntilCall(mTimer); } diff --git a/indra/llcommon/lleventtimer.h b/indra/llcommon/lleventtimer.h index 05d8bc038d..a325c704e0 100644 --- a/indra/llcommon/lleventtimer.h +++ b/indra/llcommon/lleventtimer.h @@ -50,7 +50,7 @@ public: virtual bool tick() = 0; protected: - LLLater::temp_handle_t mTimer; + LL::Timers::temp_handle_t mTimer; F32 mPeriod; }; |