[WebRTC] Rework device handling sequence so that we can handle unplugging/re-plugging devices (#4593)
* [WebRTC] Rework device handling sequence so that we can handle unplugging/re-plugging devices
The device handling code was not processing device updates in the proper sequence, and
features like AEC use both input and output devices. Devices like headsets are both an
input and an output, so unplugging them resulted in various mute conditions and sometimes even a crash.
Now, we update both capture and render devices at once in the proper sequence.
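A minimal sketch of the sequence, assuming a webrtc::AudioDeviceModule pointer named adm (the function name and index parameters are hypothetical, not the actual viewer code):
```cpp
// Hypothetical sketch: apply capture and render device changes as a single
// update, stopping both streams first so AEC always sees a consistent pair.
void updateDevicePair(webrtc::AudioDeviceModule* adm,
                      uint16_t captureIndex,
                      uint16_t renderIndex)
{
    // Stop both streams before touching either device.
    if (adm->Recording()) adm->StopRecording();
    if (adm->Playing())   adm->StopPlayout();

    // Swap both devices while everything is quiescent.
    adm->SetRecordingDevice(captureIndex);
    adm->SetPlayoutDevice(renderIndex);

    // Re-initialize and restart in a fixed order.
    adm->InitRecording();
    adm->InitPlayout();
    adm->StartRecording();
    adm->StartPlayout();
}
```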
Test Guidance:
* Bring two users to the same place in a webrtc-enabled region.
* The 'listening' one should have a headset or similar device set as 'Default'
* Press 'talk' on one, and verify the other can hear.
* Unplug the headset from the listening one.
* Validate that audio changes from the headset to the speakers.
* Plug the headset back in.
* Validate that audio changes from speakers to headset.
* Do the same type of test with the viewer wearing the headset doing the talking.
* The microphone used should switch from the headset to the computer's microphone (it should have one)
Do various other device tests, such as setting devices explicitly, messing with the device selector, etc.
* Fix a race condition when multiple device-change requests come in at once
* Update to m137
The primary feature of this commit is to update libwebrtc from m114
to m137. This is needed to make webrtc buildable, as m114 is not buildable
by the current toolset.
m137 made some changes to the API, which required renaming some of the calls
or changing their namespaces.
Additionally, this PR moves from a callback mechanism for gathering the energy
levels for tuning to a wrapper AudioDeviceModule, which gives us more control
over the audio stream.
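As an illustration of what the wrapper buys us, here is a self-contained sketch of the kind of per-frame energy computation it can perform on recorded PCM before forwarding it on (the function name is ours, not the viewer's):
```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

// RMS energy of one interleaved int16 PCM frame, normalized to [0, 1],
// suitable for driving a tuning-mode level meter.
double frameRmsEnergy(const int16_t* pcm, size_t samplesPerChannel, size_t channels)
{
    const size_t n = samplesPerChannel * channels;
    if (n == 0) return 0.0;
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i)
    {
        const double s = pcm[i] / 32768.0;  // normalize to [-1, 1)
        sum += s * s;
    }
    return std::sqrt(sum / n);
}
```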
Finally, the new m137-based webrtc has been updated to allow for 192 kHz audio
streams.
* Properly pass the observer setting into the inner audio device module
* Update to m137 and get rid of some noise
This change updates to m137 from m114, which required a few API changes.
Additionally, this fixes the hiss that happens shortly after someone unmutes: https://github.com/secondlife/server/issues/2094
There was also an issue with a slight amount of repeated audio after unmuting if there was audio right before unmuting,
because the audio processing and buffering still held audio from the previous speaking session. Now, we inject nearly a half second of
silence into the audio buffers/processor after unmuting to flush things.
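A sketch of the flush, assuming a callback that consumes standard 10 ms PCM frames (the names here are illustrative):
```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Push ~0.5 s of silence through the capture path right after unmute so any
// stale audio left in the processing/buffering chain gets flushed out.
void flushWithSilence(uint32_t sampleRate, size_t channels,
                      const std::function<void(const int16_t*, size_t, size_t)>& processCaptureFrame)
{
    const size_t samplesPer10ms = sampleRate / 100;   // one webrtc frame
    std::vector<int16_t> silence(samplesPer10ms * channels, 0);
    for (int frame = 0; frame < 50; ++frame)          // 50 x 10 ms ~= 0.5 s
    {
        processCaptureFrame(silence.data(), samplesPer10ms, channels);
    }
}
```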
* Install NSIS on Windows
* Use the newer digital AGC pipeline
m137 improved the AGC pipeline, and the existing analog style is going away,
so move to the new digital pipeline.
Also, tweak the audio levels so that we don't see in-world bars when tuning,
one's own bars appear a reasonable size, etc.
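The switch itself is an APM configuration change; a sketch assuming an rtc::scoped_refptr<webrtc::AudioProcessing> named apm (field names per recent webrtc releases, exact settings may differ in the viewer):
```cpp
webrtc::AudioProcessing::Config config = apm->GetConfig();
config.gain_controller1.enabled = false;                  // legacy analog-style AGC
config.gain_controller2.enabled = true;                   // new digital pipeline
config.gain_controller2.adaptive_digital.enabled = true;  // adapt gain automatically
apm->ApplyConfig(config);
```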
* Install NSIS during the Windows signing and package build step
* Try pinning the packaging job to windows-2022 to deal with missing NSIS
* Adjust gain calculation and audio level calculations for tuning and peer connections
* Update with mac universal webrtc build
* Tune the voice indicators, both in tuning mode and in-world for oneself.
* Redo device deployment to handle cases where multiple deploy requests pile up
Also, mute when leaving webrtc-enabled regions or parcels,
and unmute when voice comes back.
* Fix pre-commit issue
|
|
regions.
Muting was handled somewhat haphazardly in the code; it has now been straightened out and should
prevent echo.
Also, code was added so that the webrtc code does not attempt connections to non-webrtc regions.
|
|
1. set_enabled(false) failed to apply; force-set it to trigger observers
and remove the icon.
2. Don't set audio devices if voice was disabled.
|
|
|
|
When transitioning from mic-on hands-free mode to mic off,
it's expected that the audio stream would return to stereo.
Improper logic in the mac device code in webrtc was preventing
that.
|
|
The microphone issue was causing a short burst of sound, and was
causing Bluetooth headsets to switch to hands-free/one-channel mode,
which is disruptive.
Also, update webrtc to deal with an issue where AirPods were garbled
after coming out of hands-free mode.
|
|
Fixes prevent attempting to start playout/recording before the devices
are set up, prevent restarting playout/recording that is already running,
prevent attempts to stop when not playing/recording, and so on.
This should address the case where audio device changes can cause
an assert. It should also address the case where audio was unnecessarily played
or transmitted when connecting.
And, when voice is disabled, the audio devices are not set up to play/record
so there should be no disruption of bluetooth music from other apps.
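A sketch of the guard pattern on the playout side (recording is analogous), assuming a webrtc::AudioDeviceModule pointer named adm; the function names are ours:
```cpp
// Only start playout once the device is initialized and not already playing,
// and only stop when actually playing, avoiding the asserts described above.
void startPlayoutGuarded(webrtc::AudioDeviceModule* adm)
{
    if (!adm->PlayoutIsInitialized() && adm->InitPlayout() != 0)
    {
        return;  // device not set up yet; try again after device selection
    }
    if (!adm->Playing())
    {
        adm->StartPlayout();
    }
}

void stopPlayoutGuarded(webrtc::AudioDeviceModule* adm)
{
    if (adm->Playing())
    {
        adm->StopPlayout();
    }
}
```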
|
|
WebRTC logs now pass out of the webrtc library into a logging sink,
which converts them into SecondLife.log compatible logging calls.
This includes fatal errors and asserts, which are now logged into
SecondLife.log, and should be available in the crash logger.
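A sketch of such a sink using webrtc's rtc::LogSink interface (the class name and severity threshold are illustrative; LL_INFOS/LL_ENDL are the viewer's logging macros):
```cpp
class WebRTCLogSink : public rtc::LogSink
{
public:
    // Called by webrtc for every log line; forward into SecondLife.log.
    void OnLogMessage(const std::string& message) override
    {
        LL_INFOS("WebRTC") << message << LL_ENDL;
    }
};

// Registered once at voice startup, e.g.:
//   static WebRTCLogSink sink;
//   rtc::LogMessage::AddLogToStream(&sink, rtc::LS_VERBOSE);
```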
|
|
|
|
Previously, there were two places audio gain could be controlled:
- the device manager
- the audio track
The device manager audio gain control sets the system gain for all applications,
not just the webrtc application.
The audio track gain is applied well after the audio processing stage where we want it to happen.
So, gain control was added to the existing custom audio processor, which previously only
handled calculating and retrieving the audio levels.
After these changes, the microphone gain slider does impact the audio volume heard by peers.
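A sketch of the gain step added to the capture-side processing (the function name and clamping choice are ours):
```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Scale interleaved int16 capture samples by the mic gain slider value,
// clamping so loud input doesn't wrap around and distort.
void applyMicGain(int16_t* pcm, size_t sampleCount, float gain)
{
    for (size_t i = 0; i < sampleCount; ++i)
    {
        const float scaled = static_cast<float>(pcm[i]) * gain;
        pcm[i] = static_cast<int16_t>(std::clamp(scaled, -32768.0f, 32767.0f));
    }
}
```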
|
|
|
|
|
|
other jobs might be using it.
|
|
When creating a new connection, the viewer builds a data channel interface.
It is then handed a new one, which is a proxy. The viewer uses the new one,
and therefore must unregister the callbacks from the old one.
Also, update the position data before sending it after the join is sent.
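A sketch of the hand-off (the class and member names are illustrative; RegisterObserver/UnregisterObserver are the real webrtc::DataChannelInterface calls, and the class is assumed to implement webrtc::DataChannelObserver):
```cpp
// Member function on a hypothetical connection class: when the proxy data
// channel arrives, detach the observer from the stale interface first.
void Connection::replaceDataChannel(
    rtc::scoped_refptr<webrtc::DataChannelInterface> proxyChannel)
{
    if (mDataChannel)
    {
        mDataChannel->UnregisterObserver();  // old interface must not call us back
    }
    mDataChannel = proxyChannel;
    mDataChannel->RegisterObserver(this);    // callbacks now come from the proxy
}
```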
|
|
|
|
|
|
When parcel voice permissions and region/parcel-only voice
settings change, a callback will be made to the viewer with
new voice credential information. For webrtc, this means
either just the uuid of the voice channel, or nothing if
voice is disabled.
This change looks at that callback and the channel id,
and sets the appropriate flags on the parcel/region as needed
which will cause voice to be renegotiated.
Also, there was a race condition if the voice connect attempt
was made before caps were retrieved, which would have resulted
in full renegotiate attempts. Now, just wait until the cap
comes in and continue.
|
|
|
|
The simulator will send a chatterbox notification that
voice is no longer in use for a given channel, and
the viewer should take that as a case where the peer
does not want voice, hence it's a decline.
|
|
|
|
Windows and Mac/Linux behave slightly differently with respect
to Default devices, in that mac/linux (I think) simply assumes
the device at index 0 is the default one, and windows has a
separate API for enabling the default device.
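A sketch of the resulting platform split (assuming a webrtc::AudioDeviceModule pointer named adm; kDefaultDevice is webrtc's Windows-only device selector):
```cpp
#if defined(_WIN32)
    // Windows: explicit API for tracking the system default device.
    adm->SetPlayoutDevice(webrtc::AudioDeviceModule::kDefaultDevice);
#else
    // Mac/Linux: the device at index 0 is treated as the default.
    adm->SetPlayoutDevice(0);
#endif
```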
|
|
* The sampling rate was set to 8 kHz for audio processing, which was
causing a 'bands' mismatch with the echo canceler.
* Some funny business with lambdas and captures was causing
a heap crash with respect to function parameters.
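An illustrative reconstruction of that lambda pitfall (not the actual viewer code):
```cpp
#include <functional>
#include <string>

void useDevice(const std::string& name);  // stand-in for the real work

std::function<void()> makeDeviceTask(const std::string& device)
{
    // BAD: capturing the parameter by reference; by the time the deferred
    // task runs, the referenced string is gone and the lambda reads freed
    // memory, corrupting the heap.
    // return [&device] { useDevice(device); };

    // GOOD: capture by value so the closure owns its own copy.
    return [device] { useDevice(device); };
}
```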
|
|
into roxie/webrtc-voice
|
|
|
|
Plumb audio settings through from webrtc to the sound preferences
UI (still needs some tweaking, of course.)
Also, choose stun servers based on grid. Ultimately, the stun
servers will be passed up via login or something.
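A sketch of the grid-based selection (the hostnames are placeholders and isProductionGrid is an assumed flag, not the viewer's actual names):
```cpp
webrtc::PeerConnectionInterface::RTCConfiguration config;
webrtc::PeerConnectionInterface::IceServer stun;
stun.urls.push_back(isProductionGrid ? "stun:stun.prod.example.com:3478"
                                     : "stun:stun.beta.example.com:3478");
config.servers.push_back(stun);
```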
|
|
Also:
* Fix a few crashes.
* Only send position data when it changes.
|
|
|
|
|
|
|
|
|
|
This refactor fixed a few bugs. There is an annoying 'click' when
changing devices, however. This will be addressed in the future.
|
|
|
|
|
|
reason
|
|
|
|
|
|
|
|
|
|
|
|
|
|
will happen after AGC
|
|
|
|
|
|
Also, start/stop recording depending on whether WebRTC has negotiated.
|
|
|
|
|
|
Better handle starting up and shutting down WebRTC connections
simultaneously.
|
|
|
|
|
|
Muting using the device module's microphone mute was muting other
applications, the speakers, and so on. Instead, we mute by enabling/disabling
the input and output streams.
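A sketch of the approach (the track member names are illustrative; set_enabled is the real webrtc::MediaStreamTrackInterface call):
```cpp
// Disabling the tracks silences only this application's streams, unlike the
// device module's microphone mute, which is system-wide.
void setMuted(bool muted)
{
    if (mLocalAudioTrack)
    {
        mLocalAudioTrack->set_enabled(!muted);   // stop sending our audio
    }
    if (mRemoteAudioTrack)
    {
        mRemoteAudioTrack->set_enabled(!muted);  // stop playing peer audio
    }
}
```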
|