Age | Commit message | Author |
|
WebRTC logs now pass out of the webrtc library into a logging sink,
which converts them into SecondLife.log-compatible logging calls.
This includes fatal errors and asserts, which are now logged into
SecondLife.log, and should be available in the crash logger.
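A minimal sketch of such a sink, assuming WebRTC's rtc::LogSink interface; whether the
sink calls the viewer's LL_INFOS macros directly or forwards through a callback is an
implementation detail, and the class name here is illustrative:

    // Sketch: forward WebRTC's log output into SecondLife.log-style logging.
    // rtc::LogSink and rtc::LogMessage live in WebRTC's rtc_base/logging.h.
    #include <string>
    #include "rtc_base/logging.h"
    #include "llerror.h"   // viewer logging macros (assumed reachable here)

    class LLWebRTCLogSink : public rtc::LogSink   // illustrative name
    {
    public:
        void OnLogMessage(const std::string& message) override
        {
            // Hand the message to the viewer's logger (LL_INFOS/LL_WARNS),
            // or to a callback that does the same on the viewer side.
            LL_INFOS("WebRTC") << message << LL_ENDL;
        }
    };

    // Registered during llwebrtc initialization, roughly:
    //   static LLWebRTCLogSink sLogSink;
    //   rtc::LogMessage::AddLogToStream(&sLogSink, rtc::LS_VERBOSE);
    // and removed again on shutdown with rtc::LogMessage::RemoveLogToStream().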
|
Previously, there were two places audio gain could be controlled:
- the device manager
- the audio track
The device manager audio gain control sets the system gain for all applications,
not just the webrtc application.
The audio track gain is applied well after the audio processing stage where we want gain control to happen.
So, gain control was added to the existing custom audio processor, which previously only
handled calculating and retrieving the audio levels.
After these changes, the microphone gain slider does impact the audio volume heard by peers.
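A sketch of what the gain step in the custom processor could look like, assuming WebRTC's
webrtc::CustomProcessing/AudioBuffer interfaces; the class and member names are
illustrative, and the level calculation is omitted:

    #include <atomic>
    #include <string>
    #include "modules/audio_processing/include/audio_processing.h"
    #include "modules/audio_processing/audio_buffer.h"

    // Illustrative custom capture post-processor: scales samples by the
    // microphone gain after WebRTC's own processing (AEC/AGC/NS) has run.
    class GainAndLevelProcessor : public webrtc::CustomProcessing
    {
    public:
        void Initialize(int sample_rate_hz, int num_channels) override {}

        void Process(webrtc::AudioBuffer* audio) override
        {
            float gain = mMicGain.load();
            for (size_t ch = 0; ch < audio->num_channels(); ++ch)
            {
                float* samples = audio->channels()[ch];
                for (size_t i = 0; i < audio->num_frames(); ++i)
                {
                    samples[i] *= gain;  // this is where the slider takes effect
                }
            }
            // ...audio level calculation would also happen here...
        }

        std::string ToString() const override { return "GainAndLevelProcessor"; }

        void setMicGain(float gain) { mMicGain.store(gain); }  // from the slider

    private:
        std::atomic<float> mMicGain{1.0f};
    };

    // Attached when the audio processing module is built, e.g. via
    //   webrtc::AudioProcessingBuilder().SetCapturePostProcessing(
    //       std::make_unique<GainAndLevelProcessor>());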
|
other jobs might be using it.
|
When creating a new connection, the viewer builds a data channel interface.
The peer connection then hands back a new interface, which is a proxy; the
viewer uses that proxy, and therefore must unregister the callbacks from the
original interface.
Also, after the join is sent, update the position data before sending it.
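Roughly, the handover described above, assuming WebRTC's DataChannelInterface and
DataChannelObserver API (the class and member names are placeholders):

    #include "api/data_channel_interface.h"
    #include "api/scoped_refptr.h"

    // Illustrative observer; in practice this sits on the object that
    // implements webrtc::PeerConnectionObserver and receives OnDataChannel().
    class DataChannelHandler : public webrtc::DataChannelObserver
    {
    public:
        // Called with the proxy interface handed back by the peer connection.
        void setDataChannel(rtc::scoped_refptr<webrtc::DataChannelInterface> channel)
        {
            if (mDataChannel)
            {
                mDataChannel->UnregisterObserver();  // drop callbacks from the
            }                                        // originally built channel
            mDataChannel = channel;                  // use the proxy from now on
            mDataChannel->RegisterObserver(this);
        }

        // webrtc::DataChannelObserver
        void OnStateChange() override {}
        void OnMessage(const webrtc::DataBuffer& buffer) override {}

    private:
        rtc::scoped_refptr<webrtc::DataChannelInterface> mDataChannel;
    };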
|
When parcel voice permissions or region/parcel-only voice
settings change, a callback is made to the viewer with
new voice credential information. For WebRTC, this means
either just the UUID of the voice channel, or nothing if
voice is disabled.
This change looks at that callback and the channel ID,
and sets the appropriate flags on the parcel/region as needed,
which causes voice to be renegotiated.
Also, there was a race condition if the voice connect attempt
was made before the caps were retrieved, which would have resulted
in full renegotiation attempts. Now we simply wait until the cap
comes in and continue.
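A hypothetical sketch of the flag handling (none of these types or names are the actual
viewer classes):

    #include <string>

    struct ParcelVoiceState            // placeholder for the parcel/region state
    {
        bool        voiceEnabled = false;
        bool        needsRenegotiation = false;
        std::string channelID;
    };

    struct VoiceChannelInfo            // what the callback delivers
    {
        std::string channelID;         // empty when voice is disabled
    };

    void onVoiceChannelInfo(const VoiceChannelInfo& info, ParcelVoiceState& parcel)
    {
        bool enabled = !info.channelID.empty();
        if (enabled != parcel.voiceEnabled || info.channelID != parcel.channelID)
        {
            parcel.voiceEnabled       = enabled;        // flipping these flags is
            parcel.channelID          = info.channelID; // what causes voice to be
            parcel.needsRenegotiation = true;           // renegotiated
        }
    }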
|
The simulator will send a chatterbox notification that
voice is no longer in use for a given channel; the viewer
should treat that as the peer not wanting voice, i.e. a decline.
|
Windows and Mac/Linux behave slightly differently with respect
to default devices: Mac/Linux (I think) simply assume the
device at index 0 is the default one, while Windows has a
separate API for selecting the default device.
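Assuming the device selection goes through WebRTC's AudioDeviceModule, the split can look
roughly like this (the wrapper function is illustrative; playout selection is analogous):

    #include "api/scoped_refptr.h"
    #include "modules/audio_device/include/audio_device.h"

    // Sketch: pick the recording device, using Windows' explicit default-device
    // API there and treating index 0 as the default elsewhere.
    void selectDefaultRecordingDevice(
        rtc::scoped_refptr<webrtc::AudioDeviceModule> adm)
    {
    #if defined(_WIN32)
        adm->SetRecordingDevice(
            webrtc::AudioDeviceModule::kDefaultCommunicationDevice);
    #else
        adm->SetRecordingDevice(0);   // Mac/Linux: index 0 assumed to be default
    #endif
        adm->InitMicrophone();
        adm->InitRecording();
    }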
|
* The sampling rate was set to 8kHz for audio processing, which was
  causing a 'bands' mismatch with the echo canceller.
* Some funny business with lambda captures was causing
  a heap crash with respect to function parameters.
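The lambda issue is the usual capture-lifetime trap; a generic illustration (not the
actual viewer code, with postToWorkQueue/connectTo as stand-ins for any deferred-call
mechanism):

    #include <functional>
    #include <string>

    void postToWorkQueue(std::function<void()> task);   // e.g. a task/signaling queue
    void connectTo(const std::string& channelID);

    void startConnect(const std::string& channelID)
    {
        // BUG: capturing the parameter by reference; it is gone by the time the
        // task runs, so the lambda reads memory that may already have been freed.
        //   postToWorkQueue([&channelID] { connectTo(channelID); });

        // FIX: capture by value so the lambda owns its own copy.
        postToWorkQueue([channelID] { connectTo(channelID); });
    }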
|
into roxie/webrtc-voice
|
Plumb audio settings through from WebRTC to the sound preferences
UI (still needs some tweaking, of course).
Also, choose STUN servers based on grid. Ultimately, the STUN
servers will be passed up via login or something similar.
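A sketch of the per-grid STUN selection, assuming WebRTC's RTCConfiguration/IceServer
types; the URIs and the grid check are placeholders:

    #include "api/peer_connection_interface.h"

    webrtc::PeerConnectionInterface::RTCConfiguration
    makeConnectionConfig(bool isProductionGrid)
    {
        webrtc::PeerConnectionInterface::RTCConfiguration config;
        webrtc::PeerConnectionInterface::IceServer stun;
        // Placeholder hostnames; ultimately these would come from login data.
        stun.urls.push_back(isProductionGrid ? "stun:stun.example.com:3478"
                                             : "stun:stun.beta.example.com:3478");
        config.servers.push_back(stun);
        return config;
    }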
|
Also:
* Fix a few crashes.
* Only send position data when it changes.
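A hypothetical sketch of the "only send position data when it changes" part (names and
threshold are illustrative):

    #include <array>
    #include <cmath>

    class PositionSender                       // placeholder class
    {
    public:
        void maybeSendPosition(const std::array<double, 3>& pos)
        {
            constexpr double epsilon = 0.01;   // ignore sub-centimetre jitter
            if (mHaveSent &&
                std::fabs(pos[0] - mLast[0]) < epsilon &&
                std::fabs(pos[1] - mLast[1]) < epsilon &&
                std::fabs(pos[2] - mLast[2]) < epsilon)
            {
                return;                        // unchanged: skip the update
            }
            mLast = pos;
            mHaveSent = true;
            sendPositionUpdate(pos);           // the actual data-channel send
        }

    private:
        void sendPositionUpdate(const std::array<double, 3>& pos);
        std::array<double, 3> mLast{};
        bool mHaveSent = false;
    };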
|
This refactor fixed a few bugs. There is an annoying 'click' when
changing devices, however. This will be addressed in the future.
|
reason
|
will happen after AGC
|
Also, start/stop recording depending on whether WebRTC has negotiated.
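Assuming the capture device is driven through WebRTC's AudioDeviceModule, that start/stop
logic can be sketched as:

    #include "api/scoped_refptr.h"
    #include "modules/audio_device/include/audio_device.h"

    // Sketch: only capture from the microphone while a session is negotiated.
    void updateRecordingState(rtc::scoped_refptr<webrtc::AudioDeviceModule> adm,
                              bool negotiated)
    {
        if (negotiated && !adm->Recording())
        {
            adm->InitRecording();
            adm->StartRecording();
        }
        else if (!negotiated && adm->Recording())
        {
            adm->StopRecording();
        }
    }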
|
Better handle starting up and shutting down WebRTC connections
simultaneously.
|
Muting via the device module's microphone mute was muting other
applications, the speakers, and so on. Instead, we mute by
enabling/disabling the input and output streams.
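Assuming those streams are WebRTC media tracks, muting then becomes a per-track toggle
(the function names are illustrative):

    #include "api/media_stream_interface.h"
    #include "api/scoped_refptr.h"

    // Mute the microphone by disabling the local (capture) track only;
    // the OS-level device mute is left alone.
    void setMicMute(rtc::scoped_refptr<webrtc::AudioTrackInterface> localTrack,
                    bool mute)
    {
        if (localTrack)
        {
            localTrack->set_enabled(!mute);
        }
    }

    // Mute playout by disabling the remote (received) track.
    void setSpeakerMute(rtc::scoped_refptr<webrtc::AudioTrackInterface> remoteTrack,
                        bool mute)
    {
        if (remoteTrack)
        {
            remoteTrack->set_enabled(!mute);
        }
    }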
|
Setting volume through the device module sets the volume
for all applications. Instead, modify the volume on the various streams.
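For received audio this can be done on the track's source, assuming WebRTC's
AudioSourceInterface::SetVolume (which takes a value in [0, 10]); the function name is
illustrative:

    #include "api/media_stream_interface.h"
    #include "api/scoped_refptr.h"

    // Adjust a single peer's volume on its remote audio track, leaving the
    // system-wide speaker volume untouched.
    void setPeerVolume(rtc::scoped_refptr<webrtc::AudioTrackInterface> remoteTrack,
                       double volume)   // 0.0 .. 10.0
    {
        if (remoteTrack && remoteTrack->GetSource())
        {
            remoteTrack->GetSource()->SetVolume(volume);
        }
    }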
|
This is useful for cross-region voice, quick voice switching, etc.
|
This commit includes code that allows the llwebrtc.dll/dylib to
handle multiple connections at once.
|