Re: Building UC1 (Video Chat) from the WebRTC use case

On 12 January 2012 09:56, Olivier Thereaux <olivier.thereaux@bbc.co.uk> wrote:

> * Each user could have access to an interface to make each of the other
> participants' sound more/less loud.
> * The service could offer user-triggered settings (EQ, filtering) for
> voice enhancement
>

These are definitely desirable: they facilitate communication for people with
hearing difficulties (an ageing population), help in imperfect listening
environments, and can compensate for poor transmission conditions (filtering
out rumble from passing traffic, etc.).
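For what it's worth, here is a rough sketch of how I picture the
per-participant volume and voice-enhancement chain, assuming each remote
participant's MediaStream can be bridged into the audio graph via something
like createMediaStreamSource() (function names and parameter values below
are illustrative, not settled spec):

  // Rough sketch: per-participant volume and voice-enhancement chain.
  // Assumes each remote participant arrives as a MediaStream and that
  // the context can bridge it into the audio graph.
  const context = new AudioContext();

  function addParticipant(stream: MediaStream) {
    const source = context.createMediaStreamSource(stream);

    // High-pass filter to cut rumble (passing traffic, desk thumps).
    const rumbleFilter = context.createBiquadFilter();
    rumbleFilter.type = 'highpass';
    rumbleFilter.frequency.value = 120; // Hz, below most speech energy

    // Per-participant gain, to be bound to a UI slider elsewhere.
    const gain = context.createGain();
    gain.gain.value = 1.0;

    source.connect(rumbleFilter);
    rumbleFilter.connect(gain);
    gain.connect(context.destination);

    // Return the handles the UI needs to adjust this participant.
    return { gain: gain.gain, filter: rumbleFilter };
  }

The EQ/enhancement step would just be more nodes inserted in the same chain.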

> * The service could offer an option to slow down / speed up one of the
> participant's voice.
>

As Robert mentioned, I'm not sure this makes sense.

However, somewhat related: I have a loose feeling there *could* be a need to
adjust the relative time delays of the participants' audio streams, to realign
the audio when some participants have longer transmission latencies than
others. I have seen situations where a band broadcast a jam via a Google
hangout from a single location with multiple webcams, and each cam stream was
way out of sync, but I'm really not sure that it is a particularly common
need. Thoughts?
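To make the idea concrete, a minimal sketch of what per-participant delay
compensation might look like, assuming the same MediaStream bridge as above,
a DelayNode per stream, and that per-participant latency estimates are
available from transport statistics (which I'm hand-waving here):

  // Sketch: realign participants by delaying the faster streams so
  // they line up with the slowest one. Latencies (in seconds) are
  // assumed to come from transport statistics, hand-waved here.
  function alignParticipants(
    context: AudioContext,
    participants: { stream: MediaStream; latency: number }[]
  ) {
    const worst = Math.max(...participants.map(p => p.latency));

    for (const p of participants) {
      const source = context.createMediaStreamSource(p.stream);

      // Allow up to 2 s of compensation; delay each stream by its
      // difference to the slowest participant.
      const delay = context.createDelay(2);
      delay.delayTime.value = worst - p.latency;

      source.connect(delay);
      delay.connect(context.destination);
    }
  }

The realignment itself is just a delayTime adjustment on already-received
streams; whether the latency estimates would be good enough to drive it is
another question.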


Anthony.

Received on Thursday, 12 January 2012 23:04:20 UTC