
Re: Building UC1 (Video Chat) from the WebRTC use case

From: Anthony Bowyer-Lowe <anthony@lowbroweye.com>
Date: Thu, 12 Jan 2012 11:45:00 +0000
Message-ID: <CAMCSOPUh5+oP46MyQXbLNLrqXw9HfK1P7rw3BwUus+URpcygFA@mail.gmail.com>
To: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
Cc: public-audio@w3.org
On 12 January 2012 09:56, Olivier Thereaux <olivier.thereaux@bbc.co.uk> wrote:

> * Each user could have access to an interface to make each of the other
> participants' sound more/less loud.
> * The service could offer user-triggered settings (EQ, filtering) for
> voice enhancement
>

These are definitely desirable in order to facilitate communication between
people with hearing difficulties (an ageing population), in imperfect
listening environments, or to compensate for poor transmission environments
(filtering out rumble from passing traffic, etc.).
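As a rough sketch of what those per-participant controls might look like in a Web Audio graph (this is purely illustrative: the function names are hypothetical, and the 100 Hz high-pass cutoff and dB-based slider are my own assumptions, not anything from the spec):

```javascript
// Convert a user-facing loudness slider value in decibels to the linear
// gain a GainNode's gain.value parameter expects.
// 0 dB leaves the level unchanged; -6 dB roughly halves it.
function dbToLinearGain(db) {
  return Math.pow(10, db / 20);
}

// Hypothetical wiring for one remote participant: a high-pass filter
// (rumble removal) followed by a gain stage (per-participant loudness)
// between the incoming stream and the output.
function buildParticipantChain(context, remoteStream) {
  const source = context.createMediaStreamSource(remoteStream);
  const highpass = context.createBiquadFilter(); // filter out traffic rumble
  highpass.type = "highpass";
  highpass.frequency.value = 100; // Hz; cutoff chosen for illustration only
  const gain = context.createGain(); // per-participant loudness control
  source.connect(highpass).connect(gain).connect(context.destination);
  // Expose the gain node so a UI slider can drive it:
  //   gain.gain.value = dbToLinearGain(sliderValueInDb);
  return gain;
}
```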

> * The service could offer an option to slow down / speed up one of the
> participant's voice.
>

As Robert mentioned, I'm not sure this makes sense.

However, somewhat related, I loosely feel there *could* be a need to alter
the relative time delays of each participant's audio stream, realigning the
audio when some participants have longer transmission latencies than
others. I have seen situations where a band has broadcast a jam via a
Google hangout from a single location with multiple webcams, and each cam
stream was way out of sync, but I'm really not sure this is a particularly
common need. Thoughts?
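The realignment itself is simple arithmetic once per-stream latency estimates exist: delay every stream by the difference between its latency and that of the slowest stream. A minimal sketch, assuming latency estimates in milliseconds are available from somewhere (the function name is hypothetical; in a Web Audio graph the computed offsets could drive one DelayNode per stream, noting delayTime is in seconds):

```javascript
// Given an estimated one-way latency (ms) per participant stream, compute
// the extra delay (ms) to apply to each so all streams line up with the
// slowest one. The slowest stream gets zero added delay.
function alignmentDelays(latenciesMs) {
  const slowest = Math.max(...latenciesMs);
  return latenciesMs.map((latency) => slowest - latency);
}
```

For example, `alignmentDelays([120, 40, 250])` yields `[130, 210, 0]`: the stream already lagging by 250 ms is left alone, and the faster streams are held back to match it.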


Anthony.
Received on Thursday, 12 January 2012 23:04:20 GMT
