
(Was: Building UC1 (Video Chat) from the WebRTC use case)

From: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
Date: Mon, 16 Jan 2012 12:18:10 +0000
Message-ID: <4F141582.1020909@bbc.co.uk>
To: tmichel@w3.org
CC: Anthony Bowyer-Lowe <anthony@lowbroweye.com>, public-audio@w3.org

On 13/01/2012 15:14, Thierry MICHEL wrote:
> On 12/01/2012 12:45, Anthony Bowyer-Lowe wrote:
>> However, somewhat related, I loosely feel there *could* be a need
>> to alter the relative time delays of each participant's audio stream to
>> realign the audio when some participants have longer transmission
>> latencies than others. I have seen situations where a band has
>> broadcast a jam via a Google hangout from a single location with
>> multiple webcams, where each cam stream was way out of sync, but I'm
>> really not sure that it is a particularly common need. Thoughts?
> Isn't that option to slow down / speed up one of the
> participant's voice also an accessibility benefit?
> I can understand Spanish, but not at the speed a Madrileño
> speaks. Slowing down his voice would help me understand him better.
> Conversely, the Swiss are said to speak a slow French. I might accelerate
> this Genevois guy ;-)
> More seriously, I know blind people who listen to text-to-speech at a very
> surprising speed. But it may be difficult to use this slow down / speed
> up option during a teleconference and stay in sync.
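The realignment Anthony mentions amounts to padding every stream by the difference between its latency and the slowest participant's. A minimal sketch of that arithmetic (the latency figures are hypothetical; in a Web Audio graph the computed offsets would drive each stream's DelayNode.delayTime):

```javascript
// Given each participant's estimated one-way latency in ms,
// compute the extra delay to apply to each stream so that all
// streams line up with the slowest one.
function computeAlignmentDelays(latenciesMs) {
  const slowest = Math.max(...latenciesMs);
  return latenciesMs.map((latency) => slowest - latency);
}

// Hypothetical example: three participants with different latencies.
const delays = computeAlignmentDelays([120, 300, 80]);
// The 300 ms stream needs no extra delay; the others are padded
// so that every stream is heard 300 ms after capture.
console.log(delays); // [180, 0, 220]
```

The trade-off is that every participant then hears audio at the worst-case latency, which is why this fits a broadcast/jam scenario better than interactive conversation.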

Earlier comments on this thread mentioned that slow down / speed up may 
be inconvenient in a live teleconference setup, but there seems to be a 
case for such a feature in other contexts (mostly playback of spoken 
material, whether recorded or synthesised).
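For recorded or synthesised material, the crudest form of speed change is plain resampling, which shifts pitch along with tempo; a real implementation would use a pitch-preserving technique such as WSOLA or a phase vocoder. Still, a naive rate-change by linear interpolation shows the core idea (the function name and buffer values are purely illustrative, not from any proposed API):

```javascript
// Naively speed up or slow down a mono sample buffer by `rate`
// (rate > 1 is faster and shorter, rate < 1 slower and longer).
// Linear interpolation between neighbouring samples; note that
// this shifts pitch as well as tempo.
function changeRate(samples, rate) {
  const outLength = Math.floor(samples.length / rate);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * rate;           // fractional read position
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, samples.length - 1);
    const frac = pos - i0;
    out[i] = samples[i0] * (1 - frac) + samples[i1] * frac;
  }
  return out;
}

// Halving the playback time of an 8-sample buffer:
const fast = changeRate(new Float32Array([0, 1, 2, 3, 4, 5, 6, 7]), 2);
console.log(Array.from(fast)); // [0, 2, 4, 6]
```

In a live conversation the catch is buffering: a slowed-down listener falls steadily behind real time, which is the "stay in sync" problem raised above.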

Would you be interested in drafting a relevant use case scenario?


Received on Monday, 16 January 2012 12:19:14 UTC
