Re: Building UC1 (Video Chat) from the WebRTC use case

On 12/01/2012 12:45, Anthony Bowyer-Lowe wrote:
> On 12 January 2012 09:56, Olivier Thereaux <olivier.thereaux@bbc.co.uk> wrote:
>
>> * Each user could have access to an interface to make each of the other
>> participants' sound more/less loud.
>> * The service could offer user-triggered settings (EQ, filtering) for
>> voice enhancement
>>
>
> These are definitely desirable in order to facilitate communication between
> people with hearing difficulties (an ageing population), in imperfect
> listening environments, or to compensate for poor transmission conditions
> (filtering out rumble from passing traffic, etc.).


+1
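
For what it's worth, here is a rough sketch of what that could look like
with the Web Audio API, assuming each remote participant's audio arrives
as a MediaStream from the peer connection (the names `remoteStream` and
`createParticipantChain` are just illustrative, and the filter settings
are guesses, not recommendations):

  var context = new AudioContext();

  // One processing chain per remote participant: rumble filter,
  // speech-band boost, then a per-participant volume control.
  function createParticipantChain(remoteStream) {
    var source = context.createMediaStreamSource(remoteStream);

    var highpass = context.createBiquadFilter();
    highpass.type = 'highpass';
    highpass.frequency.value = 100;   // cut traffic rumble etc.

    var presence = context.createBiquadFilter();
    presence.type = 'peaking';
    presence.frequency.value = 2500;  // rough speech intelligibility band
    presence.gain.value = 6;          // dB boost

    var gain = context.createGain();  // wired to a UI slider
    gain.gain.value = 1.0;

    source.connect(highpass);
    highpass.connect(presence);
    presence.connect(gain);
    gain.connect(context.destination);

    return { volume: gain, highpass: highpass, presence: presence };
  }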

>
>> * The service could offer an option to slow down / speed up one of the
>> participants' voices.
>>
>
> As Robert mentioned, I'm not sure this makes sense.
>
> However, somewhat related, I loosely feel there *could* be a need to alter
> relative time delays of each participant's audio stream to realign the audio
> when some participants have longer transmission latencies than others. I
> have seen situations where a band has broadcast a jam via a Google hangout
> from a single location with multiple webcams where each cam stream was way
> out of sync but I'm really not sure that it is a particularly common
> need. Thoughts?
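
On the realignment question: in principle a DelayNode per participant
could pad the "fast" streams up to the worst-case latency, something like
the sketch below (`latencyEstimates` is hypothetical; in practice those
numbers would have to come from RTCP statistics or similar, which is the
hard part):

  var context = new AudioContext();

  // Sketch: delay each stream so they all match the slowest one.
  // latencyEstimates[i] is the estimated one-way latency (in seconds)
  // of streams[i].
  function alignStreams(streams, latencyEstimates) {
    var maxLatency = Math.max.apply(null, latencyEstimates);

    streams.forEach(function (stream, i) {
      var source = context.createMediaStreamSource(stream);
      var delay = context.createDelay(5); // allow up to 5 s of padding

      delay.delayTime.value = maxLatency - latencyEstimates[i];

      source.connect(delay);
      delay.connect(context.destination);
    });
  }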

Isn't the option to slow down / speed up one of the
participants' voices also an accessibility benefit?

I can understand Spanish, but not at the speed a Madrileño speaks.
Slowing down his voice would help me understand him better.
Conversely, the Swiss are said to speak a slow French. I might speed up
that Genevois guy ;-)

More seriously, I know blind people who listen to text-to-speech at a
surprisingly high speed. But it may be difficult to use this slow down /
speed up option during a teleconference and stay in sync.
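
Indeed: with the API as drafted, the naive route would be to change
playbackRate on a buffered source, which both shifts the pitch and lets
the listener drift behind the live conversation, exactly the sync problem
above. Proper pitch-preserving time-stretching (with catch-up during
silences) would need a dedicated node or a custom JavaScript processing
node. A sketch of the naive version, just to show the limitation (it
assumes the speech has already been captured into an AudioBuffer):

  var context = new AudioContext();

  // Naive speed change: play a captured utterance at e.g. 0.8x.
  // playbackRate also shifts the pitch, and a slowed-down stream
  // inevitably falls behind the live conversation.
  function playAtRate(buffer, rate) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.playbackRate.value = rate;  // < 1 slows down, > 1 speeds up
    source.connect(context.destination);
    source.start(0);
  }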

>
>
> Anthony.
>

Received on Friday, 13 January 2012 15:15:18 UTC