(Was: Building UC1 (Video Chat) from the WebRTC use case)

Thierry,

On 13/01/2012 15:14, Thierry MICHEL wrote:
> On 12/01/2012 12:45, Anthony Bowyer-Lowe wrote:
>> However, somewhat related, I loosely feel there *could* be a need
>> to alter the relative time delays of each participant's audio stream
>> to realign the audio when some participants have longer transmission
>> latencies than others. I have seen situations where a band has
>> broadcast a jam via a Google hangout from a single location with
>> multiple webcams, and each cam stream was way out of sync, but I'm
>> really not sure that it is a particularly common need. Thoughts?
>
> Isn't the option to slow down / speed up one participant's
> voice also an accessibility benefit?
>
> I can understand Spanish, but not at the speed a Madrileño speaks.
> Slowing down his voice would help me understand him better.
> Conversely, the Swiss are said to speak slow French; I might speed up
> that Genevois guy ;-)
>
> More seriously, I know blind people who listen to text-to-speech at a
> very surprising speed. But it may be a bit difficult to use this slow
> down / speed up option during a teleconference and stay in sync.

Earlier comments on this thread mentioned that slow down / speed up may 
be inconvenient in a live teleconference setup, but there seems to be a 
case for such a feature in other contexts (mostly playback of spoken 
material, be it recorded or synthesised).
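
For what it's worth, here is a minimal sketch of what such a control
might look like for recorded material, assuming a plain media element
exposing playbackRate (the element id, the rate values and the
pitch-preservation check are illustrative assumptions on my part, not
anything discussed in this thread):

  // Minimal sketch: user-adjustable listening speed for recorded speech.
  const player = document.getElementById('spoken-material') as HTMLAudioElement;

  function setListeningSpeed(rate: number): void {
    // Clamp to a range where speech is still intelligible.
    player.playbackRate = Math.min(Math.max(rate, 0.5), 3.0);
    // Keep the pitch constant where the browser supports it
    // (only available behind vendor prefixes at the time of writing).
    if ('preservesPitch' in player) {
      (player as any).preservesPitch = true;
    }
  }

  setListeningSpeed(0.8); // slow down a fast speaker
  setListeningSpeed(2.5); // speed up, as experienced screen-reader users often do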

Would you be interested in drafting a relevant use case scenario?

Thanks,
Olivier

Received on Monday, 16 January 2012 12:19:14 UTC