Building UC1 (Video Chat) from the WebRTC use case

From: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
Date: Thu, 12 Jan 2012 09:56:05 +0000
Message-ID: <4F0EAE35.6040707@bbc.co.uk>
To: public-audio@w3.org
Hi group,

I'd like your thoughts on how we can build our use case currently 
identified as UC1 (Video Chat).

We can take as a basis the text used in WEBRTC's own use cases and 
requirements document:
[[
    Two or more users have loaded a video communication web application
    into their browsers, provided by the same service provider, and
    logged into the service it provides.  The web service publishes
    information about user login status by pushing updates to the web
    application in the browsers.  When one online user selects a peer
    online user, a 1-1 video communication session between the browsers
    of the two peers is initiated.  The invited user might accept or
    reject the session.

    During session establishment a self-view is displayed, and once the
    session has been established the video sent from the remote peer is
    displayed in addition to the self-view.  During the session, each
    user can select to remove and re-insert the self-view as often as
    desired.  Each user can also change the sizes of his/her two video
    displays during the session.  Each user can also pause sending of
    media (audio, video, or both) and mute incoming media
]]

For the sake of simplicity, parts of that prose can be omitted (e.g. how 
the invited user might accept or reject the session). I was wondering, 
however, what we may want to add to it. For example:

* Each user could have access to an interface to make each of the other 
participants sound louder or quieter, individually.

* The service could offer user-triggered settings (EQ, filtering) for 
voice enhancement.

* The service could offer an option to distort each voice for fun, or to 
protect one participant's privacy (pitch, speed).

* The service could offer an option to slow down / speed up one of the 
participants' voices.
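To make the first suggestion concrete, a per-participant volume control 
could be sketched with the node graph in the current Web Audio draft. The 
method names below (createMediaStreamSource, createGain) follow the 
editor's draft and may still change -- in particular, the MediaStream 
integration with WebRTC is exactly the open question for this use case -- 
so treat this as an illustrative sketch, not settled API:

```javascript
// Pure helper: map a 0-100 volume slider position to a linear gain value.
// 0 -> silence, 50 -> unity gain, 100 -> double amplitude (about +6 dB).
function sliderToGain(position) {
  var clamped = Math.max(0, Math.min(100, position));
  return (clamped / 100) * 2;
}

// Browser wiring (illustrative; needs a Web Audio-capable browser and the
// proposed MediaStream source node). Each remote participant's stream gets
// its own GainNode, so loudness is adjustable per participant.
function attachVolumeControl(context, remoteStream) {
  var source = context.createMediaStreamSource(remoteStream);
  var gainNode = context.createGain();
  source.connect(gainNode);
  gainNode.connect(context.destination);
  // Return a setter the UI slider can call.
  return function setSlider(position) {
    gainNode.gain.value = sliderToGain(position);
  };
}
```

The same per-participant insertion point would also serve the other 
bullets: an EQ or distortion effect is just one more node between the 
source and the destination.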

Do any of these feel reasonably in scope for our work? Anything else? We 
don't have to decide yet how the API(s) would make these possible, but we 
need to decide early on whether it should be a success criterion to make 
them possible at all.


Olivier Thereaux
BBC Internet Research & Future Services

Received on Thursday, 12 January 2012 09:59:00 UTC
