
Handling simulcast

From: Martin Thomson <martin.thomson@gmail.com>
Date: Thu, 5 Sep 2013 10:31:41 -0700
Message-ID: <CABkgnnWAFNEFt9g6inoT+QWdYDQeVgVpv4M4mLyFP39RbeycQQ@mail.gmail.com>
To: "public-webrtc@w3.org" <public-webrtc@w3.org>

There was a question on the call about how to do simulcast.  Here's
how it might be possible to do simulcast without additional API
surface.

1. Acquire original stream containing one video track.
2. Clone the track and rescale it.
3. Assemble a new stream containing the original and the rescaled track.
4. Send the stream.
5. At the receiver, play the video stream.
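The sender-side steps above could be sketched roughly as follows. This is a sketch only: it uses today's getUserMedia / RTCPeerConnection shapes (which post-date this message), and it assumes applyConstraints is an acceptable stand-in for whatever rescaling mechanism ends up being specified.

```javascript
// Pure helper: compute constraints for a downscaled clone.
function scaledDown(width, height, factor) {
  return {
    width: Math.round(width / factor),
    height: Math.round(height / factor),
  };
}

// Hypothetical sender-side flow; `pc` is an RTCPeerConnection.
async function sendSimulcast(pc) {
  // 1. Acquire the original stream containing one video track.
  const original = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = original.getVideoTracks()[0];

  // 2. Clone the track and rescale the clone.
  const low = track.clone();
  const { width, height } = track.getSettings();
  await low.applyConstraints(scaledDown(width, height, 2));

  // 3. Assemble a new stream containing the original and the rescaled track.
  const stream = new MediaStream([track, low]);

  // 4. Send the stream.
  for (const t of stream.getTracks()) {
    pc.addTrack(t, stream);
  }
}
```

Nothing here requires new API surface; the interesting part is entirely on the receiving side.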

That's the user part, now for the under-the-covers stuff:

I know we discussed the rendering of multiple video tracks in the
past, but it's not possible to read the following documents and reach
any sensible conclusion:
http://dev.w3.org/2011/webrtc/editor/getusermedia.html
http://www.w3.org/TR/html5/embedded-content-0.html#concept-media-load-resource

What needs to happen in this case is to ensure that the two video
tracks are folded together, with the higher "quality" version being
displayed and the lower "quality" version being used to fill in any
gaps that might appear in the higher "quality" one.
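The folding rule could be expressed as a pure selection function along these lines. The `quality` and `receiving` fields are assumptions for illustration, not anything in the current APIs: render the highest-quality track that is currently producing frames, falling back to a lower one when the preferred track stalls.

```javascript
// Hypothetical folding rule: given the set of equivalent tracks,
// pick the highest-quality one that is currently receiving frames.
// Returns null if none of them is producing anything.
function pickRenderTrack(tracks) {
  const live = tracks.filter((t) => t.receiving);
  if (live.length === 0) return null;
  return live.reduce((best, t) => (t.quality > best.quality ? t : best));
}
```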

That depends on the <video> element being able to identify the tracks
as being equivalent, and possibly being able to identify which is the
higher quality.  This is where something like the srcname proposal
could be useful
(http://tools.ietf.org/html/draft-westerlund-avtext-rtcp-sdes-srcname-02).

The only missing piece is exposing metadata on tracks so that this
behaviour is discoverable.  Adding an attribute on tracks (srcname,
perhaps) could provide a hook for triggering the folding behaviour
I'm talking about.
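If tracks did carry such an attribute, a <video> element could group a stream's video tracks by it and fold each group into a single rendered picture. A minimal sketch, assuming a hypothetical `srcname` property on tracks:

```javascript
// Hypothetical grouping step: tracks sharing a srcname are treated as
// equivalent renditions of one source; tracks without one stand alone.
function groupBySrcname(tracks) {
  const groups = new Map();
  for (const t of tracks) {
    const key = t.srcname || t.id;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(t);
  }
  return groups;
}
```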

Received on Thursday, 5 September 2013 17:32:09 UTC
