- From: Gustavo Garcia Bernardo <ggb@tid.es>
- Date: Sat, 14 Sep 2013 01:34:58 +0000
- To: "Cullen Jennings (fluffy)" <fluffy@cisco.com>, Martin Thomson <martin.thomson@gmail.com>
- Cc: "public-webrtc@w3.org" <public-webrtc@w3.org>
That would mean that you cannot implement a server supporting simulcast without supporting layered encoding, because you don't know what the browser is going to end up sending. I prefer to use more explicit constraints, as proposed in [1]:

1920x1080 30fps 3Mbps 1 layer, 1280x720 30fps 1Mbps 3 layers, 525x700 30fps 256kbps 1 layer

I'm probably fine with the JavaScript approach too, although I'm not sure how you would specify bitrates in that case. Can somebody write some pseudocode to compare it with the constraints approach?

G.

[1] http://tools.ietf.org/html/draft-garcia-simulcast-and-layered-video-webrtc-00

________________________________________
From: Cullen Jennings (fluffy) [fluffy@cisco.com]
Sent: Friday, September 06, 2013 7:38 PM
To: Martin Thomson
Cc: public-webrtc@w3.org
Subject: Re: Handling simulcast

As an alternative proposal, how about a constraint that has roughly this semantic information:

1920x1080 30fps 3Mbps, 1280x720 30fps 1Mbps, 525x700 30fps 256kbps

to specify three layers with max bitrates as specified. The point Justin has made before is that the solution below won't work for layered codecs, and it would be nice to have the JS code be the same and let the browser take care of negotiating whether we use simulcast or layered codecs.

I'm just tossing ideas out there, but it seems like whatever we do, it would be nice if it was reasonably easy for JS programmers to use. I could probably live with something along the lines of what you suggested below.

On Sep 5, 2013, at 11:31 AM, Martin Thomson <martin.thomson@gmail.com> wrote:

> There was a question about how to do simulcast on the call. Here's
> how it might be possible to do simulcast without additional API
> surface.
>
> 1. Acquire the original stream containing one video track.
> 2. Clone the track and rescale it.
> 3. Assemble a new stream containing the original and the rescaled track.
> 4. Send the stream.
> 5. At the receiver, play the video stream.
> That's the user part; now for the under-the-covers stuff:
>
> I know we discussed the rendering of multiple video tracks in the
> past, but it's not possible to read the following documents and reach
> any sensible conclusions:
> http://dev.w3.org/2011/webrtc/editor/getusermedia.html
> http://www.w3.org/TR/html5/embedded-content-0.html#concept-media-load-resource
>
> What needs to happen in this case is to ensure that the two video
> tracks are folded together, with the higher "quality" version being
> displayed and the lower "quality" version being used to fill in any
> gaps that might appear in the higher "quality" one.
>
> That depends on the <video> element being able to identify the tracks
> as equivalent, and possibly being able to identify which is the
> higher quality. This is where something like the srcname proposal
> could be useful
> (http://tools.ietf.org/html/draft-westerlund-avtext-rtcp-sdes-srcname-02).
>
> The only missing piece is exposing metadata on tracks such that this
> behaviour is discoverable. Adding an attribute on tracks (srcname,
> perhaps) could provide a hook for triggering the folding behaviour
> I'm talking about.
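[Editor's note] To make the comparison Gustavo asks for concrete, here is a hedged sketch in JavaScript. The constraint shape (`videoSimulcast`, `maxBitrate`, `layers`) and the helper `simulcastConstraints` are hypothetical — neither draft-garcia-simulcast-and-layered-video-webrtc nor the getUserMedia spec defines them — and the commented track-cloning part simply transliterates Martin's steps 1-5, not any shipped API.

```javascript
// Sketch 1: the constraints approach. Each entry describes one
// simulcast/layered encoding, matching Gustavo's example list.
// All field names here are hypothetical, for comparison only.
function simulcastConstraints(encodings) {
  return {
    mandatory: {
      videoSimulcast: encodings.map(function (e) {
        return {
          width: e.width,
          height: e.height,
          frameRate: e.fps,
          maxBitrate: e.kbps * 1000, // bits per second
          layers: e.layers || 1      // >1 means layered coding
        };
      })
    }
  };
}

var constraints = simulcastConstraints([
  { width: 1920, height: 1080, fps: 30, kbps: 3000, layers: 1 },
  { width: 1280, height: 720,  fps: 30, kbps: 1000, layers: 3 },
  { width: 525,  height: 700,  fps: 30, kbps: 256,  layers: 1 }
]);
// Browser-only, not executed here:
// navigator.getUserMedia({ video: constraints }, gotStream, onError);

// Sketch 2: Martin's JavaScript approach (steps 1-5), browser-only.
// Note there is no obvious place to specify per-track bitrates,
// which is exactly Gustavo's objection.
// function sendSimulcast(pc, stream) {
//   var original = stream.getVideoTracks()[0];       // step 1
//   var rescaled = original.clone();                 // step 2: clone...
//   rescaled.applyConstraints({ width: 640 });       // ...and rescale
//   var outgoing = new MediaStream([original, rescaled]); // step 3
//   pc.addStream(outgoing);                          // step 4
// }
```

Note that Sketch 1 pushes the layering decision to the capture stage, so the browser can negotiate simulcast vs. layered codecs behind one JS surface, which is the property Cullen and Justin are after.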
Received on Saturday, 14 September 2013 01:35:28 UTC