
Re: A PR for adding various fields to RtpParameters and RtpEncodingParameters

From: Randell Jesup <randell-ietf@jesup.org>
Date: Tue, 25 Aug 2015 22:04:55 -0400
To: public-webrtc@w3.org
Message-ID: <55DD1EC7.4010306@jesup.org>
On 8/25/2015 8:56 PM, Justin Uberti wrote:
> For ORTC, we came to the conclusion that any adjustments to the input 
> to the RtpSender (e.g. framerate clamping) ought to be performed on 
> the input MediaStreamTrack.

Are we then adding a bunch of (duplicative) controls on MediaStreamTrack 
for maxFramerate, etc.?  Do we have a PR for that, here or in the 
MediaCapture TF?  Also, I don't necessarily want to down-resolution or 
down-framerate the local instance of the video just because I want to 
send a thumbnail (doubly so if I might be recording a copy locally).  Are 
you proposing we handle those cases by asking a second time for access, 
and running two copies of everything through the internals?  Or by 
cloning the track, so that we can modify one clone without affecting the 
other?
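For the cloning option, a minimal sketch of what that could look like, assuming a gUM-style video track exposing clone() and applyConstraints() (the helper name and the specific constraint values are hypothetical):

```javascript
// Hypothetical helper: derive a thumbnail-sized track for sending,
// without degrading the local full-resolution view.
// Assumes a gUM-style video track with clone() and applyConstraints().
async function makeThumbnailTrack(track) {
  const thumb = track.clone();        // independent copy of the track
  await thumb.applyConstraints({
    width: { max: 160 },              // thumbnail-sized
    height: { max: 120 },
    frameRate: { max: 10 },
  });
  return thumb;                       // original track stays untouched
}
```

The original track would keep driving the local video element (or a MediaRecorder) at full resolution, while the clone feeds the sender.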

We can have a set of controls on MediaStreamTracks, though I would 
generally prefer for Tracks to be largely containers that pass controls 
on to the underlying source node, not processing nodes.  Tracks should be 
the data conduits, hooking sources, sinks, and processing nodes 
together.  Processing and control should occur at the nodes: sources 
(gUM/applyConstraints, RTPReceiver, canvas, etc.), sinks (RTPSender, 
MediaRecorder, media elements, etc.), and processing nodes (largely none 
today, but ctai's proposals will give us those as well).  
applyConstraints etc. cause the track to affect the source node attached 
to it.  We are stuck with some processing behavior tied to tracks, but 
not much, other than considering gUM MediaStreamTracks to effectively be 
a subclass (GetUserMediaStreamTracks) with applyConstraints (just enable 
and a few other bits).
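A toy sketch of that node model (all class and method names here are hypothetical, just to illustrate the shape): the track does no processing itself and simply forwards control calls to whatever source node it is attached to.

```javascript
// Toy model (hypothetical names): processing and control live at
// source/sink/processing nodes; the track is a dumb conduit that
// forwards applyConstraints to the source node attached to it.
class SourceNode {
  constructor() { this.settings = {}; }
  configure(constraints) { Object.assign(this.settings, constraints); }
}

class ConduitTrack {
  constructor(source) { this.source = source; }
  applyConstraints(constraints) {
    this.source.configure(constraints);  // forwarded, not processed here
  }
}
```

So `new ConduitTrack(cam).applyConstraints({ frameRate: 15 })` changes the camera source's settings; the track itself holds no processing state.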

(It's unfortunate in hindsight that we didn't define gUM as providing 
both a MediaSource object (which you can frob; the architectural 
equivalent of RTPSenders/Receivers) and MediaStreamTrack(s), which would 
be dumb conduits.  C'est la vie.)

> The scale attributes are just for scalable encoding/simulcast, so that 
> multiple encodings can be created from a single input MST.

There is a very clear use case outside of simulcast that we should 
support: I have an HD stream, and I want to send VGA (or smaller).  This 
matters especially when it's not a gUM source, or when I want to (per 
above) display it locally in HD or record it.
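The kind of per-encoding scale attribute under discussion reduces resolution by a linear factor applied to each dimension (this is how the RTCRtpEncodingParameters field scaleResolutionDownBy eventually behaves). A sketch of the arithmetic, with a hypothetical helper name:

```javascript
// Resolution produced by a per-encoding scale factor, where each
// dimension is divided by the factor. The sender downscales its copy;
// the input track itself is left at full resolution.
function scaledResolution(width, height, scaleDownBy) {
  return {
    width: Math.floor(width / scaleDownBy),
    height: Math.floor(height / scaleDownBy),
  };
}

// A 1920x1080 HD capture scaled down by 3 yields a 640x360 encoding
// (VGA-width), while the local HD display and any recording are untouched.
```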


-- 
Randell Jesup -- rjesup a t mozilla d o t com
Please please please don't email randell-ietf@jesup.org!  Way too much spam
Received on Wednesday, 26 August 2015 02:06:47 UTC
