Re: Revised Constraints modification API proposal

On 8/23/2012 6:06 AM, Harald Alvestrand wrote:
> On 08/22/2012 07:48 PM, Travis Leithead wrote:
>
>>> What I think is happening here is that we're using
>>> onaddtrack/onremovetrack to signal that capabilities have been applied.
>>> Perhaps that would be better with a callback argument in 
>>> applySettings or via
>>> an event listener on each individual LocalMediaStreamTrack.
>> This sounds like a good idea. I don't have a feel for what the WG 
>> feels is a
>> better design, but I'm open to pursuing either course. My design assumes
>> that applications are not well prepared for dynamic changes in track 
>> contents/
>> settings. My design also forces web developers to re-hookup tracks to 
>> downstream
>> MediaStream objects in the event of a setting change, which is somewhat
>> undesirable.
> Using add/remove track to signal track property change means that we 
> have no stable reference for "this camera, even though it changed 
> properties". This seems like a bad design to me.

It's even worse if the track has been extracted to create derived 
MediaStreams, and worse still if they've gone through transforming 
elements.
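
To make the two shapes above concrete, here is a rough TypeScript-flavoured 
strawman; applySettings and the callback/listener split follow the 
suggestions quoted above, the remaining names (TrackSettings, 
onsettingschanged, LocalTrack) are mine and purely illustrative:

  // A bag of device settings; the concrete members are out of scope here.
  interface TrackSettings { [name: string]: string | number | boolean; }

  interface LocalTrack {
    readonly kind: "audio" | "video";
    readonly label: string;

    // Option A: completion signalled via callbacks on the request itself.
    applySettings(settings: TrackSettings,
                  onSuccess?: () => void,
                  onError?: (reason: string) => void): void;

    // Option B: completion signalled by a listener on this same track
    // object, so the application keeps one stable reference to "this
    // camera" across setting changes.
    onsettingschanged: ((changed: string[]) => void) | null;
  }

Either way the track object survives the change, which is exactly what 
the add/remove-track approach loses.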

>
> The addition of a new resource and the modification of properties on a 
> resource seems to me to be different events.
>>
>>>> The "applySettings" API (I renamed it to sound more friendly), acts on
>>>> all the local media tracks in the list by default, or targets only a
>>>> specific track if one is indicated in the 2nd parameter.
>>> Again, what would happen if settings are conflicting between two 
>>> devices? It
>>> seems better if settings can be queried and applied only per track. 
>>> Applying
>>> settings to multiple tracks via a single call feels like it could be 
>>> an optimisation
>>> rather than a strictly necessary addition.
>> As mentioned previously, this is a good point, and can be simplified by
>> limiting the API to only work against a single track.
> If a track is mutable, the natural place to have the mutation API is 
> on the track.

Agreed, and that's really where the changes occur.  This also brings up 
how changes get mirrored back to the source, since neither the 
MediaStream nor even the MediaStreamTracks available to the consumer of 
the MediaStream are necessarily the same objects the source produced.  
Tracks can be used to create new, derived MediaStreams; they can go 
through transformations (such as WebAudio or MediaStream 
Processing-style APIs).  So you have to think about how a settings 
change requested on a Track (or Stream) ends up back at the capturing 
(or playback, or generating (Canvas, PeerConnection)) device.  A 
PeerConnection receiving one of these requests can (if it wishes and 
knows how) forward it to the far-end source; a resolution change 
request is the obvious example.
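
A rough sketch of what I'm picturing, with the caveat that every name 
below is invented for illustration: each track keeps a reference to 
whatever feeds it, and a settings request made on a derived track just 
walks back to that source, which either applies it locally or, for a 
PeerConnection-backed source, relays it to the far end.

  interface SettingsRequest {
    readonly name: string;                       // e.g. "width", "height"
    readonly value: string | number | boolean;
  }

  interface TrackSource {
    // Returns true if the source applied or forwarded the request.
    handleRequest(req: SettingsRequest): boolean;
  }

  class DerivedTrack {
    constructor(private source: TrackSource) {}

    // A request on any derived track is handed back to whatever produced
    // it; an intermediate transform would simply pass it along.
    requestSetting(req: SettingsRequest): boolean {
      return this.source.handleRequest(req);
    }
  }

  // A source fed by a PeerConnection can relay the request to the far end
  // instead of handling it locally (e.g. a resolution change).
  class FarEndSource implements TrackSource {
    handleRequest(req: SettingsRequest): boolean {
      // ...send req over the signalling channel; details elided...
      return true;
    }
  }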

This is what I was suggesting in previous telecons about "events" 
bubbling up the MediaStream/Track graph.

I should clarify: these do not need to be DOM events, so "events" may be 
an overloaded word.  (If DOM events happen to work out, that's fine, but 
I suspect they're not appropriate as-is; we can re-use a fair bit of the 
DOM Events API design, however.)  What we would need is a generic 
mechanism for passing these modification events up the chain to the 
source; we can define an initial set of requests (such as from the 
Camera Control list), but it should be easily extensible via some type 
of standards registry.
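
To sketch what such a mechanism might look like (again, every name here 
is invented; the only DOM-Events idea borrowed is the handled/propagation 
flag): a request object carries a name from the initial set or from a 
registry extension, and it is walked node-by-node up the graph until 
something consumes it.

  interface ModificationRequest {
    readonly name: string;    // "zoom", "focusMode", ... from the initial
                              // set, or a registry-defined extension
    readonly value: any;
    handled: boolean;         // set by whichever node satisfies it
  }

  interface GraphNode {
    readonly upstream: GraphNode | null;     // toward the source
    handle(req: ModificationRequest): void;  // may set req.handled
  }

  // Walk the request up the MediaStream/Track graph until something (a
  // transform, a local capture device, or a PeerConnection relaying it to
  // the far end) marks it handled.
  function dispatchUpstream(start: GraphNode, req: ModificationRequest): void {
    let node: GraphNode | null = start;
    while (node && !req.handled) {
      node.handle(req);        // a transform may adjust or consume it
      node = node.upstream;
    }
  }

Nothing in that sketch requires actual DOM events; the extensibility 
lives entirely in the set of allowed request names.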

-- 
Randell Jesup
randell-ietf@jesup.org

Received on Thursday, 23 August 2012 14:14:36 UTC