- From: Anant Narayanan <anant@mozilla.com>
- Date: Thu, 06 Oct 2011 12:55:12 -0700
- To: Adam Bergkvist <adam.bergkvist@ericsson.com>
- CC: "Tommy Widenflycht (ᛏᚮᛘᛘᚤ)" <tommyw@google.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>
Hi Tommy, Adam,
On 10/5/2011 6:51 AM, Adam Bergkvist wrote:
> I agree with Tommy. Right now, once you have a MediaStream you can start
> using it. If getUserMedia returns a stream directly, it would have to be
> empty (no tracks), and tracks would have to be added later. I think it
> would simplify things (e.g. MediaStream playback and sending with
> PeerConnection) if a MediaStream is immutable with regards to its track
> list.
I think it is unrealistic to assume that MediaStreams are immutable. Web
developers are already very familiar with the concept that an
XMLHttpRequest is a living object whose state changes over time, and
that they must monitor and respond to those changes.
A MediaStream is no different, and there are multiple circumstances
under which a stream may change, some of which are:
- Network disruption
- Physical disconnection of webcam/microphone
- Software mute at OS level of microphone
Since the web application must be able to respond to any of these (and
other) changes to the MediaStream, we will most likely standardize on
several DOM events on the MediaStream object anyway. Taking it one step
further and adding an event for track addition and removal feels very
"webby" to me :)
That being said, I'd like to hear what you think the advantages of an
immutable track list are, and whether you think user agents are
actually able to guarantee that immutability.
> On 2011-10-05 08:59, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) wrote:
>> Yeah, I understood that during the office hour call. Dunno, your
>> suggestion seems less elegant and clear but that might just be because
>> I am quite new to the JS world. Can you list some use cases where your
>> suggestion will really make a difference?
One of my primary goals is to make the getUserMedia API as close to
other Web APIs as possible. In the simplest case of the web developer
who wants both a video and audio stream, the call would look like:
var stream = navigator.getUserMedia();
stream.addEventListener("readyState", streamIsReady);
function streamIsReady() {
  // ... note that errors can be handled here too,
  // if we choose to define readyState to be broad
}
In the current spec, this looks like:
var stream = navigator.getUserMedia("audio,video", streamIsReady,
streamError);
function streamIsReady() { ... }
function streamError() { ... }
Two reasons why I prefer the former over the latter:
- Events > Callbacks. Events propagate, can be chained (à la jQuery),
and there can be multiple listeners for the same event, which is useful
in some cases. None of that is true of explicit callbacks.
- An event like "readyState" is broad enough (just like the event of the
same name in XHR) to cover many cases, so in the typical case the
developer has to attach only one event listener. Of course we can choose
to add other events to provide even more flexibility, and the more
sophisticated web-apps will have multiple event listeners (which is fine
IMO).
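To illustrate the multiple-listeners point with a plain EventTarget
(the listener bodies are invented for the example): two independent
parts of a page can observe the same event, whereas an explicit
callback parameter has exactly one slot.

```javascript
// Sketch: several independent listeners observing one event.
var target = new EventTarget();
var log = [];

// e.g. the UI layer updates a "ready" indicator...
target.addEventListener("readyState", function () {
  log.push("ui: stream is ready");
});
// ...while a separate module reacts to the same transition.
target.addEventListener("readyState", function () {
  log.push("stats: stream is ready");
});

target.dispatchEvent(new Event("readyState"));
console.log(log.join("\n")); // both listeners ran, in registration order
```

With a callback-style API, the second consumer would have to wrap or
replace the first one's callback by hand.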
I feel that since developers will be attaching event listeners to a
MediaStream anyway, asking them to do so right after they get one from
getUserMedia is not necessarily a bad thing. This is a paradigm we
should try to encourage.
I'm still open to arguments and can be convinced otherwise! But this
makes it feel very close to some of the other async Web APIs out there
and feels intuitive as a JS developer to me :)
Thanks,
-Anant
Received on Thursday, 6 October 2011 19:55:50 UTC