
Re: [rtcweb] Mute implementations (Re: More Comments on draft-ietf-rtcweb-use-cases-and-requirements-07)

From: Randell Jesup <randell-ietf@jesup.org>
Date: Sun, 17 Jun 2012 12:02:20 -0400
Message-ID: <4FDDFF8C.6060902@jesup.org>
To: rtcweb@ietf.org, "public-media-capture@w3.org" <public-media-capture@w3.org>

CCing media-capture, since much of this touches on issues there.  For 
context, see the recent rtcweb archives.  The general topic is "Mute" 
(and Hold): both replacing video with an image or a pre-recorded video 
clip (looping if short), and the related issue of replacing front-camera 
video with rear-camera video without operations that take significant 
time (renegotiation of the PeerConnection).  Basically, immediate 
source switches.

On 6/17/2012 10:17 AM, Jim Barnett wrote:
> I still think that there is a significant difference between
> files/images and live sources.  A live source provides a lot of
> information about my environment – what’s going on, whether I have
> shaved, etc.  The file or image source that we use for hold is designed
> to provide *no* information about my environment.  In a user’s mind,
> showing “please wait” or an image of a waterfall is very different from
> showing anything *live* from his environment, and we may want to
> reflect that difference in the requirements.  For example, does the user
> have to give explicit permission to show the “Please wait” source when
> the call is put on hold, or is that configured as a default in the
> browser?  (The latter, I’d think, which makes it different from the
> front and rear cameras.)

Selecting an image or video from the user's filesystem would require the 
user's direct permission (à la <input type="file">).  Sourcing a 
MediaStream from a web source (i.e. part of the app) or from an 
app-generated canvas (with "please wait" drawn in it) would not.

If the app requests both front and rear camera streams and gets them 
(via two getUserMedia() calls), then permission has already been granted 
and the app should be able to do anything with those streams.

If the app hasn't requested the rear camera yet, that request would have 
to be made first, which would prompt the user for permission.

Once the app has both streams, it should be able to switch between the 
streams on demand (front/back UI button in the app), with no delay, and 
certainly no renegotiation of the call.
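The switch above can be sketched with the later RTCRtpSender.replaceTrack() 
API, which performs exactly this swap with no SDP offer/answer.  This 
assumes both camera streams were already granted; the sender and camera 
tracks are stubbed so the logic runs standalone:

```javascript
// Stand-in for RTCRtpSender; the real replaceTrack() swaps the outgoing
// track without renegotiating the call.
function makeStubSender(initialTrack) {
  return {
    track: initialTrack,
    async replaceTrack(newTrack) {
      this.track = newTrack;
    },
  };
}

// Tracks as they would come from two granted getUserMedia() calls
// (e.g. with { video: { facingMode: "user" } } and "environment").
const frontTrack = { kind: "video", label: "front" };
const rearTrack = { kind: "video", label: "rear" };

// UI "flip camera" handler: an immediate source switch, with no delay
// from SDP renegotiation.
async function flipCamera(sender, front, rear) {
  const next = sender.track === front ? rear : front;
  await sender.replaceTrack(next);
  return next.label;
}

const sender = makeStubSender(frontTrack);
flipCamera(sender, frontTrack, rearTrack).then((label) => {
  console.log(label); // "rear"
});
```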

If there's a reason to, the app could renegotiate after switching.  For 
a (contrived) example: if it had negotiated the call with a low maximum 
resolution, it could switch immediately (downscaling the new 
high-resolution stream) until a renegotiation allowed sending the full 
resolution.  That's contrived, but there may be real-world cases, and 
the choice is up to the app.

The same considerations apply to switching between video (live or 
recorded) and still images.

These points speak to how MediaStreams work and can be processed, to how 
a track in a MediaStream can be used to represent different things, and 
to how that differs from a MediaStream with two tracks.  Some of this 
affects the interface and the assumptions about MediaStreams and tracks 
in PeerConnection.  These are real issues because PeerConnection, 
<video>, and others may play a particular track of a MediaStream.

I'll further respond to Stefan's detailed email.

Randell Jesup
Received on Sunday, 17 June 2012 16:03:16 UTC
