RE: First draft of "MediaStream Capture Scenarios" now available

(-)public-device-api

>-----Original Message-----
>From: Harald Alvestrand [mailto:harald@alvestrand.no]
>Sent: Monday, December 05, 2011 10:44 PM
>On 12/06/2011 02:45 AM, Travis Leithead wrote:
>> This draft is a combination of scenarios mingled with some commentary,
>> issues, and points of discussion.

>Actually, I'm a bit disappointed with the scenarios part of this; there
>are a lot of cases (rewind, take snapshot and so on) that I'd recognize
>as scenarios / use cases, but they are not called out to a degree where
>I can recognize them as specific scenarios; they are rather buried
>within the commentary.

Don't be too disappointed--it's a first draft :-)

>Would it be possible to pull out the specific scenarios that you would
>like us to support, independent from the possible / current / proposed
>implementation descriptions?

Sure thing. I'll see about lifting them out into an independent top-level section 
(which I think would make them clearer). Shall I rename the current 
"scenarios" section to something else (possibly "Commentary")?

>Some detailed commentary:
>
>- header: This should be listed as a WEBRTC/DAP task force item, not a
>DAP item.

I'm happy to change this, but I don't know how. The SOTD section is auto-generated
by ReSpec. Any ReSpec experts out there who can provide pointers?
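
My guess (untested) is that it involves the working-group fields in
respecConfig, something along these lines -- the group names, URIs, and
mailing list below are placeholders I haven't verified, and I'm not sure
ReSpec even accepts arrays there for a joint deliverable:

  var respecConfig = {
      specStatus: "ED",
      shortName:  "capture-scenarios",   // placeholder
      editors:    [{ name: "Travis Leithead", company: "Microsoft" }],
      // Does ReSpec take arrays here to mark a joint WEBRTC/DAP deliverable?
      wg:         ["Device APIs Working Group",
                   "Web Real-Time Communications Working Group"],
      wgURI:      ["http://www.w3.org/2009/dap/",
                   "http://www.w3.org/2011/04/webrtc/"],
      wgPublicList: "public-media-capture"   // guess at the TF list
  };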


>- intro: the use of the word "recording" should be avoided for the
>WebRTC scenarios, since the whole subject of recording sessions is still
>open in WEBRTC. "Capture" is the most common word, I think.

OK

>- 3.1 stream initialization: The result of initialization will have to
>be available as a MediaStream. As long as the WebRTC API is totally
>dependent on the MediaStream concept, this is not a question. If you
>want to suggest other forms of the available capture, those may be
>alternatives, or be something that can be converted into a MediaStream,
>but current text is too weak on this point.

I'm actually happy to hear that. Since the MediaStream interface wasn't 
included in the first draft of the getUserMedia spec, I was left wondering 
whether there was some other planned integration point. I'd prefer to build on the 
MediaStream interface.
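
Just so we're on the same page, the shape I'd expect the scenarios to build
on is roughly the following -- treat the constraint syntax and the track
accessor as placeholders, since those details are still in flux:

  // Sketch: initialization hands back a MediaStream via the success callback.
  navigator.getUserMedia({ video: true, audio: true },
      function (stream) {
          // The capture scenarios would then hang off the MediaStream and
          // its MediaStreamTrack members.
          var videoTracks = stream.getVideoTracks();  // accessor name is a guess
          console.log("capture started with " + videoTracks.length +
                      " video track(s)");
      },
      function (error) {
          console.log("initialization failed: " + error);
      });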

>- 3.2 reinitialization: We (WebRTC editors) have discussed moving the
>"ended" event from the MediaStream to the MediaStreamTrack; it seems to
>be easier to define it crisply there.

That sounds like a good start. MediaStreamTrack is actually where I believe 
some of the capture integration points should be made, but that will require 
some additional discrete factoring (i.e., MediaStreamTrack -> Audio/VideoTrack [3]).
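
Concretely, that would let a capture scenario react to a single source going
away (camera unplugged, permission revoked) without tearing down the whole
stream. A sketch, continuing from the stream above and assuming "ended" does
end up on the track as you describe:

  var videoTrack = stream.getVideoTracks()[0];
  videoTrack.addEventListener("ended", function () {
      // The capture scenario could prompt the user to reinitialize here
      // (the 3.2 reinitialization case).
      console.log("video track ended; reinitialization needed");
  });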


>- 3.6.3 It's unclear for me what you are pointing to when you refer to
>the WebRTC "take a picture" scenario. I can't find it in our scenarios
>document.

I'll try to clear that up. I was drawing directly from the WebRTC spec,
section 3.3 Examples (third example) [4].
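
For what it's worth, the shape of the flow I had in mind is: route the
stream into a <video> element for preview, then copy a frame to a <canvas>
on user action. A stripped-down sketch (the element IDs and the details of
attaching the stream to the video element are assumptions on my part):

  // Assumes the MediaStream has already been attached to the <video> element.
  var video  = document.getElementById("preview");   // <video autoplay>
  var canvas = document.getElementById("snapshot");  // <canvas> sized to match
  document.getElementById("shutter").onclick = function () {
      var ctx = canvas.getContext("2d");
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      var dataURL = canvas.toDataURL("image/png");    // the captured still
      // ...display or upload dataURL as the "picture"
  };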


>- 3.10.1 In a streaming case, you may support rewind through recording
>to a tail-drop buffer. So it's not impossible, just complex.


I'll polish that up a bit. Thanks.
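
For the record, the way I read the suggestion: keep a bounded rolling buffer
of the most recent captured data while streaming, so a limited rewind is
possible without unbounded storage. A page-side sketch, assuming some
recorder-like sink that emits timed chunks (neither spec defines such a
recorder today, so the API below is illustrative only):

  var MAX_CHUNKS = 30;            // e.g., 30 one-second chunks ~= 30 s of rewind
  var chunks = [];

  var recorder = new MediaRecorder(stream);  // "stream" from the earlier sketch
  recorder.ondataavailable = function (event) {
      chunks.push(event.data);               // event.data is a Blob
      if (chunks.length > MAX_CHUNKS) {
          chunks.shift();                    // drop the oldest chunk
      }
  };
  recorder.start(1000);                      // emit a chunk every 1000 ms

  // To "rewind", assemble the buffered chunks into a playable Blob:
  function rewindBlob() {
      return new Blob(chunks, { type: recorder.mimeType });
  }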

[3] http://dev.w3.org/html5/spec/Overview.html#audiotracklist 
[4] http://dev.w3.org/2011/webrtc/editor/webrtc.html#examples

Received on Tuesday, 6 December 2011 19:11:57 UTC