Re: updates to requirements document

OK, I misunderstood your request, sorry :).
But I think the media capture scenarios mainly focus on requirements around capture itself and the things surrounding it.
What you are requesting is discussed in the HTML Media Task Force of the HTML WG.
There is a draft called Media Source [1], whose goals are:

1) Allow JavaScript to construct media streams independent of how the media is fetched.
2) Define a splicing and buffering model that facilitates use cases like adaptive streaming, ad-insertion, time-shifting, and video editing.
3) Minimize the need for media parsing in JavaScript.
4) Leverage the browser cache as much as possible.
5) Provide byte stream definitions for WebM & the ISO Base Media File Format.
6) Not require support for any particular media format or codec.

You can find the reference below:
[1] http://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html
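For illustration, here is a minimal sketch of the idea in TypeScript, assuming the MediaSource and SourceBuffer interfaces from the draft; the segment URL and MIME type are placeholders, not part of the spec:

```ts
// A minimal sketch, assuming the MediaSource/SourceBuffer interfaces from the
// draft; the segment URL and MIME type below are placeholder assumptions.
const video = document.querySelector('video')!;
const mediaSource = new MediaSource();

// The element plays whatever the script appends, independent of how the
// bytes were fetched.
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');

  // The script, not the UA, decides where the bytes come from.
  const response = await fetch('/segments/clip-0.webm');
  const bytes = await response.arrayBuffer();

  sourceBuffer.addEventListener('updateend', () => mediaSource.endOfStream());
  sourceBuffer.appendBuffer(bytes);
});
```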


Does this meet your requirement?

Yang
Huawei

From: Young, Milan [mailto:Milan.Young@nuance.com]
Sent: Friday, July 06, 2012 1:48 PM
To: Sunyang (Eric); Jim Barnett; public-media-capture@w3.org
Subject: RE: updates to requirements document

Yes, there are several examples of the Application instructing the UA to do something with the media as a whole.  For example, sending it to a local canvas or sending it to a peer via WebRTC.  But what I’m asking for is the ability for the Application to get a handle on the actual bits within that stream.  This includes cases where there is no other destination except the Application reading the frames and performing its own recording or analysis.
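For concreteness, here is a hypothetical sketch of the kind of access being requested; the recorder interface below is an assumption (borrowed from the shape of the MediaRecorder API), not something any draft in this thread defines:

```ts
// Hypothetical sketch only: a recorder interface of this shape is assumed,
// not defined by any draft discussed in this thread.
declare const capturedStream: MediaStream; // e.g. obtained from getUserMedia

const recorder = new MediaRecorder(capturedStream, { mimeType: 'video/webm' });

// The Application gets encoded chunks while capture is still in progress and
// can record, analyze, or transmit them itself; no other sink is required.
recorder.ondataavailable = async (event: BlobEvent) => {
  const encodedBytes = await event.data.arrayBuffer();
  console.log(`received ${encodedBytes.byteLength} encoded bytes`);
};

recorder.start(1000); // request a chunk roughly every second
```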

If I’ve missed that in this or any other spec, please let me know.  If not, do you have any objections to my request?  Would you prefer alternate language?

Thanks

From: Sunyang (Eric) [mailto:eric.sun@huawei.com]
Sent: Thursday, July 05, 2012 6:20 PM
To: Young, Milan; Jim Barnett; public-media-capture@w3.org
Subject: Re: updates to requirements document

For your request “The UA must allow the Application to access an encoded representation of the media while capture is in progress”: I think current UAs already support this. During capture, the UA can fetch the data from the buffer and render it in a video element. So what is the new point here?
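For example, a minimal sketch of that local rendering during capture, assuming the promise-based navigator.mediaDevices.getUserMedia shape (which may differ from the callback form in the current draft):

```ts
// A minimal sketch of local rendering during capture, assuming the
// promise-based getUserMedia shape; the page only points the element at the
// stream and never handles the encoded bytes itself.
async function previewCapture(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const video = document.querySelector('video')!;
  video.srcObject = stream;
  await video.play();
}
```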

And as for “the UA will not always explicitly handle media transfer”: I don’t think it can be inferred from your request, but I agree with it. The media capture scenarios should focus on capture, not on upload or download. Jim, am I right?

Yang
Huawei

From: Young, Milan [mailto:Milan.Young@nuance.com]
Sent: Friday, July 06, 2012 7:37 AM
To: Jim Barnett; public-media-capture@w3.org
Subject: RE: updates to requirements document

Hello Jim, thanks for putting this together.

The 1st requirement under REMOTE MEDIA currently states: “The UA must be able to transmit media to one or more remote sites and to receive media from them.”  My concern is that the language is insufficient to handle all of the scenarios put forward in the section titled “Capturing a media stream” under “Design Considerations and Remarks”.  These are:

1) capture a video and upload to a video sharing site
2) capture a picture for my user profile picture in a given web app
3) capture audio for a translation site
4) capture a video chat/conference

The first two transfer types would typically be handled as a bulk transfer after capture completes, which is a good fit for conventional transports like HTTP.  The fourth type is an obvious match to WebRTC.  The third type is a mix of the two.  The application prefers real time transmission, but is probably willing to sacrifice a few seconds of latency in the interest of reliable transport.  Something like an application-specific streaming protocol over WebSockets seems appropriate.
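As a rough sketch of that third pattern: the WebSocket endpoint below is made up, and the onEncodedChunk hook is hypothetical, since no current draft provides encoded chunks to the page (which is exactly the gap this request is about):

```ts
// Rough sketch of near-real-time upload over a WebSocket; the endpoint URL
// and the onEncodedChunk hook are hypothetical assumptions.
const socket = new WebSocket('wss://translation.example.com/audio');
socket.binaryType = 'arraybuffer';

const pending: ArrayBuffer[] = [];

// Called with each encoded chunk as it becomes available (assumed hook).
function onEncodedChunk(chunk: ArrayBuffer): void {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(chunk);
  } else {
    // A few seconds of latency is acceptable as long as nothing is dropped.
    pending.push(chunk);
  }
}

socket.addEventListener('open', () => {
  for (const chunk of pending.splice(0)) {
    socket.send(chunk);
  }
});
```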

My request could be satisfied with the following new requirement: “The UA must allow the Application to access an encoded representation of the media while capture is in progress.”  Implicit in this request is that the UA will not always explicitly handle media transfer, but I think that could be inferred from the other requirements.

Does this sound reasonable?

Thanks


From: Jim Barnett [mailto:Jim.Barnett@genesyslab.com]
Sent: Tuesday, July 03, 2012 6:36 AM
To: public-media-capture@w3.org
Subject: updates to requirements document

I have filled out the requirements section in the use case document (http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html) and added links from the scenarios to the requirements. I have not modified any existing content or taken anything out of the document.

There’s still more work to do:

1) There are some free-floating requirements that were suggested on the list but not incorporated into any of the scenarios. Do we want to incorporate them into the scenarios or leave them as they are?
2) The scenarios contain lists of items that are similar to the requirements. Do we want to remove them, or leave them in and modify them to match the requirements more closely?
3) I have organized the requirements into four classes: permissions, local media, remote media, and media capture. Maybe it would be better to have a different classification or a single list.

Let me know what you think.


- Jim

Received on Friday, 6 July 2012 06:04:22 UTC