RE: revised recording proposal

I was thinking more of WebSocket.send() rather than XHR, but you are right - the current version of the WebSocket send() method does support Blob transmission (http://dev.w3.org/html5/websockets/#dom-websocket-send).  Blobs for time-sliced data should work.
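
For illustration, a minimal sketch of that path (assuming a recorder per the draft, where record(timeSlice) fires dataavailable events carrying a Blob):

    // recorder: a recorder object per the draft proposal (hypothetical).
    var ws = new WebSocket("wss://example.org/ingest"); // URL is illustrative

    recorder.ondataavailable = function (e) {
      // WebSocket.send() accepts a Blob directly, so each time-sliced
      // chunk goes out with no FileReader round-trip on the sending side.
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(e.data);
      }
    };

    recorder.record(1000); // roughly one Blob of data every 1000 ms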

From: Travis Leithead [mailto:travis.leithead@microsoft.com]
Sent: Thursday, December 06, 2012 11:38 AM
To: Jim Barnett; Mandyam, Giridhar; public-media-capture@w3.org
Subject: RE: revised recording proposal


c) 2.2 Methods, record:  "If the timeSlice argument has been provided, then once timeSlice milliseconds of data have been collected, raise a dataavailable event containing the Blob of collected data, and start gathering a new Blob of data."



I am still not convinced that time-sliced data is better returned as a Blob rather than an ArrayBuffer, particularly if latency is a critical concern (i.e. "reliable" streaming).  At the very least you will need the extra step of invoking the FileReader interface (http://www.w3.org/TR/FileAPI/#FileReader-interface) to get at the Blob's data.

>> Anyone else have an opinion on this?

For scenarios that send directly to the server, XHR supports sending either an ArrayBufferView or a Blob, so there's really no difference there. The advantage of the Blob is that (in XHR) the mimeType sent to the server is auto-populated from the Blob's type.

If you also have multiple recorders running in parallel (with different encodings), then it's also helpful to have each Blob report its own type, rather than having to keep track of each ArrayBuffer's type yourself.
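
A rough sketch of that upload path (the recorder names and MIME strings are illustrative, not from the draft):

    function upload(blob) {
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/upload"); // endpoint is illustrative
      // Note: no setRequestHeader("Content-Type", ...) is needed - when
      // send() is given a Blob, the Blob's type populates the header.
      xhr.send(blob);
    }

    // Two recorders in parallel; each Blob is self-describing via .type:
    webmRecorder.ondataavailable = function (e) { upload(e.data); }; // "video/webm"
    mp4Recorder.ondataavailable  = function (e) { upload(e.data); }; // "video/mp4"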

From: Jim Barnett [mailto:Jim.Barnett@genesyslab.com]
Sent: Thursday, December 6, 2012 11:00 AM
To: Mandyam, Giridhar; public-media-capture@w3.org
Subject: RE: revised recording proposal

Responses in-line

From: Mandyam, Giridhar [mailto:mandyam@quicinc.com]
Sent: Thursday, December 06, 2012 10:48 AM
To: Jim Barnett; public-media-capture@w3.org
Subject: RE: revised recording proposal

Thanks for coming out with the revision.  Apologies if I am repeating comments already made by others.


a)      "This document was published by the Web Real-Time Communication Working Group<http://www.w3.org/2011/04/webrtc/> as an Editor's Draft. If you wish to make comments regarding this document, please send them to public-media-capture@w3.org<mailto:public-media-capture@w3.org> (subscribe<mailto:public-media-capture-request@w3.org?subject=subscribe>, archives<http://lists.w3.org/Archives/Public/public-media-capture/>). All feedback is welcome."



I believe this is a deliverable of the Media Capture TF.  This document is not listed among the deliverables in the WebRTC charter.  I think the Media Capture TF charter (i.e. Robin's email) should be explicitly modified to reflect this deliverable.

>> I wasn't sure what to put here.  The Media Capture Wiki says "The Media Capture API <http://dev.w3.org/2011/webrtc/editor/getusermedia.html> (a.k.a. navigator.getUserMedia) is developed by a joint task force between the WebRTC <http://www.w3.org/2011/04/webrtc/> and Device APIs <http://www.w3.org/2009/dap/> Working Groups".  That made it sound to me like there isn't really a separate entity, and since Device APIs doesn't seem to be involved in Recording, I figured that meant it was a product of the WebRTC group.  But it can be changed.



b) I do not believe Recording requirement 12 in http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html#requirements is met by the current specification (at least not without several dependencies on other specifications in various states of stability).  Can you please explain why you think pause/resume methods are not desirable?

>> They can be added.  The semantics would be that the Recorder stops gathering data while paused.  This would be different from the behavior when a Track is paused (in that case, the Recorder adds filler - silence or black frames - so that synchronization can be maintained).
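
>> Usage might look something like this (pause/resume are hypothetical method names, not in the current draft; the comments contrast the two behaviors):

    recorder.record();
    recorder.pause();  // the Recorder gathers no data at all while paused
    recorder.resume(); // gathering picks up again from here

    // By contrast, pausing a Track keeps the Recorder running: it records
    // filler (silence / black frames) so the tracks stay synchronized.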



c) 2.2 Methods, record:  "If the timeSlice argument has been provided, then once timeSlice milliseconds of data have been collected, raise a dataavailable event containing the Blob of collected data, and start gathering a new Blob of data."



I am still not convinced that time-sliced data is better returned as a Blob rather than an ArrayBuffer, particularly if latency is a critical concern (i.e. "reliable" streaming).  At the very least you will need the extra step of invoking the FileReader interface (http://www.w3.org/TR/FileAPI/#FileReader-interface) to get at the Blob's data.
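
For concreteness, that extra hop looks roughly like this (a sketch; the dataavailable handler follows the draft, the rest is standard File API usage):

    recorder.ondataavailable = function (e) {
      var reader = new FileReader();
      reader.onload = function () {
        var bytes = reader.result; // an ArrayBuffer
        // ... hand the raw bytes to whatever needs an ArrayBuffer ...
      };
      // Asynchronous - this extra hop is where the added latency comes in.
      reader.readAsArrayBuffer(e.data);
    };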

>> Anyone else have an opinion on this?



d) I believe that the UA will not be in compliance with Recording requirement 1 in http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html#requirements if there is no ability to return a file.  The document needs to clarify this.  Perhaps one approach would be for the intermediate data returned when a timeSlice is specified to be a Blob (or ArrayBuffer), with a File object returned when endRecording() is invoked.



I also think we shouldn't have a dependency on File Writer (http://www.w3.org/TR/file-writer-api/) to meet Recording requirement 1, which, as far as I can tell, would be necessary if we return a Blob as opposed to a File.



>> But File inherits from Blob, so it would have the same disadvantages.  The FileReader interface lets the app access the Blob's data as ArrayBuffers or text, so the app can use any method available to it to save the data to a file.  I don't think it would have to use FileWriter.  A File has no magic when it comes to persistence - it is just a Blob with a couple of extra attributes, and it would still be up to the app to persist it.
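
>> For instance, here is a sketch of persisting a Blob with no FileWriter dependency at all, via an object URL and a download link (the filename is illustrative):

    function saveBlob(blob, filename) {
      var url = URL.createObjectURL(blob);
      var a = document.createElement("a");
      a.href = url;
      a.download = filename; // e.g. "recording.webm"
      document.body.appendChild(a);
      a.click();             // prompts the user to save the file
      document.body.removeChild(a);
      URL.revokeObjectURL(url);
    }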



e) Get/set recording options: why is it necessary to expose these?  The returned Blob should have the media type set in its type attribute.
>> The app needs to know what formats are available, select the format it wants, and set other attributes.
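
>> Roughly, the app-side flow might be (the getOptions/setOptions shapes here are invented for illustration; the draft's exact surface may differ):

    // Hypothetical: discover what the UA can encode, then configure it.
    var opts = recorder.getOptions();                  // invented shape
    if (opts.mimeTypes.indexOf("video/webm") !== -1) {
      recorder.setOptions({ mimeType: "video/webm" }); // invented shape
    }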



f) I'd like to see more specificity on the error/warning events, in the form of a returned error/warning object.  This practice is consistent with other device API specifications (see for instance http://www.w3.org/TR/2010/CR-geolocation-API-20100907/#position-error).

>> Agreed.  A lot of work is needed here.  (And you've agreed to help do it...)
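
>> As a placeholder, something along the lines of Geolocation's PositionError might work (names and fields below are invented):

    // Placeholder error shape, modeled on PositionError:
    interface RecordingError {
      readonly code: number;    // e.g. OUT_OF_MEMORY, TRACK_UNAVAILABLE
      readonly message: string; // a debugging aid, not for end users
    }

    recorder.onerror = function (e) {
      // e.error is assumed to carry a RecordingError
      console.log(e.error.code, e.error.message);
    };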

From: Jim Barnett [mailto:Jim.Barnett@genesyslab.com]
Sent: Friday, November 30, 2012 7:13 AM
To: public-media-capture@w3.org
Subject: revised recording proposal

Here's an updated proposal, which I have checked into Mercurial at http://dvcs.w3.org/hg/dap/file/802e29e48f73/media-stream-capture/RecordingProposal.html


- Jim

Received on Thursday, 6 December 2012 19:52:45 UTC