RE: revised recording proposal

Thanks for putting out the revision.  Apologies if I am repeating comments already made by others.


a)      "This document was published by the Web Real-Time Communication Working Group<http://www.w3.org/2011/04/webrtc/> as an Editor's Draft. If you wish to make comments regarding this document, please send them to public-media-capture@w3.org<mailto:public-media-capture@w3.org> (subscribe<mailto:public-media-capture-request@w3.org?subject=subscribe>, archives<http://lists.w3.org/Archives/Public/public-media-capture/>). All feedback is welcome."



I believe this document is a deliverable of the Media Capture TF, yet the status text above attributes it to the WebRTC Working Group, and it is not listed among the deliverables in the WebRTC charter.  I think the Media Capture TF charter (i.e. Robin's email) should be explicitly modified to reflect this deliverable.



b)      I do not believe the current specification meets Recording requirement 12 in http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html#requirements (at least not without taking dependencies on several other specifications in various states of stability).  Can you explain why you think pause/resume methods are not desirable?
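For concreteness, here is a minimal usage sketch of what I have in mind; the ProposedRecorder interface and all of its method names are assumptions offered for discussion, not text from the current draft:

    // Hypothetical recorder surface with pause/resume, sketched only to
    // make the request concrete; none of these names come from the draft.
    interface ProposedRecorder {
      record(timeSlice?: number): void;
      pause(): void;     // suspend capture but keep the recording session alive
      resume(): void;    // continue appending to the same recording
      endRecording(): void;
    }

    function recordWithABreak(recorder: ProposedRecorder): void {
      recorder.record();
      // e.g. the user puts the call on hold:
      recorder.pause();
      // ...and later takes it off hold, continuing the same recording:
      recorder.resume();
      recorder.endRecording();
    }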



c)       2.2 Methods, record:  "If the timeSlice argument has been provided, then once timeSlice milliseconds of data have been colleced, raise a dataavailable event containing the Blob of collected data, and start gathering a new Blob of data."



I am still not convinced that time-sliced data is better returned as a Blob rather than an ArrayBuffer, particularly if latency is a critical concern (i.e. "reliable" streaming).  At the very least you will need the extra step of invoking the FileReader interface (http://www.w3.org/TR/FileAPI/#FileReader-interface) to get at the Blob's data.
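As an illustration, here is a minimal sketch of that extra step, assuming the handler receives each time-sliced Blob via the dataavailable event described above; the sendChunk sink is hypothetical:

    // Every time-sliced Blob has to pass through FileReader before its
    // bytes are usable, adding an asynchronous hop per slice.
    function forwardSlice(slice: Blob): void {
      const reader = new FileReader();
      reader.onloadend = () => {
        // Only at this point do we have an ArrayBuffer to hand to a sink.
        sendChunk(reader.result as ArrayBuffer);
      };
      reader.readAsArrayBuffer(slice);
    }

    // Hypothetical downstream consumer, e.g. a WebSocket or jitter buffer.
    declare function sendChunk(chunk: ArrayBuffer): void;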



d)      I believe the UA will not be in compliance with Recording requirement 1 in http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html#requirements if there is no ability to return a file, and the document needs to clarify this.  One approach could be that intermediate data returned when a time-slice is specified takes the form of a Blob (or ArrayBuffer), while a File object is returned when endRecording() is invoked (sketched below).



I also think we should not need a dependency on File Writer (http://www.w3.org/TR/file-writer-api/) to meet Recording requirement 1, which, as far as I can tell, would be necessary if we return a Blob rather than a File.
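Here is a minimal sketch of the split suggested above; the interface and event names are mine for illustration, not the draft's:

    // Hypothetical recorder surface for illustration only.
    interface RecorderWithFileResult {
      ondataavailable: (e: { data: Blob }) => void;    // intermediate slices
      onrecordingdone: (e: { file: File }) => void;    // final result
      record(timeSlice?: number): void;
      endRecording(): void;
    }

    function captureToFile(recorder: RecorderWithFileResult): void {
      recorder.ondataavailable = (e) => {
        // Slices are fine as Blobs (or ArrayBuffers) for streaming use.
        console.log("got a slice of", e.data.size, "bytes");
      };
      recorder.onrecordingdone = (e) => {
        // A File carries a name and can be saved or uploaded directly,
        // with no dependency on the File Writer API.
        console.log("recording complete:", e.file.name, e.file.size, "bytes");
      };
      recorder.record(1000);
      // ...later, when the application is done:
      recorder.endRecording();
    }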



e)      get/set recording options.  Why is it necessary to expose these?  The returned Blob should already have the media type set in its type attribute.
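For instance (assuming the same dataavailable shape as in the sketches above), the application can already discover the media type from the Blob itself:

    // The media type is already discoverable on the returned Blob, so a
    // separate getter/setter seems redundant.  Event shape assumed.
    function logRecordedType(e: { data: Blob }): void {
      console.log("recorded media type:", e.data.type);  // e.g. "video/webm"
    }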



f)       I'd like to see more specificity on the error/warning events, in the form of a returned error/warning object.  This would be consistent with other device API specifications (see for instance http://www.w3.org/TR/2010/CR-geolocation-API-20100907/#position-error).
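Something along these lines, modeled loosely on PositionError, is what I have in mind; the interface name, fields and handler hooks are assumptions for discussion:

    // Hypothetical error/warning object, loosely modeled on PositionError.
    interface RecordingError {
      readonly code: number;      // from a small set of spec-defined codes
      readonly message: string;   // human-readable detail for debugging
    }

    declare const recorder: {
      onerror: (e: RecordingError) => void;
      onwarning: (e: RecordingError) => void;
    };

    recorder.onerror = (e) => {
      console.error("recording failed:", e.code, e.message);
    };

    recorder.onwarning = (e) => {
      console.warn("recording degraded:", e.code, e.message);
    };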

From: Jim Barnett [mailto:Jim.Barnett@genesyslab.com]
Sent: Friday, November 30, 2012 7:13 AM
To: public-media-capture@w3.org
Subject: revised recording proposal

Here's an updated proposal, which I have checked into mercurial at http://dvcs.w3.org/hg/dap/file/802e29e48f73/media-stream-capture/RecordingProposal.html


-          Jim

Received on Thursday, 6 December 2012 15:49:58 UTC