RE: revised recording proposal

  Yes, the events need more work, but for this draft I was thinking that the events would have only the attributes that are defined in the DOM Event spec.  CustomEvent has a .detail attribute where you can stick whatever you want.  I figured that, pending more thought on the subject, we could use that to hold the Blob in the dataavailable event, and some sort of error code in the recordingerror and recordingwarning events.  But at the very least we will need to specify what those error codes will be, and that may lead us to want additional attributes, which would require a new interface definition.

- Jim

-----Original Message-----
From: Martin Thomson [] 
Sent: Friday, November 30, 2012 1:44 PM
To: Jim Barnett
Cc: Mandyam, Giridhar; Timothy B. Terriberry;
Subject: Re: revised recording proposal

One request from me: document the events more thoroughly.  CustomEvent is fine for us clowns in application land, but I'd like to know what attributes are present.

On 30 November 2012 10:37, Jim Barnett <> wrote:
> b. Would the UA be in compliance if it returns timeSliced data in the form of a File object?  I don't believe the spec or requirements have to change as written, because the File object inherits from Blob (and the Blob may be backed by disk data regardless).
>>> Hmm, good question.  If a File is a Blob, I suppose it might be.  What are the gc properties of Files?  I think of a "file" as something that persists on disk even if the app isn't referencing it.  You wouldn't want the file system to fill up when you were doing buffer-at-a-time processing.  On the other hand, I don't know if a File behaves the way I expect a "file" to work.

My take: it would be compliant.  However, the UA would be responsible for managing the lifecycle of the Blob and the resources it consumes.  The application would not have to be responsible for cleaning up the filesystem or anything like that.

> c. Should we provide a code example for the first use case below in the spec?  I'm still having trouble seeing how the necessary media processing could be achieved in real-time using the Recording API.
>>> I don't think code samples belong in a use case doc, because the use case doc doesn't define the API.  We can see what sort of sample code people would like to add to the spec itself, though most examples that I have seen tend to be pretty simple, just illustrating the basic concepts.

The request was for example code in the spec.  To my mind, that would be very, very useful (if not obligatory).

> d. I think we should just be explicit and add an ASR scenario to the 
> doc.  Although I personally think the right way to do this is on a 
> "reliable"  PeerConnection, there is no point in rehashing that 
> debate.  Maybe something along the lines of
> Near-real time Speech Recognition
> So-and-so is interacting with a turn-by-turn navigation website while in the car and requires "hands free" interaction with the website.  Before beginning to drive, he browses to the website and allows the website to capture data from his handset microphone.  He then speaks his destination, and his voice data is sent to a server which processes the captured voice and sends back map image tiles and associated metadata for rendering in the browser.
>>> I'd be happy to add something like this.  Does anyone else have any comments?

That would be a reasonable use case.  That might not be the example that I choose to illustrate in the spec though.


Received on Friday, 30 November 2012 18:48:07 UTC