Re: revised recording proposal

One request from me: document the events more thoroughly.  CustomEvent
is fine for us clowns in application land, but I'd like to know what
attributes are present.
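Something along these lines is what I mean; the attribute names below are
my guesses at what needs to be pinned down, not what the draft says today:

  navigator.mediaDevices.getUserMedia({ audio: true, video: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    recorder.addEventListener('dataavailable', (event) => {
      console.log(event.type);       // "dataavailable"
      console.log(event.data);       // the recorded chunk as a Blob?
      console.log(event.data.type);  // container/codec MIME type?
    });
    recorder.start();
  });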

On 30 November 2012 10:37, Jim Barnett <Jim.Barnett@genesyslab.com> wrote:
> b. Would the UA be in compliance if it returns timeSliced data in the form of a File object?  I don't believe the spec or requirements have to change as written, because the File object inherits from Blob (and the Blob may be backed by disk data regardless).
>>> Hmm, good question.  If a File is a Blob, I suppose it might be.  What are the gc properties of Files?  I think of a "file" as something that persists on disk even if the app isn't referencing it.  You wouldn't want the file system to fill up when you were doing buffer-at-a-time processing.  On the other hand, I don't know if a File behaves the way I expect a "file" to work.

My take: it would be compliant.  However, the UA would be responsible
for managing the lifecycle of the Blob and the resources that it
consumes.  The application would not be responsible for cleaning up
the filesystem or anything like that.
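
To make the lifecycle concrete, something like this is what I have in
mind (API names assumed from the shape the draft is converging on; the
/chunks endpoint is invented for illustration):

  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    recorder.addEventListener('dataavailable', (event) => {
      fetch('/chunks', { method: 'POST', body: event.data });
      // No close()/delete step: once the app drops its reference, the UA
      // can reclaim whatever backs the Blob, even if it spilled to disk.
    });
    recorder.start(1000);  // assumed: a Blob delivered roughly every second
  });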

> c. Should we provide a code example for the first use case below in the spec?  I'm still having trouble seeing how the necessary media processing could be achieved in real-time using the Recording API.
>>> I don't think code samples belong in a use case doc, because the use case doc doesn't define the API.  We can see what sort of sample code people would like to add to the spec itself, though most examples that I have seen tend to be pretty simple, just illustrating the basic concepts.

The request was for example code in the spec.  To my mind, that would
be very, very useful (if not obligatory).
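
Even something this small would help; names assumed, not normative:

  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    recorder.addEventListener('dataavailable', async (event) => {
      const bytes = new Uint8Array(await event.data.arrayBuffer());
      console.log('chunk of', bytes.length, 'bytes');  // stand-in for real processing
    });
    recorder.start(250);  // small timeslice for near-real-time delivery
  });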

> d. I think we should just be explicit and add an ASR scenario to the doc.  Although I personally think the right way to do this is on a "reliable"  PeerConnection, there is no point in rehashing that debate.  Maybe something along the lines of
>
> Near-real time Speech Recognition
>
> So-and-so is interacting with a turn-by-turn navigation website while in the car and requires "hands free" interaction with the website.  Before beginning to drive, he browses to the website and allows the website to capture data from his handset microphone.  He then speaks his destination, and his voice data is sent to a server which processes the captured voice and sends back map image tiles and associated metadata for rendering in the browser.
>
>>> I'd be happy to add something like this.  Does anyone else have any comments?

That would be a reasonable use case, though it might not be the example
I would choose to illustrate in the spec.
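
For what it's worth, a rough sketch of that scenario; the endpoint and
message framing are invented purely for illustration:

  const socket = new WebSocket('wss://asr.example.com/recognize');
  socket.addEventListener('message', (event) => {
    console.log('server result:', event.data);  // recognized destination, tile metadata, etc.
  });
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    recorder.addEventListener('dataavailable', (event) => {
      if (socket.readyState === WebSocket.OPEN) {
        socket.send(event.data);  // WebSocket.send() accepts a Blob directly
      }
    });
    recorder.start(500);  // ~half-second chunks for low latency
  });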

--Martin
