Re: MediaRecorder and using Streams

Hi Jim!

Thank you for your feedback! You're right that the use of Streams certainly
has some issues that need to be thought through, but please keep in mind
that this was just a rough sketch. We wanted to know if there were any
fundamental objections to Streams before we progressed. I take the absence
of loud protests as evidence there possibly aren't :)

To address the points your post raised:

- Streams API unstable
I don't think we're in a particular hurry, so we certainly should deliberate
carefully on whether Streams is the way to go. But if it seems to be a
viable option, I think we should explore the MediaRecorder/Streams combo
further and use the resulting feedback to help shape the Streams API. Even
if it turns out it's not a fit for MediaRecorder, it'll benefit Streams.

As for implementation issues, I do think that the current API can be
converted to a Streams-based one via shims and vice versa, so it's
certainly possible to experiment in either direction without creating
undue churn for UA implementers.
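
To sketch one direction of that (purely illustrative - it assumes a
hypothetical s.available property and a synchronous s.readAsBlob(), as in
the example further down, neither of which is part of the current Streams
draft), existing dataavailable-style consumers could be kept working on
top of a Streams-based recorder with something like:

function shimDataAvailable(recorder, timeslice, ondataavailable) {
  // Record via the (hypothetical) Streams-based API and poll a plain JS timer.
  var s = recorder.record();
  var timer = setInterval(function() {
    if (s.available > 0) {
      // Hand each slice to the old-style callback as a Blob.
      ondataavailable({ data: s.readAsBlob(s.available) });
    }
  }, timeslice);
  return function stop() { clearInterval(timer); recorder.stop(); };
}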

- specifying buffer size/timeslice
It's interesting that you think of timeslices in terms of buffer sizes.
Depending on the format, there may or may not be a predictable relation
between the two, so I wonder if we need to consider both use cases
(a specific size and a specific timeslice).

I'd prefer not to encode the timeslice-based approach in the API, both
because I think there's a need for size- and time-based approaches, and
because it would introduce yet another timer-based API. JS already provides
timers, so I don't think MediaRecorder needs to duplicate that (see the
plain-timer sketch after the example below).

- Examples of the new API/simplicity
I see Greg has already addressed that part. I'll try to create a quick
example for your specific use case. (Using syntax akin to repeat.js for
simplicity)

var s = recorder.record();
Repeat(function() {
  // 'available' is the current unread stream content - not yet part of Streams
  var b = s.readAsBlob(s.available);
  // ... do speech engine things with b
}).every(200, 'ms');
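
And since JS already provides timers, the same loop works without any
helper library - for instance, pushing each slice straight into a socket
for your speech-recognizer case. (Again only a sketch: s.available and a
synchronous s.readAsBlob() are assumptions, and the endpoint is made up.)

var s = recorder.record();
var socket = new WebSocket('wss://example.invalid/speech');  // placeholder endpoint
setInterval(function() {
  if (s.available > 0) {
    // WebSocket.send() accepts Blobs, so each ~200ms slice can be sent as-is.
    socket.send(s.readAsBlob(s.available));
  }
}, 200);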


 - rachel

P.S.: Apologies for the double post.

On Thu, Aug 1, 2013 at 7:21 AM, Jim Barnett <Jim.Barnett@genesyslab.com> wrote:

>  If the Stream API is unstable, as Anne indicates, then it’s too soon to
> move to it.  This doesn’t mean that we couldn’t do so later.  We’re in a
> hurry to get out Media Capture and WebRTC 1.0, but MediaRecorder will lag
> behind, so there’ll be a longer time in which to consider changes.
>
> Most of your arguments for the Stream API seem to be based on
> implementation considerations inside the UA,  so I’m not qualified to
> evaluate them.  Genesys will be a consumer, not an implementer of
> MediaRecorder.  Our main use case is grabbing buffers of audio data and
> sending it off to a remote speech recognizer.  For that purpose, I find the
> Blob API/dataavailable event somewhat easier to use.  You call record
> specifying the buffer size (200ms is a fairly common sample size for speech
> rec engines), and then handle the dataavailable events as they arrive,
> stuffing the data into a socket.  With the Streams API, it looks like you
> have to keep calling readAsX and then blocking till the appropriate amount
> of data becomes available.  That’s not a fatal objection, of course, but it
> may give you some idea why we structured the API the way we did.
>
> I’d like to hear what other implementers think of this.
>
> - Jim
>
> From: groby@google.com [mailto:groby@google.com] On Behalf Of Rachel Blum
> Sent: Wednesday, July 31, 2013 3:06 PM
> To: public-media-capture@w3.org
> Subject: MediaRecorder and using Streams
>
> Greg Billock and I have been taking a look at the MediaRecorder API for
> Chromium, and we were wondering if there’s a reason MediaRecorder doesn’t
> use the Streams API. Intuitively, it seems to make sense, since the end
> product is an encoded video stream.
>
> - There seem to be two different use cases: sliced recording and one-shot
> recording.
>
> - These two cases face different resource constraints. Sliced recording is
> presumably used mostly for streaming purposes, and thus a bit more
> sensitive to resource constraints.
>
> - The current API can easily create an unbounded number of blobs.
>
> - The use of blobs in ondataavailable raises the interesting question of
> the proper MIME type for each blob.
>
> - The possibility of automatically switching to disk-backed blobs raises
> another interesting question, that of quota interaction.
>
> Given these issues, we’ve looked at an alternative approach using Streams
> instead. Specifically, the API would be restructured to allow the following
> calls:
>
> Stream recordToStream(optional StreamBuilder b);
>
> void recordToBlob();
>
> recordToBlob covers the use case of recording in one go - it will simply
> have a finalized Blob available when onstop() is fired.
>
> recordToStream() offers the ability for timesliced recording. It will
> build a Stream populated with the encoded data, either using an internally
> constructed StreamBuilder or using the passed-in StreamBuilder.
>
> Here are the advantages as we see them:
>
> 1) It separates out the two use cases of sliced vs. non-sliced recording.
> The UA is in full control of use of memory vs. disk storage for the
> one-shot recording.
>
> 2) It is more resilient against accidental blob leakage. Instead of having
> to decide for each blob if it should have disk backing or not, this is up
> to the Stream implementation. Users are guided more naturally away from
> writing code that queues up blobs faster than it consumes them - this
> buffering is better left to the Stream object.
>
> 2a) That means the UA can implement “smart” streams that react
> appropriately to resource constraints - discard data up to the next
> I-frame, replace frames with small placeholder frames to indicate they
> were skipped, etc.
>
> 3) Users can still get more control over the creation of the stream if
> they pass a StreamBuilder.
>
> 3a) A lot of the error handling and event processing move into the stock
> File API object.
>
> 3b) A user-built StreamBuilder writing to long-term storage has more
> predictable interaction with the quota system for disk-stored recordings
> without needing to move the Blob resulting from one-shot recording.
>
> 4) There is no type mystery any more - there is exactly one definitive
> type, and it is stored on the Stream. No need to deal with data fragments
> of unknowable type.
>
> If recordToStream is not given the optional StreamBuilder parameter, the
> UA can use an internal implementation.
>
> Would love to hear your thoughts,
>
> - rachel
>

Received on Thursday, 1 August 2013 18:24:42 UTC