Re: Mediastream Recording implementation

On Wed, Jul 17, 2013 at 11:43 AM, Greg Billock <gbillock@google.com> wrote:

> On Tue, Jul 16, 2013 at 3:54 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Wed, Jul 17, 2013 at 10:33 AM, Greg Billock <gbillock@google.com> wrote:
>>
>>> For access to streaming output of the encoder, ArrayBuffer seems to me
>>> more appropriate. It's compatible with XHR2, File, and direct manipulation.
>>>
>>
>> So are Blobs.
>>
>
> I think a bigger issue is more what else you can do with Blobs, like
> createObjectURL. What should that return for a Blob which is a slice of
> encoded stream data? How will clients detect that such blob urls are
> suitable for some uses but not others? It's currently very easy to do
> ArrayBuffer->Blob, but more involved to do the reverse.
>
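For what it's worth, the asymmetry looks like this (a sketch; the Blob-to-ArrayBuffer direction currently needs the asynchronous FileReader):

```javascript
// ArrayBuffer -> Blob: a one-liner via the Blob constructor.
const buffer = new Uint8Array([1, 2, 3, 4]).buffer;
const blob = new Blob([buffer], { type: "application/octet-stream" });

// Blob -> ArrayBuffer: asynchronous, via FileReader.
function blobToArrayBuffer(b, callback) {
  const reader = new FileReader();
  reader.onload = () => callback(reader.result);
  reader.readAsArrayBuffer(b);
}
```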

This is why I suggested making second and subsequent Blobs have a MIME type
of application/octet-stream. (Or, we could invent something else that's
even less generally usable --- e.g. application/partial-resource.)

I don't think there's a real issue of clients having to detect the
difference between a Blob that's a whole resource and a Blob that's part of
a resource --- they should know, based on how they're using MediaRecorder
--- but using a distinct MIME type for the second and subsequent Blob would
partially address that issue.

There's no spec issue over what createObjectURL should do in any of these
cases. Once we've defined the Blobs' contents and MIME type, everything
else follows.
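To make that concrete, here's a sketch of the consuming side under the scheme above: only the first Blob carries the container type, later chunks are application/octet-stream, and reassembly is just Blob concatenation (the chunk data here is made up for illustration):

```javascript
// Reassemble timesliced chunks into the full resource. The whole
// recording takes the container type from the first chunk.
function assembleRecording(chunks) {
  return new Blob(chunks, { type: chunks[0].type });
}

// Simulated chunks, as MediaRecorder's dataavailable might deliver them:
const chunks = [
  new Blob([new Uint8Array([0x1a, 0x45])], { type: "video/webm" }),
  new Blob([new Uint8Array([0xdf, 0xa3])], { type: "application/octet-stream" }),
];
const whole = assembleRecording(chunks);
// whole.type is "video/webm"; URL.createObjectURL(whole) would then
// behave exactly as for any other Blob of that type.
```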

> But that's an artifact of the API design. There could just as easily be two
> methods to accomplish these two use cases. If they're fairly different,
> then JS parameters are not a very strong overloading signal. If they're
> fairly similar, then this is a good solution. So I think this is just
> restating the nub of the problem: figuring out the right intuition about
> how the API should present full-content and sliced encoded data.
>
> Looking elsewhere in webrtc, you can do createObjectURL on a media stream,
> then video/canvas stuff to get bytes out as ArrayBuffer types. Web Audio's
> decodeAudioData method takes an ArrayBuffer, which you're recommended to
> get from XHR using responseType="arraybuffer", and its buffers are
> organized around ArrayBuffer.
>

You can get a Blob as an XHR response, and AudioContext.decodeAudioData
should really have a Blob option. I'm afraid Web Audio's API is poorly
designed in some respects (we're working through some of those issues in
public-audio).
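The workaround today looks roughly like this (a sketch, assuming the callback form of decodeAudioData; if it accepted a Blob directly, the FileReader step would disappear):

```javascript
// Fetch audio as a Blob, then hand decodeAudioData an ArrayBuffer by
// going through FileReader.
function fetchAndDecode(url, audioContext, callback) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.responseType = "blob";
  xhr.onload = () => {
    const reader = new FileReader();
    reader.onload = () =>
      audioContext.decodeAudioData(reader.result, callback);
    reader.readAsArrayBuffer(xhr.response);
  };
  xhr.send();
}
```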

HTMLCanvasElement lets you encode the contents in an image format via
toBlob(), which returns a Blob, and there is no comparable method returning
an ArrayBuffer. (CanvasRenderingContext2D.getImageData returns an array of
pixel channel values, but that makes sense since reading and writing
individual pixel channel values is the entire purpose of that API.)
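So a page that really wants an ArrayBuffer of encoded image data composes the two (a sketch; toBlob delivers its Blob asynchronously via callback):

```javascript
// Encode a canvas as PNG, then convert the resulting Blob to an
// ArrayBuffer with FileReader for callers that want raw bytes.
function canvasToArrayBuffer(canvas, callback) {
  canvas.toBlob((blob) => {
    const reader = new FileReader();
    reader.onload = () => callback(reader.result);
    reader.readAsArrayBuffer(blob);
  }, "image/png");
}
```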

Rob

Received on Wednesday, 17 July 2013 00:06:14 UTC