Re: Should send() be able to take an ArrayBufferView?

On Wed, Apr 11, 2012 at 8:15 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:

> On 4/11/12 5:54 PM, Charles Pritchard wrote:
>
>>> Note that those have different performance characteristics, too; the
>>> latter involves a buffer copy.
>>>
>>
>> Are we stuck with a buffer copy (or copy on write) mechanism anyway?
>>
>
> Yes-ish; the question is how many copies there are.


Well, that's a big "or".  If your ArrayBuffer implementation supports COW,
then in most cases you can avoid making any copies at all (well, until the
data reaches the network layer).  If it doesn't, you're probably stuck with
at least one copy when send() is called.  (But, as you said, if you don't
have COW and you have to create a new ArrayBuffer, then you're looking at
one *more* copy either way.)
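
To make the copy counting concrete, here's a rough sketch of the two
calling patterns (purely illustrative: the endpoint URL and sizes are made
up, and it assumes a send() that accepts an ArrayBufferView as well as an
ArrayBuffer):

    const socket = new WebSocket("wss://example.invalid");  // made-up endpoint
    socket.onopen = () => {
      const buffer = new ArrayBuffer(1024);
      const view = new Uint8Array(buffer, 100, 256);  // a window onto part of the buffer

      // If send() accepts an ArrayBufferView, the view is handed over as-is;
      // any copy (or COW bookkeeping) happens inside the implementation.
      socket.send(view);

      // If send() only takes ArrayBuffer, the caller has to materialize a
      // fresh buffer first, which is the extra copy being counted above.
      const copy = buffer.slice(view.byteOffset, view.byteOffset + view.byteLength);
      socket.send(copy);
    };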


On Wed, Apr 11, 2012 at 8:21 PM, Jarred Nicholls <jarred@webkit.org> wrote:

>> I haven't really either, but if there were some peer-to-peer support,
>> then the receiving peer should still get an ArrayBuffer even if the
>> sender sent an ArrayBufferView.
>>
>
>
> Yes, this is the only approach that would make sense to me.  The receiver
> is just getting a dump of bytes and can consume them however it sees fit.
>  The view makes no difference here.
>

That's not really what happens, though.  WebSocket gives you an ArrayBuffer
if the source was an ArrayBuffer, and a Blob if the source was a Blob, even
though both are really just a bundle of bytes.  That's surprising to me; my
first impression is that the receiver should say which one it wants, much
as XHR does.  (I haven't used WebSocket in practice, though, so take that
first impression with a grain of salt.)
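
For what it's worth, the XHR pattern I have in mind looks something like
this (the URL is just a placeholder): the receiving side picks the binary
representation via responseType, regardless of what produced the bytes.

    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/some-binary-resource");   // placeholder URL
    xhr.responseType = "arraybuffer";           // or "blob": the receiver decides
    xhr.onload = () => {
      const bytes = xhr.response as ArrayBuffer; // .response reflects responseType
      console.log(bytes.byteLength);
    };
    xhr.send();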

-- 
Glenn Maynard

Received on Thursday, 12 April 2012 01:51:41 UTC