
[whatwg] Endianness of typed arrays

From: Boris Zbarsky <bzbarsky@MIT.EDU>
Date: Wed, 28 Mar 2012 02:13:40 -0700
Message-ID: <4F72D644.7030006@mit.edu>
On 3/28/12 2:04 AM, Jonas Sicking wrote:
> Consider a big-endian platform where both the CPU and the GPU are
> big-endian. If a webpage writes 16-bit data into an ArrayBuffer and
> then sends that off to the GPU using WebGL, the data had better be
> sent in big-endian, otherwise the GPU will interpret it wrong.
>
> However, if the same page then writes some 16-bit data into an
> ArrayBuffer and then looks at its individual bytes or sends it
> across the network to a server, it's very likely that the data needs
> to appear little-endian or site logic might break.
>
> Basically I don't know how one would write a modern browser on a
> big-endian system.

What one could do is always store the ArrayBuffer bytes as
little-endian, and then, when sending to the GPU, byte-swap as needed
based on the API call being used (and hence the exact types the GPU
actually expects).

So basically, make all JS-visible state always little-endian, and
handle byte order in the one place where you actually need native
endianness.
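
As a rough illustration of where that swap could live (written as JS
for clarity; in practice this step would be engine-internal, and the
helper below is made up, not any real API):

  // Conceptual engine-side step: the page's buffer is kept
  // little-endian, and just before a WebGL upload of 16-bit data on
  // a big-endian platform, each element is swapped into native byte
  // order in a scratch copy.
  function toNativeEndian16(srcBuf) {
    var src = new Uint8Array(srcBuf);
    var dst = new Uint8Array(src.length);
    for (var i = 0; i + 1 < src.length; i += 2) {
      dst[i]     = src[i + 1];  // swap the two bytes of each
      dst[i + 1] = src[i];      // 16-bit element
    }
    return dst.buffer;
  }
  // The element size comes from the API call itself, e.g.
  // gl.UNSIGNED_SHORT in drawElements, so the engine knows to swap
  // in 16-bit units here rather than guessing from the buffer.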

I believe that was substantially Robert's proposal earlier in this thread.

-Boris
