
Re: Adoption of the Typed Array Specification

From: Kenneth Russell <kbr@google.com>
Date: Tue, 18 May 2010 13:35:01 -0700
Message-ID: <AANLkTikyVBs83bmtuohq6cwjweyR30ozWg3m-GBLuY2r@mail.gmail.com>
To: "Mark S. Miller" <erights@google.com>
Cc: Chris Marrin <cmarrin@apple.com>, arun@mozilla.com, public-script-coord@w3.org, Erik Arvidsson <erik.arvidsson@gmail.com>, es-discuss@mozilla.org, Vladimir Vukicevic <vladimir@mozilla.com>
On Tue, May 18, 2010 at 9:20 AM, Mark S. Miller <erights@google.com> wrote:
>
>
> On Tue, May 18, 2010 at 9:12 AM, Chris Marrin <cmarrin@apple.com> wrote:
>>
>> On May 18, 2010, at 12:09 AM, Jonas Sicking wrote:
>>
>> > Resending this now that I'm signed up on the es-discuss list.
>> >
>> > On Thu, May 13, 2010 at 4:57 PM, Erik Arvidsson
>> > <erik.arvidsson@gmail.com> wrote:
>> >> I'm surprised no one has said this yet but here goes:
>> >>
>> >> ArrayBuffer needs to extend Array. In other words, instances of
>> >> ArrayBuffer need to also be instances of Array:
>> >>
>> >> var ab = new ArrayBuffer;
>> >> assert(ab instanceof ArrayBuffer);
>> >> assert(ab instanceof Array);
>> >>
>> >> You will also need to make sure that all the internal methods are
>> >> defined. See 8.12 Algorithms for Object Internal Methods of ES5. For
>> >> example what does it mean to do [[Delete]] on a byte array?
>> >
>> > My biggest beef with ArrayBuffer is that since it can be cast between
>> > Int8 and Int16, it exposes endianness of the CPU the code is running
>> > on. I.e. it will be very easy to write code that works on Intel CPUs,
>> > but not on Motorola 68K CPUs.
>> >
>> > I don't have much faith that website developers will test the
>> > endianness of the CPU they are running on and manually convert,
>> > rather than cast, if the endianness is "wrong".
>>
>>
>> In developing TypedArrays, we had endless discussions about the endianness
>> problem. We ended up with the conclusion that there is no practical way to
>> avoid it AND keep a reasonable feature set AND not end up with an extremely
>> complex API.
>
> Is a summary of these discussions, and the structure of tradeoffs they
> reveal, written down anywhere? Can they be? These non-portable endianness
> problems are very non-JavaScript-y. If you are asking us to accept these
> problems into JavaScript, we need more than a summary of your conclusions.
> We need to understand why seemingly more pleasant options were not taken. And we
> need to go over these tradeoffs ourselves. We might have a different
> weighting, and so make different choices.

Producing a complete writeup will take time. The discussions are
archived in Khronos teleconference recordings, meeting minutes and
mailing lists that predate the public WebGL list. There are roughly
200 email messages on this topic spread over several threads, and,
looking back through them, it is clear that some of the biggest
decisions were made during conference calls.

Hopefully the summary below will be of some use.

The design requirements were:

1. Support interleaved, heterogeneous vertex data: for example, three
floats for the position followed by four unsigned bytes for the color.
Interleaved vertex data is required for high performance 3D rendering.

2. Support slicing a large region of memory into smaller chunks. Many
OpenGL programs set up vertices for multiple objects in a single
buffer and upload the buffer to the graphics card with one API call
rather than several.

3. Support the fastest possible dynamic generation of vertices.
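
As a rough sketch of requirement (1) — illustrative code, not text from
the spec, though the constructor names follow the TypedArray draft — two
typed views over a single ArrayBuffer give interleaved access:

```javascript
// One vertex: 3 floats for position (12 bytes) + 4 unsigned bytes
// for color = 16 bytes per vertex.
var numVertices = 2;
var vertexSize = 3 * 4 + 4;
var buffer = new ArrayBuffer(numVertices * vertexSize);

// Two views over the same memory; each indexes it with its own
// element size.
var floatView = new Float32Array(buffer);
var byteView = new Uint8Array(buffer);

// Write the first vertex: position at float indices 0..2, color at
// byte indices 12..15.
floatView[0] = 1.0;
floatView[1] = 2.0;
floatView[2] = 3.0;
byteView[12] = 255; // red
byteView[13] = 0;   // green
byteView[14] = 0;   // blue
byteView[15] = 255; // alpha
```

The whole buffer can then be handed to the graphics API in one call,
per requirement (2).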

The early discussions in the WebGL WG attempted to maintain 100%
safety and correctness for the programmer. One proposal was to drop
support for case (1), which was considered unacceptable. Other
proposals involved specifying the vertex layout. One such proposal
involved a "pattern" array where heterogeneous elements would be
indexed. In example (1), above, patternArray[0..2] would index the
first three floats, patternArray[3..6] would index the subsequent four
unsigned bytes, etc. This design was discarded as unoptimizable.

Another proposal involved creating multiple views from the
specification of the vertex layout. Multiple views would still refer
to the same memory region, but the indexing of each view would be
non-contiguous. In case (1) above, two array-like objects would be
created by specifying this vertex format; call them "floatArray" and
"byteArray". floatArray[0..2] would be contiguous in memory;
floatArray[3] would be discontiguous, and the first coordinate of the
second vertex. byteArray[0..3] would be the four bytes of the first
vertex's color; byteArray[4] would be the first color channel of the
second vertex. This approach was discarded for several reasons,
including unoptimizability and incompatibility with the OpenGL API.

The current TypedArray spec meets all of the requirements of OpenGL
and WebGL programmers. See
https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.13
for the full code of example (1). It would be undesirable to lose any
of the current properties in the interest of protecting the
programmer. Higher-level APIs which are completely safe from
endianness issues can be built in ECMAScript on top of these
primitives.
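
For example (a minimal sketch, not from the spec), the host byte order
leaks through over-mapping, and by the same token can be detected from
script:

```javascript
// Store a known 16-bit value, then read its bytes back through a
// Uint8Array view of the same buffer. Which byte comes first depends
// on the host CPU's endianness.
var buf = new ArrayBuffer(2);
var u16 = new Uint16Array(buf);
var u8 = new Uint8Array(buf);
u16[0] = 0x0102;
var littleEndian = (u8[0] === 0x02); // true on x86, false on big-endian CPUs
```

A higher-level library could perform this check once and swap bytes as
needed.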

Note that the typed arrays themselves, since they use the host
platform's native endianness, do not directly support file and network
I/O. File formats specify a particular endianness for multi-byte
values. The DataView interface in the TypedArray spec is an attempt to
address this use case with a different kind of view in which the
methods can still be optimized to a few machine instructions each. The
expectation is that higher-level, stream-like APIs can be built on top
of the DataView in ECMAScript.
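
As a sketch of that use case (illustrative only; the accessor
signatures shown follow the DataView design, which takes an explicit
endianness flag independent of the host):

```javascript
// A 4-byte buffer holding the 32-bit value 0x12345678 in big-endian
// byte order, as it might arrive from a network protocol or file format.
var buf = new ArrayBuffer(4);
var dv = new DataView(buf);
dv.setUint32(0, 0x12345678, false); // false = big-endian

// The bytes are [0x12, 0x34, 0x56, 0x78] regardless of host endianness.
var bytes = new Uint8Array(buf);

// Reading back with the same flag recovers the value on any CPU.
var value = dv.getUint32(0, false);
```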

-Ken

>> I don't think this will be a problem in practical use. The most common use
>> of TypedArrays will not use ArrayBuffers at all. If programmers start
>> mapping different ranges of an ArrayBuffer to different views, they need to
>> have advanced knowledge and they have to use care. If they choose to
>> "over-map" the same range to different views on purpose, they had better
>> really understand what they are doing. If they do it accidentally, they have
>> made a programming error. If that error results in the wrong data being used
>> due to endianness or some other error, I don't think it matters.
>>
>> There is the issue of the naive programmer over-mapping and having it work
>> on some machines and not on others. But I think those cases will be rare. I
>> think the danger of that is worth the features that over-mapping provides.
>>
>> -----
>> ~Chris
>> cmarrin@apple.com
>>
>>
>>
>>
>> _______________________________________________
>> es-discuss mailing list
>> es-discuss@mozilla.org
>> https://mail.mozilla.org/listinfo/es-discuss
>
>
>
> --
>     Cheers,
>     --MarkM
>
> _______________________________________________
> es-discuss mailing list
> es-discuss@mozilla.org
> https://mail.mozilla.org/listinfo/es-discuss
>
>
Received on Tuesday, 18 May 2010 20:35:42 UTC
