
Re: New proposal for fixing race conditions

From: Marcus Geelnard <mage@opera.com>
Date: Fri, 26 Jul 2013 21:06:01 +0200
Message-ID: <CAL8YEv5T906wq77=fUQMuBzE+EVRnfxyTZGhEFD_a+yAuDGJag@mail.gmail.com>
To: Chris Wilson <cwilso@google.com>
Cc: Ehsan Akhgari <ehsan.akhgari@gmail.com>, "Robert O'Callahan" <robert@ocallahan.org>, Jer Noble <jer.noble@apple.com>, Russell McClellan <russell@motu.com>, WG <public-audio@w3.org>
On Fri, Jul 26, 2013 at 6:14 PM, Chris Wilson <cwilso@google.com> wrote:

> First, I apologize for dropping this discussion for a couple of days -
> busy with other things, then got sick and dropped everything.
> On Tue, Jul 23, 2013 at 2:52 PM, Marcus Geelnard <mage@opera.com> wrote:
>> In other words 2x memcpy of the buffer would amount to < 0.4% of the
>> total processing time, and that's for a very trivial operation (for more
>> complex processing, the memcpy would be even less noticeable).
> As noted elsewhere - the performance is a minor concern compared to memory
> footprint and hoops-developers-have-to-jump-through.

Which is exactly what I stated here too - just wanted to share some real
figures to get it out of the way...
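For concreteness, here is a rough back-of-the-envelope sketch (mine, not a real Web Audio benchmark) comparing the cost of copying a one-second stereo 48 kHz Float32 buffer against a trivial per-sample gain pass:

```javascript
// Rough sketch: time a buffer copy (the "memcpy") vs. a trivial
// per-sample gain, for a 1 s stereo 48 kHz Float32 buffer.
const frames = 48000;
const src = new Float32Array(frames * 2);
const dst = new Float32Array(frames * 2);

function copyOnce() {
  dst.set(src); // the "memcpy"
}

function trivialProcess() {
  for (let i = 0; i < dst.length; i++) dst[i] = src[i] * 0.5; // simple gain
}

const N = 1000;
let t0 = process.hrtime.bigint();
for (let i = 0; i < N; i++) copyOnce();
const copyNs = Number(process.hrtime.bigint() - t0) / N;

t0 = process.hrtime.bigint();
for (let i = 0; i < N; i++) trivialProcess();
const procNs = Number(process.hrtime.bigint() - t0) / N;

console.log(`copy: ${copyNs.toFixed(0)} ns, process: ${procNs.toFixed(0)} ns`);
```

Exact numbers will of course vary by engine and hardware; the point is only that the copy is in the same ballpark as (or cheaper than) even trivial processing.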

>> True, this *might* be a problem, but if you're creating an app that even
>> comes close to using 50% of your available memory for audio buffers alone,
>> I'd say that you're in deep trouble anyway (I think it will be very hard to
>> make such an app work cross-platform).
> I could easily write an app that works fine on a desktop, but is
> memory-constrained on mobile devices.

...and if you made it work without the copying overhead on one device,
you'd still have trouble on another device with slightly less memory (e.g.
280 MB available instead of 380 MB). In any case, you'd either have to make
two versions of your app (very-low-end and high-end), or you'd have to come
up with an app design that scales well to low end devices, in which case
the memcpy issue shouldn't be a real problem after all.

>> In fact, here's another thought: With Jer's proposal an implementation is
>> no longer forced to using flat Float32 arrays internally, meaning that it
>> would be possible for an implementation to use various forms of compression
>> techniques for the AudioBuffers. For instance, on a low memory device you
>> could use 16-bit integers instead of 32-bit floats for the audio buffers,
>> which would *save* memory compared to the current design (which prevents
>> these kinds of memory optimizations).
> THAT sounds like something that would create lots of cross-implementation
> behavior differences, and should be avoided.  Or maybe that's just me.

I agree - it would probably have to be specced (either as an explicit
interface, such as
http://www.opengl.org/sdk/docs/man/xhtml/glCompressedTexImage2D.xml , or as
a statement about what quality/precision can be expected from an
AudioBuffer, or something in between).
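To illustrate the kind of internal representation I mean (purely hypothetical - the function names and the 16-bit scaling here are illustrative, not anything from the spec):

```javascript
// Hypothetical sketch: if AudioBuffer contents are only reachable
// through copies, an implementation could store samples as 16-bit
// integers internally, halving memory, and expand to Float32 on read.
function toInt16(f32) {
  const i16 = new Int16Array(f32.length);
  for (let i = 0; i < f32.length; i++) {
    const s = Math.max(-1, Math.min(1, f32[i])); // clamp to [-1, 1]
    i16[i] = Math.round(s * 32767);
  }
  return i16;
}

function toFloat32(i16) {
  const f32 = new Float32Array(i16.length);
  for (let i = 0; i < i16.length; i++) f32[i] = i16[i] / 32767;
  return f32;
}

const original = Float32Array.from([0, 0.5, -0.5, 1, -1]);
const stored = toInt16(original);   // 2 bytes/sample instead of 4
const restored = toFloat32(stored); // small quantization error, half the memory
console.log(stored.byteLength, original.byteLength); // → 10 20
```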

By using a design that requires a copy when interacting with an
AudioBuffer, such functionality would be very easy to add. On the other
hand, forcing AudioBuffer arrays to be shared with the audio engine makes
such future extensions/optimizations much harder.
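A toy sketch of what I mean by copy-on-interaction (this is not the actual AudioBuffer interface, just an illustration of the ownership model):

```javascript
// Toy illustration: a buffer whose contents can only be read or written
// through copies, so the internal storage is never shared with calling
// code and remains free to change representation later.
class CopyingBuffer {
  constructor(length) {
    this._data = new Float32Array(length); // private storage
  }
  copyOut(dest) {
    dest.set(this._data.subarray(0, dest.length)); // copy out
  }
  copyIn(src) {
    this._data.set(src); // copy in
  }
}

const buf = new CopyingBuffer(4);
buf.copyIn(Float32Array.from([1, 2, 3, 4]));
const view = new Float32Array(4);
buf.copyOut(view);
view[0] = 99;          // mutating the copy...
const check = new Float32Array(4);
buf.copyOut(check);
console.log(check[0]); // ...leaves the internal storage untouched: → 1
```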

>> I guess this is what is dividing the group into two camps right now.
>> Personally, I'd much rather go with the potential memory increase than
>> breaking typed arrays (i.e. I'm in the "let's keep the Web webby" camp).
> Please be more concrete in the low-level principles being violated,
> because of course no one would vote against "keeping the Web webby".  But I
> could also say protecting developers from having to deal with low-level
> memory issues is what makes the Web "webby".

Yes, the current model of "the Web" (there's probably a better term) is
just that: protecting developers from having to deal with low-level memory
handling. The first part of it is that JavaScript is a garbage-collected
language. The other part is that there is no way whatsoever to share data
between threads in an uncontrolled manner.

Until we had Web Workers, there was only a single thread (the main thread),
and you were guaranteed that no data would change during one JS event
(which is why, for instance, JS events are uninterruptible). When Workers
entered the scene, the solution to information exchange was message passing
(using cloning, and later neutering), again meaning no shared data between
threads, and a preserved execution model.
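In today's terms, those two message-passing modes can be sketched with structuredClone (assumed available, e.g. Node >= 17; the original Worker mechanism is postMessage, but the copy/neuter semantics are the same):

```javascript
// Sketch of the two worker message-passing modes: cloning copies the
// data; transferring moves it and neuters (detaches) the source, so no
// two threads ever share the same backing store.
const original = new ArrayBuffer(16);

// 1) Cloning: both sides end up with independent copies.
const cloned = structuredClone(original);
console.log(original.byteLength, cloned.byteLength); // → 16 16

// 2) Transfer: ownership moves; the source is neutered (byteLength 0).
const toMove = new ArrayBuffer(16);
const moved = structuredClone(toMove, { transfer: [toMove] });
console.log(toMove.byteLength, moved.byteLength);    // → 0 16
```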

The current suggestion for introducing shared memory between threads (even
if it's "just" between the main JS thread and the audio thread(s)) would
shake that very foundation of the JS execution environment. From the Web
Audio API point of view, it might seem like a minor problem, but I fear
that others will not see it that way.

There are at least two important sides to this:

1) Since the rules of non-volatility would no longer apply to typed arrays,
there may be consequences outside of the Web Audio world - and honestly, we
can't deal with those issues ourselves but need to involve external
interests. For instance, I know that things such as eval() and
setters/getters affect the everyday work of ECMAScript engine developers -
why wouldn't volatile typed arrays? Also, I fear that some other specs
would no longer be correct, and may have to be re-written. We can't naively
assume that volatile arrays will not affect any part of the Web platform
other than Web Audio.

2) Once the first sanctioned instance of volatile typed arrays hits the
Web, this will significantly lower the barrier for other APIs to consider
it as a valid solution ("because the Web Audio API could do it, we can
too"), and I'm pretty sure that would be A Bad Thing (TM).

My fears may be exaggerated, but my point is that it's pointless to debate
further within this group whether it's OK or not to have data races - we
need external assistance to move forward (such as involving TAG, as has
just been done).


> -C
Received on Friday, 26 July 2013 19:06:28 UTC
