Re: Proposal for fixing race conditions

On Thu, Jul 18, 2013 at 5:00 AM, K. Gadd <kg@luminance.org> wrote:

> I feel I have to take issue with this quote in particular, even though I
> don't necessarily disagree with the conclusion you're drawing here:
>
>
> "As the API is designed and has been used for over 2 years, these calling
> patterns are not used and so simply are not an issue.  We do have
> substantial developer experience to support this view, and these developers
> come from a wide range of backgrounds and experience levels from complete
> novices playing with audio for the first time, all the way to seasoned
> professional audio developers."
>
> If two years of use and experience were sufficient, one could have assumed
> that HTML would never contain images (IIRC <img> was added 2-3 years after
> the original development/specification of HTML). The web platform has
> evolved in (often unpredictable) steps over time and will continue to
> evolve. I do not think it is reasonable to argue that 2 years of the Web
> Audio API's availability - and we should be clear that in this context
> 'availability' simply means 'we shipped a prefixed version of an API
> resembling this one, in one browser' - is sufficient to identify and
> predict any potential issues with the specification, in any area. In an
> area like thread safety (not to mention developer friendliness,
> predictability of results, etc.) I think additional caution is always
> merited.
>

It's not just one browser, it is three browsers that I know of: Chrome
(Mac/Windows/Linux/Chrome OS/Android), Safari (desktop/iOS), and Epiphany
(GTK/GNOME).

Each of them has substantially different audio back-end code, and the API
has been implemented in two completely different JS engines (V8 and JSC).
In the case of Chrome versus Safari, there are also two completely
different sandboxing architectures, each radically different and complex in
its own right.  This shows that the API is viable to implement across a
very wide range of environments.

Perhaps you're right that it would be better to have 3 or 4 years of
experience, or more, but 2 years is significant and substantial, and cannot
simply be dismissed as a limited implementation used by a handful of
developers.  The general feedback we've gotten from developers and users
has been quite positive, and they're very eager and excited to see that
Mozilla is now working on an implementation too.



>
> If you simply wish to argue that the effects of these races can be proven
> to always be limited to corrupt audio output, that diminishes the risk
> significantly, so we're probably fine. However, is corrupt audio output
> actually okay? Out of all the users who've been using the API for 2 years,
> how many of them would consider it acceptable if end-users heard garbled
> noise on some machines/configurations or in certain use cases? As I pointed
> out, the presence of OfflineAudioContext effectively guarantees that at
> some point a developer will use Web Audio to generate samples for offline
> use, and rely on the quality of the results. At that point, I think it is
> impossible to argue that corrupt audio is acceptable, especially if it is
> intermittent depending on thread timing. The <audio> implementation
> problems Chrome has had in the past serve as a useful guidepost here, if we
> observe the way developers and end-users reacted to the corrupt audio
> produced in those situations.
>

I don't consider garbled noise a good outcome for any audio system when
the API is used normally.  Nobody wants to hear glitches and audio breakup
when they're just trying to play some content in an <audio> element, for
example.  I think you and I, and probably everyone else, want the best
resilience to glitches on as wide a range of devices as possible.  Of
course, different operating systems present different low-level audio
APIs, some of which are more resilient than others.  There are also quite
marked differences in CPU performance from low-end mobile devices all the
way up to powerful desktop machines.  Nobody can guarantee that
arbitrarily complex audio processing will play back in real time on the
very slowest mobile devices, any more than we can guarantee buttery-smooth
60fps million-polygon WebGL rendering.  What we can do as browser
implementors is optimize as well as we can given all of these limitations,
and so far in the current WebKit/Blink implementations I would say that we
have made very significant optimizations.

Aside from performance issues in any particular browser, if a developer
makes a mistake in code, then an incorrect audio result can be rendered;
perhaps white noise will come out instead of a nice drum pattern.
Mistakes in code can always render the wrong result: maybe you wanted to
draw a large green circle, but a small red circle was drawn instead.  If
so, you go back and find the bug.  I believe the Web Audio API in its
current form allows for many possibilities, both simple and complex, and
it is about as easy to use as it could be given all the jobs it has been
asked to solve.

Robert O'Callahan's artificial and very unrealistic example of modifying
audio buffer data right after playing it is unconvincing as the kind of
problem that will actually worry users of the Web Audio API.

Chris

Received on Thursday, 18 July 2013 18:54:53 UTC