- From: K. Gadd <kg@luminance.org>
- Date: Tue, 14 Jan 2014 13:28:11 -0800
- To: Marcus Geelnard <mage@opera.com>
- Cc: Paul Adenot <padenot@mozilla.com>, Chris Wilson <cwilso@google.com>, Jukka Jylänki <jujjyl@gmail.com>, "public-audio@w3.org" <public-audio@w3.org>
- Message-ID: <CAPJwq3UJ67532LcOOO8wE=Bw2jiAobHhh5Js_5zs2AW-bR49_w@mail.gmail.com>
On Fri, Jan 10, 2014 at 12:53 AM, Marcus Geelnard <mage@opera.com> wrote:

> I agree with Chris here. In general, an implementation has a much better
> chance of making an informed performance trade off decision than a Web
> developer. In most situations I think that int16 would save memory, but add
> some extra CPU overhead. In other words, if a client has loads and loads of
> memory, it could for instance decide to take the memory hit and convert to
> Float32 internally in order to lower the CPU load, either up front or
> lazily.
>
> If a Web page was to make the trade off decision, it would have to know a
> lot more about the device it's running on (such as CPU performance,
> available memory, etc), and since that is not the case, I fear that Web
> devs would resort to things like UA sniffing etc to make distinctions such
> as "desktop" vs "fast-tablet" vs "slow-phone", etc.

We're not talking about web developers who know nothing about performance, though. This is specifically a use case where developers need control over memory representation in order to achieve something approximating performance parity with native audio applications. int16 samples aren't ever going to be the default (right???), so it's not as if the average web developer is going to shoot themselves in the foot without realizing it - if they opt into int16 in a way that harms them, that is unfortunate, but it doesn't mean that the actual use cases for int16 aren't justified.

If int16 buffers don't offer something approximating actual guarantees, you haven't fixed anything - that native port will still have to assume the worst (i.e. using 2x as much memory) and be rewritten to work within a tiny address space, making your int16 buffer optimization nearly meaningless. Sure, the mixer might be slightly faster/slower and the process's resident memory use will be lower, but it won't enable any new use cases, and certain ports will still be out of the question.

P.S. On this subject, does Web Audio allocate memory out of the JS heap in Safari, Chrome or Firefox? It's already bad enough that in many cases it's not possible to allocate more than 1GB of RAM in user JavaScript; if Web Audio is eating away at that with 32-bit samples...

> This is a slightly different issue, namely "What's the lowest quality I
> can accept for this asset?". I can see a value in giving the Web dev the
> possibility to indicate that a given asset can use a low quality internal
> representation (the exact syntax for this has to be worked out, of course).
> The situation is somewhat similar to how 3D APIs allow developers to use
> compressed textures when a slight quality degradation can be accepted. For
> audio, I think that sounds such as noisy or muddy sound effects could
> definitely use a lower quality internal representation in many situations.
> The same could go for emulators that mainly use 8/16-bit low-sample-rate
> sounds.

22khz isn't a "low quality internal representation"; if the signal is actually 22khz, I don't know why you'd want to store it at a higher rate. Lots of actual signals are at sample rates other than 48khz, for reasons like reproducing the sound of particular hardware or going for a certain effect. (Also, isn't the mixer sampling rate for Web Audio unspecified - i.e. it could be 48khz OR 44.1khz? Given this, it makes sense to let users provide buffers at their actual sampling rate and be sure they will be stored that way.)
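To make that concrete, here's a minimal sketch (illustrative only - the silent one-second Float32Array just stands in for real decoded 22khz mono PCM) of handing the context a buffer at the asset's native rate and leaving any resampling to playback time:

```typescript
// Sketch: hand the API a buffer at the asset's native 22050 Hz rate.
// Whether the implementation keeps it at that rate internally - rather than
// upsampling it to the mixer rate - is exactly the question above.
const ctx = new AudioContext();
console.log(ctx.sampleRate);                  // mixer rate: e.g. 44100 or 48000, implementation-chosen

const assetRate = 22050;
const samples = new Float32Array(assetRate);  // placeholder for 1 second of decoded 22khz mono PCM

const buffer = ctx.createBuffer(1, samples.length, assetRate);
buffer.getChannelData(0).set(samples);        // buffer.sampleRate is 22050

const source = ctx.createBufferSource();
source.buffer = buffer;                       // resampled to ctx.sampleRate at playback
source.connect(ctx.destination);
source.start();
```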
The idea of handing Web Audio a 22khz buffer, the implementation upsampling it to 48khz, and then sampling it back down to 22khz for 22khz playback is... unfortunate.

> Having such an option in the API gives the implementation an opportunity
> to save memory when memory is scarce, but it's not necessarily forced to do
> so.

The whole point is to force the implementation to save memory (some rough numbers on what's at stake are sketched below). An application that runs out of memory 80% of the time is not appreciably better than one that does so 100% of the time - end users will consider both unusable.

On this whole subject, it is important to realize that when we're talking about developers porting games and multimedia software from other native platforms, it is usually not wise to assume they are idiots who will shoot themselves in the foot. Yes, developers make mistakes, and they ship broken software that relies on bugs in browser implementations - I can understand the reluctance to give developers more ways to make mistakes. But in these scenarios, we have working applications that do interesting things on native platforms, and if you significantly undermine the Web platform's ability to deliver parity in these scenarios, you're not protecting native app developers from anything; all you're doing is keeping them off the Web and stuck in walled-garden App Stores. Games and multimedia software have an established history of using many of the techniques developers ask for over the course of *decades*, and many of those techniques have been kept because they have proven advantages in terms of performance, ease of use, and reliability.
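To put rough numbers on the "2x as much memory" point above, here's a back-of-the-envelope sketch (the asset size is invented purely for illustration, not measured from any browser):

```typescript
// Back-of-the-envelope: decoded sample storage for a hypothetical game's audio.
const seconds = 10 * 60;                      // say, 10 minutes of music + effects kept resident
const channels = 2;
const sampleRate = 48000;
const totalSamples = seconds * channels * sampleRate;

const float32Bytes = totalSamples * 4;        // Float32 internal representation
const int16Bytes = totalSamples * 2;          // int16 internal representation

const toMiB = (b: number) => (b / (1024 * 1024)).toFixed(0);
console.log(`${toMiB(float32Bytes)} MiB as Float32`);  // ~220 MiB
console.log(`${toMiB(int16Bytes)} MiB as int16`);      // ~110 MiB
// If that storage comes out of a ~1GB effective JS heap, the Float32 copy
// alone is a substantial fraction of everything the application can allocate.
```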
Received on Tuesday, 14 January 2014 21:29:22 UTC