Re: The future and direction of web audio

It makes sense that, as computational power increases over time, these
browser limits can be raised accordingly. Jussi raises a good point:
I think settling on a maximum that the music industry would feel
comfortable with is an important choice.
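
Just to make that concrete, the kind of guard Chris describes below
would presumably look something like this on the implementation side.
This is only a sketch, and the bound values are illustrative, taken
from the 11 kHz to 384 kHz range mentioned further down:

    // Illustrative sketch only: clamp a requested sample rate to
    // implementation-defined bounds before doing any allocation.
    // These bound values are examples, not spec values.
    var MIN_SAMPLE_RATE = 11025;   // example lower bound (~11 kHz)
    var MAX_SAMPLE_RATE = 384000;  // example upper bound (384 kHz)

    function clampSampleRate(requestedRate) {
        return Math.min(Math.max(requestedRate, MIN_SAMPLE_RATE),
                        MAX_SAMPLE_RATE);
    }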

Chris, when you say the sample rates can be quite huge, do you have an
estimate of how huge? For example, I am used to working with 96 kHz audio
in my DAW and software synthesizers. (I know some people go higher still,
though usage falls off at that level because of the cost of hardware that
can handle the computations.)
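
If limits do vary per implementation, a WebGL-style query would let
content adapt. To be clear, getParameter and MAX_SAMPLE_RATE on an audio
context are purely hypothetical here; I'm only mirroring the WebGL
pattern quoted below:

    // Hypothetical WebGL-style limit query. Neither getParameter nor
    // MAX_SAMPLE_RATE exists on the audio context today; this only
    // mirrors how WebGL exposes implementation limits.
    var ctx = new webkitAudioContext();
    var maxRate = ctx.getParameter(ctx.MAX_SAMPLE_RATE);
    if (maxRate >= 96000) {
        // 96 kHz processing is available on this implementation
    }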

Another issue to take into consideration is 64-bit audio. A lot of serious
music producers choose to work *only* in 64-bit, to keep the nuances of
their sounds as close to the original recordings as possible. Where do we
stand with regard to 64-bit?
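
To put a number on what 64-bit costs, here is the back-of-the-envelope
arithmetic (plain multiplication, nothing API-specific):

    // Memory for one minute of stereo audio at 96 kHz:
    // 64-bit doubles take exactly twice the space of 32-bit floats.
    var sampleRate = 96000;  // samples per second, per channel
    var channels = 2;
    var seconds = 60;
    var bytes32 = sampleRate * channels * seconds * 4;  // ~46 MB (32-bit float)
    var bytes64 = sampleRate * channels * seconds * 8;  // ~92 MB (64-bit double)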

Alistair




On Wed, Jul 13, 2011 at 3:12 PM, Chris Rogers <crogers@google.com> wrote:
>
>
> On Tue, Jul 12, 2011 at 10:27 PM, Jussi Kalliokoski
> <jussi.kalliokoski@gmail.com> wrote:
>>
>> Hello guys,
>> Hope you're all having a nice summer! (Get well, Al!) I feel a bit like a
>> broken record for going on with this again, but bare with me.
>> I think there's an important point we need to think about when we're
>> designing this API. If we do this wrong, so many people are going to be
>> really mad at us. Audio APIs have the problem that if you can't do much with
>> the core, you can't do much with the rest either. That's why I've been
>> touting "freedom, freedom..." for some time now. At first, I think it's of
>> utmost importance that we stop premature optimization with this API, it is
>> going to shoot us in our backs, there's no question of it. So next time
>> we're suggesting taking away a freedom – such as the programmer being able
>> to decide the sample rate – think of the justifications. If it's
>> performance, think again, not good enough, not future-proof. Computers are
>> going to get faster, that's inevitable. (Once the graphene sheets hit the
>> market, the processing power of consumer computers is forecast to be
>> 300-3000x the current) Twenty years from now, if people are still using this
>> API, they won't be damning it because it isn't fast enough, but because they
>> can't do what they want with it.
>> I'm not saying we shouldn't keep performance in mind, it's an important
>> thing, but I think no compromises should be made in the name of performance.
>> Same goes a bit for implementation difficulty, but of course, some of us are
>> going to be the ones writing the implementations as well, so I understand
>> you guys disagreeing on such points. This means I have nothing against the
>> idea of pre-defined standard purpose processing nodes, to serve as the basis
>> for games, etc, and to provide extra performance, they're not hurting any
>> freedoms I'm seeing. However, Chris mentioned earlier something about adding
>> a reasonable maximum value for the sample rate and unless there's a good
>> reason for it, I'm not convinced that such a thing should exist.
>
> Concerning sample-rates, the range I'm thinking could be quite large such as
> 11KHz to 384KHz, but at some point when you're writing code in C or C++
> which can crash, lock up the machine, or have security issues then you have
> to protect against wild values such as 10000000KHz and so on.  Believe me,
> we're very conscious of such possibilities as browser implementors.
> If you look at the WebGL specification, there all all kinds of limits, see:
>     const GLenum MAX_VERTEX_ATTRIBS               = 0x8869;
>     const GLenum MAX_VERTEX_UNIFORM_VECTORS       = 0x8DFB;
>     const GLenum MAX_VARYING_VECTORS              = 0x8DFC;
>     const GLenum MAX_COMBINED_TEXTURE_IMAGE_UNITS = 0x8B4D;
>     const GLenum MAX_VERTEX_TEXTURE_IMAGE_UNITS   = 0x8B4C;
>     const GLenum MAX_TEXTURE_IMAGE_UNITS          = 0x8872;
>     const GLenum MAX_FRAGMENT_UNIFORM_VECTORS     = 0x8DFD;
> These values can vary depending on implementation (but must support at least
> a certain amount) and can be accessed something like this:
> glContext.getParameter(glContext.MAX_TEXTURE_SIZE);
> The good news is that the limits on sample-rate can be quite huge to allow
> for just about any conceivable application, and this range could be extended
> as necessary in the future (without changing API).  I'm not suggesting an
> API which locks us into anything, but individual implementations will have
> limits.  We can use WebGL as a guide for how such limits are handled.
> Chris
>

Received on Wednesday, 13 July 2011 20:05:48 UTC