Re: Specificity in the Web Audio API spec

On Thu, Mar 29, 2012 at 3:33 AM, Jussi Kalliokoski <
jussi.kalliokoski@gmail.com> wrote:

> Hello group!
>
> Now that we have already published the second working draft, I think it
> might be worthwhile starting to clarify some bits in the spec.


I think that's a great idea.


> be consistent. We can't trust that browser vendors get "industry standard"
> algorithms right, especially if the "standard" is vague. Even well-defined
> algorithms such as FFT have gone wrong in the past (IIRC a certain Intel
> processor had a bug in its FFT implementation, with severe implications).


You probably meant the FDIV bug?


>
> To demonstrate, let's start with the DelayNode (yes, even something as
> simple as this has a lot of ambiguity):
>
> 1) The time is given in seconds. How is it rounded to samples when
> indexing into the delay buffer? Should it be nearest neighbor? Floored?
> Or is the buffer fixed-size with an adjustable playback rate, resampling
> on the fly? If so, what resampling method should be used?
> 2) "When the delay time is changed, the implementation must make the
> transition smoothly, without introducing noticeable clicks or glitches to
> the audio stream." This part is very vague, what does it mean? Basically,
> you could have the old buffer fade out and new buffer introduced with a
> transition, and it would be according to the spec. Not very desirable. On
> the other hand, the browser could be using exactly the algorithm you had
> planned for it, but if the source data contains clicks, it could be
> interpreted as a bug. So, what should be done to make the transition
> smooth? Should the implementation use a fixed size buffer like I explained
> in the first point?
>

I think you raise some interesting points.  What is the goal here?  Are you
expecting that independent implementations will always produce *exactly*
the same output for the same input?  I don't think the spec is intended to
require a bit-exact implementation across all vendors.  I could be wrong
though; Chris will have the definitive answer.
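To make the ambiguity in point 1 concrete, here is a tiny sketch (my own illustration, not from the spec or any browser) of two reasonable readings of a fractional delay time. At a toy sample rate of 4 Hz, a 0.6-second delay is 2.4 samples, and the two readings give different output for the same input:

```javascript
// Toy parameters, chosen only for illustration.
const sampleRate = 4;            // samples per second
const input = [0, 1, 2, 3, 4, 5, 6, 7];
const delaySeconds = 0.6;        // 2.4 samples at this rate

// Reading A: round the delay to the nearest whole sample.
function delayNearest(x, d) {
  const n = Math.round(d * sampleRate);
  return x.map((_, i) => (i - n >= 0 ? x[i - n] : 0));
}

// Reading B: keep the fractional part and linearly interpolate
// between the two neighboring samples.
function delayLinear(x, d) {
  const t = d * sampleRate;
  const n = Math.floor(t);
  const frac = t - n;
  return x.map((_, i) => {
    const a = i - n >= 0 ? x[i - n] : 0;      // sample at floor(delay)
    const b = i - n - 1 >= 0 ? x[i - n - 1] : 0; // sample at floor(delay)+1
    return (1 - frac) * a + frac * b;
  });
}

console.log(delayNearest(input, delaySeconds)); // delays by exactly 2 samples
console.log(delayLinear(input, delaySeconds));  // delays by 2.4 samples
```

Both are defensible under the current wording, yet a conformance test comparing raw samples would flag one of them as "wrong".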

For your resampling issue, I think that would be a quality-of-implementation
issue.  A good implementation will do a good job and a bad implementation
will do a not-so-good job. This allows different vendors to "compete".
(That's my viewpoint, coming from the cellular industry, where many things
are vaguely specified and you have to work hard to figure out how to make
them work.  Perhaps audio is different.)
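On the "smooth transition" wording in point 2, one possibility (again just my own sketch, not what any browser actually does, and the ramp rate here is an arbitrary assumption) is to slew the delay length toward its target by a bounded amount per sample rather than jumping:

```javascript
// Sketch: move the effective delay (in samples) toward a target value
// by at most maxStepPerSample each sample, so a delayTime change never
// produces a discontinuous jump in the read position.
function makeSmoothedDelay(initialSamples, maxStepPerSample) {
  let current = initialSamples;
  let target = initialSamples;
  return {
    setTarget(t) { target = t; },
    // Advance one sample and return the delay (in samples) to use for it.
    next() {
      const diff = target - current;
      current += Math.max(-maxStepPerSample, Math.min(maxStepPerSample, diff));
      return current;
    },
  };
}

const d = makeSmoothedDelay(10, 0.5); // start at 10 samples, move 0.5/sample
d.setTarget(12);
console.log(d.next(), d.next(), d.next(), d.next(), d.next());
// ramps 10.5, 11, 11.5, 12, then holds at 12
```

Whether the spec should mandate something like this, or leave the ramp shape and rate to each vendor, is exactly the kind of question being raised.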

On the other hand, the WebKit implementation is open source for anyone to
look at and use, so hopefully the algorithms there are good.


> [1] http://code.google.com/p/chromium/issues/detail?id=117699


I have to say, I agree with the conclusion, but the reasoning seems wrong.
I'm pretty sure all Intel chips follow the IEEE 754 spec and give correctly
rounded results (error of at most 0.5 ulp) for the basic operations,
including sqrt.

Ray

Received on Thursday, 29 March 2012 17:31:35 UTC