Re: TPAC F2F and Spec Proposals

From: Olli Pettay <Olli.Pettay@helsinki.fi>
Date: Tue, 18 Oct 2011 11:19:48 +0300
Message-ID: <4E9D36A4.2060208@helsinki.fi>
To: Chris Rogers <crogers@google.com>
CC: public-audio@w3.org
On 10/18/2011 03:47 AM, Chris Rogers wrote:
>
>
> On Mon, Oct 17, 2011 at 4:23 PM, Olli Pettay <Olli.Pettay@helsinki.fi
> <mailto:Olli.Pettay@helsinki.fi>> wrote:
>
>     On 10/14/2011 03:47 AM, Robert O'Callahan wrote:
>
>         The big thing it doesn't have is a library of native effects
>         like the
>         Web Audio API has, although there is infrastructure for
>         specifying named
>         native effects and attaching effect parameters to streams. I
>         would love
>         to combine my proposal with the Web Audio effects.
>
>
>
>     As far as I see Web Audio doesn't actually specify the effects in any
>     way, I mean the algorithms, so having two implementations to do the
>     same thing would be more than lucky. That is not, IMO, something we
>     should expose to the web, at least not in the audio/media core API.
>
>
> I'm a bit perplexed by this statement.  The AudioNodes represent
> established audio building blocks used in audio engineering for decades.

Yes, but without saying what the nodes exactly do.

>   They have very mathematically precise algorithms.
I don't see them in the spec.

Even a simple node like DelayNode doesn't say in what way the
"implementation must make the transition smoothly".
And if implementations don't produce exactly the same
output, using the API in music production would be pretty unreliable.
One couldn't play the same audio data the same way in different
browsers.
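To make the ambiguity concrete, here is a sketch (mine, not from the spec) of ONE possible reading of "make the transition smoothly" for a delay-time change: ramp the delay linearly over some fixed number of samples and read the delay line with linear interpolation. Both the ramp length and the interpolation scheme are choices the spec leaves open, so two implementations choosing differently produce different output samples.

```python
def delayed(signal, old_delay, new_delay, ramp_len):
    """Delay `signal` (a list of floats), ramping the delay in samples
    from old_delay to new_delay over the first ramp_len output samples.

    This is a hypothetical smoothing strategy, not what the Web Audio
    spec mandates: linear ramp of the delay time, linear interpolation
    between neighbouring samples for fractional delays.
    """
    def tap(k):
        # Read the delay line, treating samples before time 0 as silence.
        return signal[k] if 0 <= k < len(signal) else 0.0

    out = []
    for n in range(len(signal)):
        # Current (possibly fractional) delay in samples.
        if n >= ramp_len:
            d = float(new_delay)
        else:
            d = old_delay + (new_delay - old_delay) * n / ramp_len
        pos = n - d
        i = int(pos // 1)
        frac = pos - i
        # Linear interpolation between the two neighbouring samples.
        out.append((1.0 - frac) * tap(i) + frac * tap(i + 1))
    return out
```

A constant delay of 2 samples simply shifts the input; during a ramp, the fractional delay values interpolate between samples, and an implementation using a different ramp length or an allpass interpolator would emit different numbers for the same input.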


> Audio engineering
> and computer music theory has a long tradition, and has been well studied.



>
> Chris
Received on Tuesday, 18 October 2011 08:20:32 GMT