Re: Questioning the current direction of the Web Audio API

Hello,

I would also like to introduce myself and join the conversation. I'm the creator of pedalboard.js, the open source guitar effects framework. I have to say that I'm amazed at what the Web Audio API can do. pedalboard.js is built entirely on the current AudioNodes, and I can say the API is fairly sufficient.

I'm also using it alongside WebRTC at pedals.io for online jamming. In fact, I'm working on a product just like the one described in use case 2.3, "Online music production tool". For most of that functionality, the AudioNodes are efficient. Yes, for "generative" music production the API is still kind of raw. But effects and routing are another important part of the API, and those are handled very well. There are huge opportunities and application possibilities with the current functionality, and I can only say it will get better.
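As a rough illustration (a minimal sketch, not code from pedalboard.js; the curve and parameter values are made up), a simple guitar-style chain can be wired entirely from native nodes:

```js
// Illustrative sketch only: a small effects chain built from native AudioNodes.
// pedalboard.js feeds a live guitar signal into a chain like this; here an
// OscillatorNode stands in for the instrument input.
var context = new (window.AudioContext || window.webkitAudioContext)();

var source = context.createOscillator();      // stand-in for the guitar input
source.frequency.value = 220;

var distortion = context.createWaveShaper();  // overdrive-style soft clipping
var curve = new Float32Array(1024);
for (var i = 0; i < curve.length; i++) {
  var x = (i / (curve.length - 1)) * 2 - 1;   // map index to [-1, 1]
  curve[i] = Math.tanh(3 * x);
}
distortion.curve = curve;

var delay = context.createDelay();
delay.delayTime.value = 0.25;                 // 250 ms slap-back echo

var output = context.createGain();
output.gain.value = 0.8;

// source -> distortion -> delay -> output -> speakers
source.connect(distortion);
distortion.connect(delay);
delay.connect(output);
output.connect(context.destination);
source.start(0);
```

All of the processing above runs in the native audio graph, which is exactly why effects and routing perform so well.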

Cheers,
Armagan



On Oct 18, 2013, at 6:10 PM, Hongchan Choi <choihongchan@gmail.com> wrote:

> Hello All,
> 
> I am Hongchan, the author of the 'cruelly' lacking and 'ugly' WAAX. Since my work was brought up in this thread, I guess I have to defend myself somehow.
> 
> I went down the same path as the OP at some point, simply because I have been a computer musician myself for over a decade. It is all about experiments, and I am very well aware of that.
> 
> The first two revisions of the library were completely based on ScriptProcessorNode; I had to dump them all because they were not usable in real-world production. That was the moment I changed the goal and the design toward something that runs without glitches.
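> 
> To be concrete about why (a toy sketch, not code from those revisions): with a ScriptProcessorNode every sample is produced by a JavaScript callback on the main thread, so any UI work or garbage collection pause turns into an audible glitch.
> 
> ```js
> // Sketch of the ScriptProcessorNode approach: all DSP runs in JavaScript
> // on the main thread, one buffer at a time.
> var context = new (window.AudioContext || window.webkitAudioContext)();
> var node = context.createScriptProcessor(2048, 1, 1);
> var phase = 0;
> 
> node.onaudioprocess = function (e) {
>   var out = e.outputBuffer.getChannelData(0);
>   for (var i = 0; i < out.length; i++) {
>     out[i] = Math.sin(phase) * 0.2;  // naive 440 Hz sine
>     phase += 2 * Math.PI * 440 / context.sampleRate;
>   }
> };
> node.connect(context.destination);
> ```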
> 
> Now that I have been working with this API for a while (even with Chris Rogers himself), I just can't say everything has failed. I would say this API is built for production. On that note, it is just not as useful as PD, SC, or ChucK for experimental purposes.
> 
> Currently I am refining the latest revision (r13) of WAAX, and we (Chris Rogers and I) have put some ideas into it to implement essential building blocks solely based on the native nodes, by utilizing the Web Audio API in different ways. This is not public yet, and hopefully I can wrap up the long-overdue documentation.
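> 
> To give a flavour of what building blocks based solely on native nodes look like (a rough sketch, not the actual WAAX code): a white-noise source, for instance, can be a looping AudioBufferSourceNode instead of a ScriptProcessorNode, so no per-sample JavaScript runs in the audio path.
> 
> ```js
> // Sketch only: white noise from native nodes, no ScriptProcessorNode involved.
> var context = new (window.AudioContext || window.webkitAudioContext)();
> 
> var seconds = 2;
> var buffer = context.createBuffer(1, context.sampleRate * seconds, context.sampleRate);
> var data = buffer.getChannelData(0);
> for (var i = 0; i < data.length; i++) {
>   data[i] = Math.random() * 2 - 1;  // fill the buffer once, up front
> }
> 
> var noise = context.createBufferSource();
> noise.buffer = buffer;
> noise.loop = true;                  // the graph replays it natively
> noise.connect(context.destination);
> noise.start(0);
> ```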
> 
> I am certain that there are many things we can achieve on top of the current design of the Web Audio API. I find that the majority of web audio projects overlook its countless possibilities. Nonetheless, I cannot say the OP is wrong. I had the same complaints and rants once, but I just decided to look at the other side.
> 
> While I am at it, I would like to thank all the people in the audio working group. This is fantastic work!
> 
> Best,
> Hongchan
> 
> 
> 
> 
> On Fri, Oct 18, 2013 at 7:01 AM, s p <sebpiq@gmail.com> wrote:
> Answer from Chris Lowis :
> 
> Hi Sebastien, Thank you very much for getting in touch, it's great to hear from computer musicians and to learn more about your requirements. I'll reply in-line here, but perhaps we could continue the discussion as a group on public-audio@w3.org?
> 
> > ry similar paradigm). It turned out to be pretty much impossible, for the simple reason that the Web Audio API really lacks objects, so I would have to implement most of them using **ScriptProcessorNodes**, and then lose all the benefits of using the Web Audio API (all DSP in one ScriptProcessorNode would be faster).
> 
> Could you clarify what you mean by "objects"? Do you mean node types, and in particular one-to-one mapping to existing nodes within PD - or are you talking about a JavaScript "object" layer on top of Web Audio?
> 
> > The only stab that I know of at implementing a serious sound programming library on top of the other WAA nodes is [waax](https://github.com/hoch/waax). But it cruelly lacks objects, and uses a couple of [ugly hacks](https://github.com/hoch/WAAX/blob/master/src/units/generators/Noise..js#L14).
> 
> I could do with a clarification of "objects" again here, just to help understand what you mean.
> 
> > I love the idea of the Web Audio API. But right now I feel that it really lacks perspective, and a clear direction.
> 
> I think it's fair to say that the Web Audio API targets, at least in its initial "version 1" form, common use cases on the web where previously one might have used Flash, plugins or hacks around the <audio> element. Having said that, there has been a large amount of interest from the computer music community in the API, and there is certainly a lot of interest in developing more in this direction.
> 
> > I'd really like to hear people's opinions about why they do it like that, and how and why they think it can/will be used for real-life applications, because the goals stated in the draft are - in my humble opinion - completely unrealistic with the current functionality.
> 
> Our Use Cases document gives a good idea of the kind of real-life applications we are targeting: https://dvcs.w3.org/hg/audio/raw-file/tip/reqs/Overview.html
> 
> > I am sorry to be a bit harsh and to question this project in its foundations, but I suppose that's what you get for being involved in open standards: any random angry guy out there can come and complain :)
> 
> Not at all. Speaking personally, I think what you are doing is fascinating, and I hope more people will attempt this kind of work with the API in the future. Please keep the discussion going! Cheers, Chris
> 
> 
> 
> -- 
> Hongchan Choi
> 
> PhD Candidate, Research Assistant
> Center for Computer Research in Music and Acoustics (CCRMA)
> Stanford University
> 
> http://ccrma.stanford.edu/~hongchan

Received on Monday, 21 October 2013 13:23:07 UTC