
Re: Resolution to republish MSP as a note

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Sun, 12 Aug 2012 01:52:07 +0300
Message-ID: <CAJhzemXyx43peEdxZJ+ffpri81eYwtgTp+fQq_QqWoeq9PxxCg@mail.gmail.com>
To: Chris Rogers <crogers@google.com>
Cc: public-audio@w3.org
Hey Chris,

I'm not saying it should be a substitute for improving and refining the
> specification.  Of course we need to continue to work on that, and I'm
> pleased to see the constructive feedback from this group so far.  We've
> already incorporated some of those improvements recently and have just
> published the 3rd public working draft.

I'm sorry, my response was a bit uncalled for. :) Obviously I appreciate
the fact that the one implementation we currently have is open source; it
makes a world of difference, especially because it's written by someone
with as long a history in DSP as you have. In fact, I've found the source
quite interesting and helpful, even though I'm probably not going to
implement the API myself any time soon.

> What I meant was that the source-code in WebKit is strong proof that the
> specification actually works in real implementations, addressing the use
> cases for real-world audio applications.  Many developers have used the API
> and have been able to create a wide range of games and applications
> already.  Thus having the source-code available is a very good thing,
> because it provides us with the confidence that we know how to create the
> system that we're designing in the specification, and provides guidance for
> improving the specification when put into the hands of real-world
> developers.  No other alternative APIs or approach so far has come even
> close to going through this critical process.

Yes, you're right.

>> Using emscripten to port it would probably be a bad idea, as for now
>> it doesn't take advantage of the DSP API. Using the DSP API to write an
>> audio framework would mainly be just writing an abstraction wrapper; the
>> important functionality is already there, and the library can just provide
>> a meaningful way to use it.
>>> I'm just trying to provide a simple-to-use, high-level audio API where
>>> audio developers don't have to jump through hoops and where the JS calls
>>> can be combined with other common JavaScript APIs that are available in the
>>> main thread.  Is that such a bad thing?
>> No, it's not such a bad thing, absolutely not. I just think that it isn't
>> in its right place as a web standard proposal, because it doesn't fit the
>> big picture of the web as a whole, as it's not built on existing features
>> nor does it really give any room for future web standards to build on it,
>> it's just a separate entity that provides a few joins to communicate with
>> the platform. Now that's OK, were we designing a user library, indeed I'd
>> much rather see it as a library/framework that was built on a more
>> reusable & modular lower-level API.
> Saying something like "doesn't fit the big picture of the web as a whole"
> is such a matter of opinion.  I would have to disagree and say that people
> are using the Web Audio API today along with: HTMLMediaElement,
> MediaStream, canvas2D, WebGL, WebSockets, CSS animations, XHR2, File API,
> the DOM, Gamepad API, etc. etc.

Perhaps. But being used together with something doesn't really imply
fitting together. I could use libav and libcurl together, but that doesn't
mean they fit together or are designed on the same foundations. Maybe this
can be fixed, though. What bothers me most is how poorly extensible and
reusable the API is (for now, at least) from a library writer's
perspective. The API tries too hard to be a fit-everywhere API, which to me
is a bad idea for a high-level API, because that usually invites bloat.
When people have a high-level API, they expect it to have everything ready,
not to have to wire up dozens of nodes for a vocoder. Obviously I have no
hard evidence of this, other than what I've heard about ffmpeg. But we
already have issues filed for most of those problems.
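To make the "dozens of nodes for a vocoder" point concrete, here is a rough sketch of a crude channel vocoder built out of the current node vocabulary. The function names (`vocoderBandFrequencies`, `buildVocoder`) and the band layout are mine; the node factory names follow the spec drafts (older WebKit builds use e.g. `createGainNode` instead of `createGain`), and the rectifier curve is left out for brevity:

```javascript
// Log-spaced band center frequencies between lo and hi Hz.
function vocoderBandFrequencies(numBands, lo = 200, hi = 4000) {
  const freqs = [];
  for (let i = 0; i < numBands; i++) {
    freqs.push(lo * Math.pow(hi / lo, i / (numBands - 1)));
  }
  return freqs;
}

// Hypothetical sketch: per band, band-pass the carrier, band-pass the
// modulator, rectify and smooth the modulator into an envelope, and use
// that envelope to drive the band's gain via AudioParam connection.
function buildVocoder(ctx, modulator, carrier, numBands = 16) {
  const out = ctx.createGain();
  for (const f of vocoderBandFrequencies(numBands)) {
    const carrierBP = ctx.createBiquadFilter(); // band-pass the carrier
    carrierBP.type = "bandpass";
    carrierBP.frequency.value = f;
    const modBP = ctx.createBiquadFilter();     // band-pass the modulator
    modBP.type = "bandpass";
    modBP.frequency.value = f;
    const follower = ctx.createWaveShaper();    // rectifier (curve omitted)
    const smooth = ctx.createBiquadFilter();    // envelope smoothing
    smooth.type = "lowpass";
    smooth.frequency.value = 30;
    const bandGain = ctx.createGain();          // carrier level for this band
    bandGain.gain.value = 0;
    carrier.connect(carrierBP);
    carrierBP.connect(bandGain);
    modulator.connect(modBP);
    modBP.connect(follower);
    follower.connect(smooth);
    smooth.connect(bandGain.gain);              // envelope drives the gain
    bandGain.connect(out);
  }
  return out; // ~5 nodes per band, so around 80 nodes for 16 bands
}
```

That's on the order of eighty nodes for sixteen bands, which is exactly the kind of plumbing I'd expect a high-level API to have done for me already.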

Another thing, however, is how the API does everything its own way: for
example, not using EventTarget, or not being based on MediaStreams. The
implications of that are actually quite restricting if you think in terms
of how you can interact with other APIs on the web. For example, if all the
graphs were streams, they could be piped as media streams to a peer over
WebRTC. Currently the graph restricts you to real-time audio. If graphs
were streams, I think it would also be easier to design DAW plugin systems:
you could use the Web Audio API there too, the stream would pass through
the graph in the plugin, and the plugin wouldn't need to know where the
audio is coming from or where it's going.
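The closest thing to this today is terminating a graph in a MediaStream destination node and handing its stream to a peer connection. A sketch, assuming a `createMediaStreamDestination` factory on the context and the current `addStream` method on RTCPeerConnection (both names may well change as the drafts evolve); the wrapper function name is mine:

```javascript
// Hypothetical sketch: route a Web Audio graph's output over WebRTC by
// ending the graph in a node whose output is itself a MediaStream.
function sendGraphOverWebRTC(ctx, sourceNode, peerConnection) {
  const dest = ctx.createMediaStreamDestination(); // graph -> MediaStream
  sourceNode.connect(dest);
  peerConnection.addStream(dest.stream);           // MediaStream -> peer
  return dest.stream;
}
```

If the graph itself *were* a stream, no such adapter node would be needed: the plugin's graph would just be one more stage in the pipe.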

Actually, I think this discussion was worth having after all. :) I got
quite a few ideas from it.

Received on Saturday, 11 August 2012 22:52:35 UTC
