- From: Chris Rogers <crogers@google.com>
- Date: Tue, 1 Feb 2011 16:06:14 -0800
- To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- Cc: "Tom White (MMA)" <lists@midi.org>, public-xg-audio@w3.org
- Message-ID: <AANLkTinTLbFNeGhyX9nMZ37zogaJuYcThUTuckFDUDZ4@mail.gmail.com>
On Tue, Feb 1, 2011 at 3:38 PM, Silvia Pfeiffer
<silviapfeiffer1@gmail.com> wrote:

> >> > * superset of Audio Data API functionality
> >>
> >> That's an unfair comparison: the Web Audio API is in no way shape or
> >> form a superset of the Audio Data API functionality. For one: it
> >> doesn't integrate with the Audio() API of the existing <audio> element
> >> of HTML5.
> >
> > When I say "superset" I mean in functionality, not in the actual API
> > itself. Put in other words, any application written using the Audio
> > Data API should be possible to write with the Web Audio API.
>
> This is what I meant by being unfair: I'm 100% sure that everything
> that is possible in the Web Audio API is possible in the Audio Data
> API and vice versa. Performance may differ, but the functionality is
> possible. Therefore, we should not be using this as an argument for or
> against one or the other.

I think that performance is a valid argument for using one versus the
other. Using that same logic, one could argue that directly manipulating
pixels using ImageData is sufficient to get any kind of graphics rendering
that is possible in WebGL.

> > The Web Audio API *does* interact with the <audio> tag. Please see:
> >
> > http://chromium.googlecode.com/svn/trunk/samples/audio/specification/specification.html#MediaElementAudioSourceNode-section
> >
> > And the diagram and example code here:
> >
> > http://chromium.googlecode.com/svn/trunk/samples/audio/specification/specification.html#DynamicLifetime-section
> >
> > To be fair, I don't have the MediaElementSourceNode implemented yet,
> > but I do believe it's an important part of the specification.
>
> None of this hooks into the <audio> element and the existing Audio()
> function of HTML5: see
> http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#audio
> It creates its own AudioNode() and AudioSourceNode().
> This is where I would like to see an explicit integration with HTML5
> and not a replication of functionality.

I'm not sure what your point is. MediaElementSourceNode has a very direct
relationship with an <audio> element: it uses one as its source.

> >> And finally, the Web Audio API only
> >> implements a certain set of audio manipulation functions in C/C++ - if
> >> a developer needs more flexibility, they have to use the JavaScript
> >> way here, too.
> >
> > This is true, but I think the set of functions will be useful in a
> > large set of applications. They can use custom JavaScript processing
> > in special cases.
>
> There is no doubt. I agree that these functions are useful and it will
> be very important to have them in C/C++ and be able to build a filter
> graph. What I'm trying to achieve is fairness in the discussion
> between the two APIs and the realization that both approaches are
> important to achieve.

I agree that both approaches are important, but I think you're unfairly
glossing over the concrete work I've done to integrate the "processing
directly in JavaScript" paradigm into the Web Audio API. Both approaches
are important and useful, and I have working demos illustrating both types
of processing.

> >> My description of the comparison is that the Audio Data API is a
> >> low-level API that allows direct access to samples and to manipulating
> >> them in JavaScript with your own features. It does require either a
> >> good JavaScript library or a good audio coder to achieve higher-level
> >> functionality such as frequency transforms or filters. But it provides
> >> the sophisticated audio programmer with all the flexibility - alas
> >> with the drawback of having to do their own optimisation of code to
> >> achieve low latency.
> >
> > I agree with most of this except the part about latency and JavaScript
> > optimization. There are other factors at play having to do with
> > threading, garbage collection, etc.,
> > which make latency a nagging issue no matter how much the JavaScript
> > code is optimized.
>
> Possibly. But I don't think that's per se an argument against that
> interface. Many examples have been shown with the Audio Data API where
> latency did not occur or was not an issue. Just like Canvas and SVG
> have advantages and disadvantages for specific situations, this is
> also the case here. To me it clearly is not a matter of either/or, but
> a matter of getting both.

Yes, for *some* applications latency doesn't matter that much. I also
agree it's a matter of getting both. I believe that I'm offering both with
my API and have working demos of each of the two approaches (native and
direct JavaScript processing).

> >> In comparison, the Web Audio API is built like traditional audio
> >> frameworks as a set of audio filters that can be composed together in
> >> a graph and then kicked off to let the browser take advantage of its
> >> optimised implementations of typical audio filters and achieve the
> >> required low latency. By providing a necessarily limited set of audio
> >> filters, the audio programmer is restricted to combining these filters
> >> in a way that achieves their goals. If a required filter is not
> >> available, they can implement it in JavaScript and hook it into the
> >> filter graph.
> >
> > I think that's pretty accurate, but I think in many (probably most)
> > applications it will never be necessary to write custom DSP code in
> > JavaScript, since the provided filters have been proven very useful
> > through decades of use in real-world audio applications.
>
> That would be an advantage. Just like the SVG functions also help to
> satisfy most graphics use cases. But not all, which is what I am
> trying to point out here, too.
>
> >> In my opinion, the difference between the Web Audio API and the Audio
> >> Data API is very similar to the difference between SVG and Canvas.
> >> The Web Audio API is similar to SVG in that it provides "objects"
> >> that can be composed together to create a presentation. The Audio
> >> Data API is similar to Canvas in that it provides pixels to
> >> manipulate. Both have their use cases and community. So, similarly,
> >> I would hope that we can get both audio APIs into HTML5.
> >
> > I've tried to incorporate the features of the Audio Data API into the
> > Web Audio API with the introduction of JavaScriptAudioNode and
> > MediaElementAudioSourceNode. So, in a sense, I believe we already have
> > the required features which you desire.
>
> Working with the API, I have felt it clunky and not quite integrated
> with the existing HTML5 specification yet, when in contrast the Audio
> Data API has extended the Audio() element with a few extra fields and
> an event to make it all happen. I believe there would be a better way
> to take a similar approach where we don't actually need an
> AudioContext() and the Audio() element already creates an
> AudioContext(). That would make the API a lot more elegant and would
> remove some replication.

Well, I can't argue with your personal opinion about how the API felt to
you :) But I don't think that everything audio-related needs to be jammed
into the <audio> tag. Its API was not designed from the ground up to handle
these more advanced use cases. There is a whole pantheon of graphics-related
DOM elements and APIs serving different purposes. They don't all have to be
intimately involved with an <img> tag. Similarly, I don't believe everything
audio-related needs to be pushed into the <audio> tag, which was, after all,
designed explicitly for audio streaming.

Believe me, I've looked carefully at the <audio> API and believe I've
achieved a reasonable level of integration with it through the
MediaElementSourceNode. It's practical and makes sense to me. I think this
is just one area where we might disagree.

Chris
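[Editor's illustration] The contrast the thread keeps circling back to, composing prebuilt filter nodes versus processing raw samples in a JavaScript callback, can be sketched in plain JavaScript. This is a toy model only: the function names below are hypothetical stand-ins, not the real Web Audio API (GainNode, JavaScriptAudioNode) or Audio Data API surface, and no browser is involved.

```javascript
// (a) Filter-graph style: small processing nodes composed into a chain,
// loosely mimicking how native nodes are connected in the Web Audio API.
function gainNode(gain) {
  // Returns a node that scales every sample by a constant factor.
  return (input) => input.map((s) => s * gain);
}

function onePoleLowpass(alpha) {
  // y[n] = y[n-1] + alpha * (x[n] - y[n-1]): a crude smoothing filter.
  return (input) => {
    const out = new Float32Array(input.length);
    let y = 0;
    for (let n = 0; n < input.length; n++) {
      y = y + alpha * (input[n] - y);
      out[n] = y;
    }
    return out;
  };
}

// Compose nodes into a (purely linear) processing graph.
function connect(...nodes) {
  return (input) => nodes.reduce((buf, node) => node(buf), input);
}

// (b) Direct-processing style: a callback transforms raw samples one by
// one, as with the Audio Data API or a JavaScriptAudioNode callback.
function processDirectly(input, callback) {
  const out = new Float32Array(input.length);
  for (let n = 0; n < input.length; n++) out[n] = callback(input[n], n);
  return out;
}

// One second of a 440 Hz sine at 44100 Hz as test input.
const sampleRate = 44100;
const input = Float32Array.from({ length: sampleRate }, (_, n) =>
  Math.sin((2 * Math.PI * 440 * n) / sampleRate)
);

const graph = connect(gainNode(0.5), onePoleLowpass(0.2));
const viaGraph = graph(input);
const viaCallback = processDirectly(input, (s) => s * 0.5);
```

Either route yields the same kind of Float32Array output; the disagreement in the thread is about who supplies the DSP (the browser's optimised native code or the page's JavaScript) and at what latency cost.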
Received on Wednesday, 2 February 2011 00:06:45 UTC