
Re: Web Audio API spec review

From: Chris Wilson <cwilso@google.com>
Date: Fri, 18 May 2012 10:10:49 -0700
Message-ID: <CAJK2wqWY1Ub8zduxJz=hvmsE4mZ-CkKLOhVF0AcJ8hDV7J-4kw@mail.gmail.com>
To: olli@pettay.fi
Cc: robert@ocallahan.org, public-audio@w3.org, Chris Rogers <crogers@google.com>, Philip Jägenstedt <philipj@opera.com>

For myself, I would answer "because I don't always see the value in taking
that overhead."  Sometimes, it makes sense - but a lot of the time, it's
just programming overhead.

This is very apparent to me when discussing the potential MIDI API we'd
like to design.  In a conversation we had on this list months ago, Rob
suggested (I'm paraphrasing) that if you treat MIDI as a MediaStream, it
becomes quite apparent how to implement a software synthesizer: it's simply
a stream processor that converts a MIDI stream into an audio stream.  I
understood that, but that's only one of the use cases - and centering the
whole MIDI API around streams seemed like a steep hill to climb, with no
value for someone who just wants to get note inputs from a controller (e.g.
in a piano lesson application) or send outputs to control DMX lighting in a
stage setup.
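To make the "note inputs from a controller" case concrete, here is a minimal
sketch of the callback-level code such an app wants to write. No browser MIDI
API existed to cite here, so the function shape is hypothetical; only the MIDI
wire format itself (status byte, note number, velocity) is standard:

```javascript
// Hypothetical sketch: decode a raw three-byte MIDI channel-voice message
// into a plain note event, with no stream plumbing involved.
function decodeMidiMessage(bytes) {
  const status = bytes[0] & 0xf0;          // high nibble: message type
  const channel = bytes[0] & 0x0f;         // low nibble: MIDI channel
  if (status === 0x90 && bytes[2] > 0) {   // note-on with nonzero velocity
    return { type: "noteon", channel, note: bytes[1], velocity: bytes[2] };
  }
  if (status === 0x80 || (status === 0x90 && bytes[2] === 0)) {
    // note-off, or the common note-on-with-velocity-0 convention
    return { type: "noteoff", channel, note: bytes[1] };
  }
  return { type: "other", channel };
}
```

An input port would simply hand each incoming message to a decoder like this
from a message callback; no stream graph needs to be constructed at all.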

In much the same way that the MSP proposal treated "the effects graph" as a
box (e.g. in example 11 here:
<https://dvcs.w3.org/hg/audio/raw-file/tip/streams/StreamProcessing.html#examples>),
I think I see streams as a management system above what I'd usually want to
be doing. As Chris said, we can see the value of thinking in streams in RTC
and capture cases, and that's why they've been described:
<https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html>;
but I'm less convinced that a game audio system (or a drum machine app)
should necessarily need to start with streams as the foundation, and I'd
point out how much clearer some of the samples in that document are without
streams overhead (e.g. examples 9, 10 and 11). I certainly don't see a
reason to prevent an implementation from thinking entirely in streams, but
I just don't really see the value in forcing app developers to think that
way every time they want to trigger a sound (and I'm also a little
uncomfortable using the <audio> element as the basis for all sounds, as
it's pretty heavyweight for short samples).
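To illustrate how lightweight "trigger a sound" is in the node-graph style,
here is a rough sketch using the Web Audio API names of the time (noteOn()
was later renamed start(); the playSample helper itself is my invention,
not anything from the spec):

```javascript
// Sketch of the fire-and-forget case: one cheap source node per trigger,
// wired straight to the destination, with no stream abstraction in sight.
// The context is passed in so the graph wiring is the whole story.
function playSample(ctx, buffer, when) {
  const source = ctx.createBufferSource(); // throwaway one-shot node
  source.buffer = buffer;                  // previously decoded audio data
  source.connect(ctx.destination);         // graph: source -> output
  source.noteOn(when || 0);                // schedule (later renamed start())
  return source;
}
```

In a page you'd call this with a real audio context and a decoded
AudioBuffer; each trigger just creates and starts one disposable node.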

-C
On Fri, May 18, 2012 at 2:29 AM, Olli Pettay <Olli.Pettay@helsinki.fi> wrote:

> On 05/16/2012 01:12 PM, Robert O'Callahan wrote:
>
>> On Wed, May 16, 2012 at 9:42 PM, Olli Pettay <Olli.Pettay@helsinki.fi> wrote:
>>
>>    Let me re-phrase this to a question.
>>    Are there any technical reasons why an API very similar to WebAudioAPI
>> couldn't be
>>    built on top of an API very similar to MSP?
>>
>>
>> I think not.
>>
>> We'll try to implement it that way.
>>
>
>
> So why not define the WebAudioAPI spec on top of the MSP spec?
>
>
>
> -Olli
>
>
>
>
>> Rob
>> --
>> “You have heard that it was said, ‘Love your neighbor and hate your
>> enemy.’ But I tell you, love your enemies and pray for those who persecute
>> you,
>> that you may be children of your Father in heaven. ... If you love those
>> who love you, what reward will you get? Are not even the tax collectors
>> doing
>> that? And if you greet only your own people, what are you doing more than
>> others?” [Matthew 5:43-47]
Received on Friday, 18 May 2012 17:11:20 GMT
