Re: Web Audio API spec review

Perhaps we are inferring different things from "defined on top of
MSP/designed to work on top of MSP."  The latter of these I would
translate as "designed to work naturally together with MSP in scenarios
that use MSP".  I think that absolutely is a goal; in fact, it's the
motivation behind the WebRTC integration with MediaStream methods on
AudioContext - i.e., I think that's being done, and in the samples Chris
converted from Robert's, it seems pretty straightforward and integrated
to me.  Am I missing some scenario, or misunderstanding how this
complicates something?
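
For concreteness, here's roughly the shape of integration I have in mind.
This is a sketch only - the method names follow the webrtc-integration
draft, constructors may still be vendor-prefixed in current builds, and
sendToPeerConnection() is just a placeholder for whatever the app does
with the resulting stream:

    // Pull microphone input into the Web Audio graph, process it with
    // ordinary nodes, and hand the result back out as a MediaStream.
    var audioContext = new AudioContext();

    navigator.getUserMedia({ audio: true }, function (micStream) {
      // MediaStream -> Web Audio graph
      var source = audioContext.createMediaStreamSource(micStream);

      // ordinary Web Audio processing
      var filter = audioContext.createBiquadFilter();
      source.connect(filter);

      // Web Audio graph -> MediaStream, e.g. to hand to a PeerConnection
      var destination = audioContext.createMediaStreamDestination();
      filter.connect(destination);
      sendToPeerConnection(destination.stream); // app-defined placeholder
    }, function (error) {
      console.error("getUserMedia failed:", error);
    });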

"Built on top of" implies a layered approach that, TBH, I don't think
applies - or if it does, it's unclear that MSP is really underneath.  I do
believe they should work very well together, as I believe Chris does also.
 I don't think that every channel of data should necessarily be forced into
the Media Stream object (e.g. MIDI).  To answer your "what overhead"
question - the programming overhead of having to understand and force all
scenarios into Media Streams, when they don't fit that paradigm (again,
e.g. MIDI).
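
To make the MIDI point concrete: the use case I care about looks more
like a simple event callback than a stream graph.  The API shape below is
completely hypothetical - we haven't designed the MIDI API yet, so every
name here is invented purely for illustration:

    // Hypothetical: a piano-lesson app that only wants note events from
    // a controller.  No audio, no streams - just input events.
    navigator.requestMIDIAccess(function (midiAccess) {
      var input = midiAccess.inputs[0]; // hypothetical: first attached device

      input.onmessage = function (event) {
        var status = event.data[0] & 0xf0; // strip the channel nibble
        var note = event.data[1];
        var velocity = event.data[2];
        if (status === 0x90 && velocity > 0) {   // note-on
          checkStudentPlayedCorrectNote(note);   // app-defined placeholder
        }
      };
    });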

-C


On Sun, May 20, 2012 at 9:26 AM, Olli Pettay <Olli.Pettay@helsinki.fi> wrote:

> On 05/18/2012 08:10 PM, Chris Wilson wrote:
>
>> For myself, I would answer "because I don't always see the value in
>> taking that overhead."
>>
> What overhead?
>
>
>> Sometimes, it makes sense - but a lot of the time, it's
>> just programming overhead.
>>
> It depends on the API whether there is any overhead.
>
> I'm just asking why the WebAudio API couldn't be designed to work on top
> of MSP.
> The actual API for JS developers could look very much the same as it is
> now (well, except JS-generated audio data, which we really should push
> to workers).
> I believe it would ease implementing the specs (including other specs
> for audio/streams) if all the specs 'spoke the same language'.
>
> -Olli
>
>> This is very apparent to me when discussing the potential MIDI API we'd
>> like to design.  From a conversation we had on this list months ago, Rob
>> suggested that (I'm paraphrasing) if you treated MIDI as a Media Stream,
>> it's quite apparent how to implement a software synthesizer - it's simply a
>> stream that converts a MIDI stream to an audio stream.  I understood
>> that, but that's only one of the use cases - and centering the whole MIDI
>> API around Streams seemed like a steep hill to climb, with no value for
>> someone who just wants to get note inputs from a controller (e.g. in a
>> piano lesson application) or send outputs to control DMX lighting in a
>> stage setup.
>>
>> In much the same way that the MSP proposal treated "the effects graph" as
>> a box (e.g. in example 11 here
>> <https://dvcs.w3.org/hg/audio/raw-file/tip/streams/StreamProcessing.html#examples>),
>> I think I see streams as a management system above what I'd
>> usually want to be doing. As Chris said, we can see the value of thinking
>> in streams in RTC and capture cases, and that's why they've been described
>> <https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html>;
>> but I'm less convinced that a game audio system (or a drum machine app)
>> should necessarily need to start with streams as the foundation, and I'd
>> point out how much clearer some of the samples in that document are without
>> streams overhead (e.g. examples 9, 10 and 11). I certainly don't see a
>> reason to prevent an implementation from thinking entirely in streams, but I
>> just don't really see the value in forcing app developers to think that
>> way every time they want to trigger a sound (and I'm also a little
>> uncomfortable using the <audio> element as the basis for all sounds, as
>> it's pretty heavyweight for short samples).
>>
>> -C
>>
>> On Fri, May 18, 2012 at 2:29 AM, Olli Pettay <Olli.Pettay@helsinki.fi> wrote:
>>
>>    On 05/16/2012 01:12 PM, Robert O'Callahan wrote:
>>
>>        On Wed, May 16, 2012 at 9:42 PM, Olli Pettay
>>        <Olli.Pettay@helsinki.fi> wrote:
>>
>>            Let me re-phrase this as a question.
>>            Are there any technical reasons why an API very similar to
>>            WebAudioAPI couldn't be built on top of an API very similar
>>            to MSP?
>>
>>
>>        I think not.
>>
>>        We'll try to implement it that way.
>>
>>    So why not define the WebAudioAPI spec on top of the MSP spec?
>>
>>    -Olli
>>
>>        Rob
>>        --
>>        “You have heard that it was said, ‘Love your neighbor and hate
>>        your enemy.’ But I tell you, love your enemies and pray for those
>>        who persecute you, that you may be children of your Father in
>>        heaven. ... If you love those who love you, what reward will you
>>        get? Are not even the tax collectors doing that? And if you greet
>>        only your own people, what are you doing more than others?”
>>        [Matthew 5:43-47]

Received on Monday, 21 May 2012 00:28:52 UTC