Re: Requirements for Web audio APIs

On Tue, May 24, 2011 at 11:08 AM, Chris Rogers <crogers@google.com> wrote:

> On Mon, May 23, 2011 at 2:54 PM, Robert O'Callahan <robert@ocallahan.org>wrote:
>
>> On Tue, May 24, 2011 at 5:42 AM, Chris Rogers <crogers@google.com> wrote:
>>
>>> Rob, I'm afraid I'll just have to strongly disagree.  The Stream API is
>>> designed specially for peer-to-peer communication.  Superficially it has
>>> sources and sinks of data like the Web Audio API, but if you really read
>>> both specs carefully their design goals are quite distinct. The Web Audio
>>> API has had a great deal of care in its design to address a variety of use
>>> cases for client side games, music, and interactive applications.  The
>>> Stream API does not replace its functionality.  Does it have the concept of
>>> intermediate processing modules, routing graphs, implicit unity gain summing
>>> junctions, fanout support, channel mix-up and mix-down, sample-accurate
>>> timing, among many other design features? -- no it does not.
>>
>>
>> Indeed. And the Web Audio API does not have support for capturing local
>> audio, recording and compressing processed audio, or sending processed audio
>> over the network (and we have only a sketch for how it might handle A/V
>> synchronization). We need to fix these deficiencies in both specs by
>> bringing them together somehow.
>>
>
> Capturing local audio is simply a matter of getting the user's permission.
>  It's straightforward to have an AudioNode for this source which can blend
> seamlessly into the current processing architecture in the Web Audio API.
>  A/V sync is clearly something to be handled at the HTMLMediaElement API
> level and you're free to propose an API and describe a plausible
> implementation if you like.  Recording/mixdown is something which can
> currently be done via a JavaScriptAudioNode and the file API.  Compressing
> processed audio could be done with an API like this:
>
> var arrayBuffer = audioBuffer.createCompressedAudioData(audioFormat);
> // Then send arrayBuffer with XMLHttpRequest
>
> Sending processed audio over the network (in the peer-to-peer
> communications case) is something still to be worked out in the details.
>  But I wouldn't assume that the Web Audio API and the Stream API can't be
> reconciled in a reasonable way.
>

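(An aside for concreteness: the recording/mixdown route quoted above, accumulating blocks from a JavaScriptAudioNode's processing callback and then concatenating them for export via the File API, might be sketched as follows. The helper names here are my own invention, and the actual graph setup and Blob wrapping are stubbed out.)

```javascript
// Hypothetical sketch of recording via a JavaScriptAudioNode:
// each processing callback hands us a block of samples, which we
// copy and accumulate; at the end we concatenate everything into
// one buffer that could be wrapped in a Blob for the File API.
var recordedBlocks = [];

function handleAudioProcess(samples) {
  // Copy the block, since the engine may reuse the underlying buffer.
  recordedBlocks.push(new Float32Array(samples));
}

function finishRecording() {
  var total = 0;
  for (var i = 0; i < recordedBlocks.length; i++)
    total += recordedBlocks[i].length;
  var out = new Float32Array(total);
  var offset = 0;
  for (var j = 0; j < recordedBlocks.length; j++) {
    out.set(recordedBlocks[j], offset);
    offset += recordedBlocks[j].length;
  }
  return out;
}
```

In a real page, handleAudioProcess would be driven by the node's audio-processing event, and finishRecording's result handed off to the File API for download or upload.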
I can see the attraction of duplicating Stream functionality into the Web
Audio API; it's often easier in the short term to duplicate functionality
than to reconcile or unify specs ... but in the long term we usually regret
it. (We've had this happen on the Web before with SVG vs HTML+CSS; but here
we have the opportunity to fix the problem before it escapes into the
world!)

> I still hope that you will consider the Web Audio API proposal as a good
> starting point.
>

I think that any extension of Streams with audio processing functionality
should certainly use the semantics of the Web Audio API as a starting point!
As far as I can tell, most of what's in your spec can be translated intact,
but we won't know for sure until it's implemented.

I don't think you're addressing my main point, which is that the distinction
between AudioNodes and Streams is an artifact of the way these specs have
been developed, and is undesirable, but is still fixable. But I think we're
going around in circles at this point so I should break off for now.

Rob
-- 
"Now the Bereans were of more noble character than the Thessalonians, for
they received the message with great eagerness and examined the Scriptures
every day to see if what Paul said was true." [Acts 17:11]

Received on Monday, 23 May 2011 23:54:27 UTC