
Re: TPAC F2F and Spec Proposals

From: Anthony Bowyer-Lowe <anthony@lowbroweye.com>
Date: Tue, 18 Oct 2011 13:52:06 +0100
Message-ID: <CAMCSOPUpqMsj7yygHeGQMB2Usa=UWYsr2zH8QWw3vpQKs-1SPw@mail.gmail.com>
To: Olli@pettay.fi
Cc: public-audio@w3.org
> If we take effects out from Web Audio API, what are the main differences
> between MediaStream API and Web Audio API, and
> are there reasons to have two separate APIs to process audio?

For audio-processing use cases such as providing spectral visualisations of
playing music, or echo cancellation of webcam calls, there are few
differences; the MediaStream API is perfectly satisfactory for these.

However, for realtime synthesis, sample playback, and videogame audio
feedback, where low-latency, low-jitter sound generation/triggering and
direct user interaction are required, the Web Audio API's focus upon
canonical audio formats and strong timing makes it far more useful than the
MediaStream API, which offers none of these.
Received on Tuesday, 18 October 2011 12:52:54 UTC
