Re: Is Web Audio a realtime API?

On Wed, Jan 14, 2015 at 7:52 PM, Raymond Toy <rtoy@google.com> wrote:

>
>
> On Wed, Jan 14, 2015 at 9:02 AM, Chris Lilley <chris@w3.org> wrote:
>
>> Hello Oleg,
>>
>> Wednesday, January 14, 2015, 3:11:56 PM, you wrote:
>>
>> > Specification lists performance and latency considerations, but
>> > does not explicitly state if API is realtime or not.
>>
>> That depends on your definition of realtime.
>>
>> On the one hand, it concerns audio which is generated "in realtime"
>> rather than previously recorded audio which is merely played back.
>>
>> On the other hand, it will almost always be executed on a
>> non-realtime, multitasking OS rather than a RTOS. And audio samples
>> are processed in blocks rather than as individual samples, so
>> sample-accurate sync is hard (128 samples at 44.1k is 2.9ms).
>>
>
> I believe the api does give you sample-accurate sync, even if the samples
> are processed in blocks.
>

Yes, in the sense that you can *schedule* sound to start at a specific
sample in the future (maybe 10 ms or so ahead).
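To illustrate (a minimal sketch; the 128-sample render quantum and the
`start(when)` call are from the spec, while `ctx` and `source` stand for a
browser AudioContext and a source node, which only exist in a browser):

```javascript
// Sketch: rendering happens in 128-sample blocks, but start() times
// are given in seconds on the context clock, so a source can be
// scheduled to begin on any individual sample.
const sampleRate = 44100;           // a typical context rate
const renderQuantum = 128;          // samples per processing block
const blockMs = (renderQuantum / sampleRate) * 1000;
console.log(blockMs.toFixed(2));    // "2.90" -- ms per block

// To start exactly on sample 44100 (t = 1.0 s on the context clock):
const targetSample = 44100;
const when = targetSample / sampleRate;
// In a browser, with `source` an AudioBufferSourceNode on context `ctx`:
//   source.start(when);
console.log(when);                  // 1
```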

Implementations as of today try to reach the lowest audio output latency
available on a consumer system (no ASIO/other specialized API, no exclusive
access, although I seem to recall seeing some code about that in the
Chromium tree).

Implementations also try to use a higher-priority thread where possible
(OS X, Windows/WASAPI, PulseAudio, and Android, with both OpenSL and the
regular AudioTrack, are platforms where you can get access to a
higher-priority thread).

We have discussed having an API to get the latency, to be able to have very
precise audio/video synchronization (for games and other interactive media,
as you mention). You can follow along and jump in the discussion at [0].
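For what it's worth, a minimal sketch of how such a query could be used
once it exists (the latency value below is an assumed number, not a real
API call, since how to obtain it is exactly what's under discussion at [0]):

```javascript
// Sketch of A/V sync compensation using a queried output latency.
// The latency value is assumed for illustration; the API to obtain it
// is the open question in the linked issue.
const outputLatencySec = 0.025;   // e.g. 25 ms, an assumed queried value
// Delay the matching video frame by the same amount so that picture
// and sound reach the user together:
const videoDelayMs = outputLatencySec * 1000;
console.log(videoDelayMs);
```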

Cheers,
Paul.

[0]: https://github.com/WebAudio/web-audio-api/issues/12

Received on Thursday, 15 January 2015 10:55:59 UTC