W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: [web-audio-api] Need a way to determine AudioContext time of currently audible signal (#12)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:29:22 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/12/24244076@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=20698#1) by Chris Wilson on W3C Bugzilla. Tue, 02 Apr 2013 14:30:39 GMT

Can we clearly delineate?  I'm not positive I understand what "latency discovery" means. There is one piece of information (the average processing block size) that might be interesting, but I intended this issue to cover the explicit need to "synchronize between the audio time clock and the performance clock at reasonably high precision" - that is, for example:

1) I want to be playing a looped sequence through Web Audio; when I get a timestamped MIDI message (or keypress, for that matter), I want to be able to record it and play that sequence back at the right time.

2) I want to be able to play back a sequence of combined MIDI messages and Web Audio, and have them synchronized to a sub-latency level (given the latency today on Linux and even Windows, this is a requirement).  Even if my Web Audio playback latency is 20 ms, I should be able to pre-schedule MIDI and audio events to occur within a millisecond or so of each other.
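Both use cases above reduce to translating timestamps between the two clocks. A minimal sketch of that translation, assuming you have one known correspondence pair between the clocks (the spec later standardized `AudioContext.getOutputTimestamp()`, which returns exactly such a `{contextTime, performanceTime}` pair; the helper names here are illustrative, not from any spec):

```javascript
// Map a performance-clock timestamp (milliseconds, as from performance.now()
// or a timestamped MIDI message) onto the AudioContext time line (seconds),
// given one anchor pair { contextTime: seconds, performanceTime: ms }.
function performanceToContextTime(perfMs, anchor) {
  return anchor.contextTime + (perfMs - anchor.performanceTime) / 1000;
}

// The inverse mapping: AudioContext time (seconds) to performance time (ms).
function contextToPerformanceTime(ctxSec, anchor) {
  return anchor.performanceTime + (ctxSec - anchor.contextTime) * 1000;
}

// Hypothetical usage (browser only):
//   const anchor = audioCtx.getOutputTimestamp();
//   const when = performanceToContextTime(midiEvent.timeStamp, anchor);
//   oscillatorNode.start(when);
```

This assumes the two clocks advance at the same rate between re-anchorings; in practice you would refresh the anchor periodically to absorb drift.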

Now, there's a level of planning for which knowing the "average latency" (related to processing block size, I imagine) would be useful; I could use it to pick a lookahead in my scheduler, for example. But that's not the same thing as clock synchronization. Perhaps the two should be solved together, but I don't want the former to be dropped in favor of the latter.
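As a sketch of how a discovered latency figure might feed a scheduler lookahead (assuming, hypothetically, that the gap between `currentTime` and the context time actually reaching the output is observable; the function name is illustrative):

```javascript
// Estimate effective output latency in seconds from the gap between the
// newest schedulable context time (ctx.currentTime) and the context time
// currently being heard at the output. Clamped at zero for safety.
function estimateOutputLatency(currentTime, outputContextTime) {
  return Math.max(0, currentTime - outputContextTime);
}

// Hypothetical usage (browser only):
//   const latency = estimateOutputLatency(
//       audioCtx.currentTime,
//       audioCtx.getOutputTimestamp().contextTime);
//   // Schedule events at least `latency` seconds ahead of currentTime.
```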

Reply to this email directly or view it on GitHub:
Received on Wednesday, 11 September 2013 14:32:00 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:24 UTC