- From: <bugzilla@jessica.w3.org>
- Date: Tue, 02 Apr 2013 14:30:39 +0000
- To: public-audio@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=20698
Chris Wilson <cwilso@gmail.com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |cwilso@gmail.com
--- Comment #2 from Chris Wilson <cwilso@gmail.com> ---
Can we clearly delineate? I'm not positive I understand what "latency
discovery" is, because there's one bit of information (the average processing
block size) that might be interesting, but I intended this issue to cover the
explicit need: "I need to synchronize between the audio time clock and the
performance clock at a reasonably high precision." That is, for example:
1) I want to be playing a looped sequence through Web Audio; when I get a
timestamped MIDI message (or keypress, for that matter), I want to be able to
record it and play that sequence back at the right time.
2) I want to be able to play back a sequence of combined MIDI messages and Web
Audio, and have them synchronized to a sub-latency level (given the latency
today on Linux and even Windows, this is a requirement). Even if my Web Audio
playback latency is 20ms, I should be able to pre-schedule MIDI and audio
events to occur within a millisecond or so of each other.
Now, there's a level of planning for which knowing the "average latency" -
related to processing block size, I imagine - would be interesting (I could use
that to pick a latency in my scheduler, for example); but that's not the same
thing. Perhaps these should be solved together, but I don't want the former to
be dropped in favor of the latter.
--
You are receiving this mail because:
You are the QA Contact for the bug.
Received on Tuesday, 2 April 2013 14:30:45 UTC