- From: Grant Galitz <grantgalitz@gmail.com>
- Date: Wed, 20 Jun 2012 12:49:53 -0400
- To: public-audio@w3.org
- Message-ID: <CAD8zUBYCBNy2_z++pP2xpMECCUx5u15ALa2Xttd91aT8treTSw@mail.gmail.com>
Say you want to keep a latency of 100 ms in your audio app. If an API like the Web Audio API or a custom Adobe Flash bridge decides it needs more than 100 ms of latency, you can pad with silence whenever you choose not to push out more samples. With the Mozilla Audio Data API, though, Firefox on multiple OSes will refuse to play your audio stream until it has been "filled up" to the buffering amount it likes. You can't correctly discern how much latency Firefox wants, and that corrupts your own app's internal buffering, because a function like mozCurrentSampleOffset won't behave as you'd expect due to this hidden buffering.

To fix this problem, Firefox needs to use a callback-based API instead. The audio library interface will then always be able to determine its buffering position, because it keeps its own buffer directly instead of handing samples off to the browser for management. Take this as an example of a callback API at work:

1) The browser requests 500 ms of latency via multiple callbacks, one after the other.
2) The web app only wants to maintain 100 ms of latency.
3) 400 ms of silence is inserted, followed by 100 ms of audio data.
4) Now that the latency has been "filled," the callback API requests samples at proper intervals.

The buffer management for a callback API has to be done inside JS, so the web app always knows how many samples it has left. The problem with Moz Audio is that it was responsible for the buffer management itself and gave incomplete information back to the web app, hence the issues.
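To make that concrete, here is a minimal sketch of JS-side buffer management driven by a callback, using the Web Audio API's ScriptProcessorNode as the callback source (the exact node name has varied across spec drafts; RING_SIZE, queued, and pushSamples are illustrative names, not part of any API):

// Minimal sketch: app-side ring buffer feeding a pull-style callback.
// Because the buffer lives in JS, the app always knows exactly how
// many samples are pending; there is no hidden browser-side buffer.
var context = new (window.AudioContext || window.webkitAudioContext)();
var RING_SIZE = 32768;                      // assumed capacity, in samples
var ring = new Float32Array(RING_SIZE);
var readIndex = 0, writeIndex = 0, queued = 0;

// The app pushes samples whenever it has them; "queued" is its
// authoritative measure of pending latency.
function pushSamples(samples) {
  for (var i = 0; i < samples.length && queued < RING_SIZE; i++) {
    ring[writeIndex] = samples[i];
    writeIndex = (writeIndex + 1) % RING_SIZE;
    queued++;
  }
}

// The browser pulls at its own pace. If it wants 500 ms of data while
// the app only keeps 100 ms queued, the shortfall comes out as silence
// (step 3 of the example above) instead of stalling playback.
var node = context.createScriptProcessor(2048, 1, 1);
node.onaudioprocess = function (event) {
  var out = event.outputBuffer.getChannelData(0);
  for (var i = 0; i < out.length; i++) {
    if (queued > 0) {
      out[i] = ring[readIndex];
      readIndex = (readIndex + 1) % RING_SIZE;
      queued--;
    } else {
      out[i] = 0;                           // fill any underrun with silence
    }
  }
};
node.connect(context.destination);

Since queued is maintained entirely in JS, the app can compute its true latency at any time as queued / sampleRate, with no dependence on a browser-reported offset like mozCurrentSampleOffset.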
Received on Wednesday, 20 June 2012 16:50:23 UTC