- From: Robert O'Callahan <robert@ocallahan.org>
- Date: Wed, 13 Aug 2014 12:32:37 +1200
- To: Chris Wilson <cwilso@google.com>
- Cc: Olivier Thereaux <olivier.thereaux@bbc.co.uk>, Audio WG <public-audio@w3.org>
- Message-ID: <CAOp6jLZ-d9Gs+hbjLT1zQEzctS6xTNEF2OSmJCW-Kk3=xJnV_w@mail.gmail.com>
On Wed, Aug 13, 2014 at 3:23 AM, Chris Wilson <cwilso@google.com> wrote:

> On Mon, Aug 11, 2014 at 7:47 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> Furthermore, I think there are two use-cases where the current
>> main-thread API is actually OK:
>> -- Capturing and analyzing audio data, i.e., a pure sink.
>> -- Generating audio data, i.e., a pure source.
>> The current main-thread API only really sucks when you're trying to take
>> input and produce output. Is there disagreement about that?
>
> Actually, in both of those cases it's still sub-optimal due to its design.
> There will always be latency introduced due to the thread-passing, and
> you'll always have to continually alloc and dealloc new Buffers to prevent
> races. Its only benefit is simplicity in implementation (namely, that you
> don't have to use postMessage to transfer parameters).

Running JS synchronously in the audio thread is also suboptimal: it means that JS
misbehavior can cripple audio processing. I find it easy to imagine cases where I'd
rather have all my JS analysis code running on the main thread to minimize the
possibility of breaking my audio output, even though there's a small latency penalty.

Rob

--
I tell you that anyone who is angry with a brother or sister will be subject to
judgment. Again, anyone who says to a brother or sister, ‘Raca,’ is answerable to
the court. And anyone who says, ‘You fool!’ will be in danger of the fire of hell.
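A minimal sketch of the "pure sink" case discussed above, assuming the main-thread API in question is ScriptProcessorNode; the oscillator source and the `analyseBuffer` helper are illustrative placeholders, not code from this thread:

```ts
// Illustrative sketch: the main-thread API (ScriptProcessorNode) used as a
// pure sink -- the node only reads its input for analysis and writes nothing
// to its output.
const ctx = new AudioContext();

// Placeholder source; any AudioNode (e.g. a MediaStreamAudioSourceNode) would do.
const osc = ctx.createOscillator();

// 1024-frame buffers, 1 input channel, 1 (unused) output channel.
const processor = ctx.createScriptProcessor(1024, 1, 1);

processor.onaudioprocess = (event: AudioProcessingEvent) => {
  // Runs on the main thread; the audio thread has already copied the samples
  // into a fresh buffer, which is where the alloc/dealloc and latency costs
  // mentioned above come from.
  const input = event.inputBuffer.getChannelData(0);
  analyseBuffer(input); // placeholder for the application's analysis code
  // Output is left silent: this node is a pure sink.
};

// Placeholder analysis routine, e.g. computing the RMS level of each block.
function analyseBuffer(samples: Float32Array): void {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  console.log("RMS:", Math.sqrt(sum / samples.length));
}

osc.connect(processor);
processor.connect(ctx.destination); // keeps the node pulled by the graph
osc.start();
```

Because the callback only reads its input, a stall in `analyseBuffer` delays the analysis but runs no JS on the audio thread, which is the trade-off described in the reply above.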
Received on Wednesday, 13 August 2014 00:33:05 UTC