- From: Chris Rogers <crogers@google.com>
- Date: Wed, 8 Feb 2012 14:40:30 -0800
- To: robert@ocallahan.org
- Cc: public-audio@w3.org
- Message-ID: <CA+EzO0mgSfT5u8C9T_tiX3ZO+iM2GO6uhGo51w2QiwcR8C1qbA@mail.gmail.com>
On Wed, Feb 8, 2012 at 1:50 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, Feb 9, 2012 at 10:32 AM, Chris Rogers <crogers@google.com> wrote:
>
>> On Wed, Feb 8, 2012 at 1:28 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>>
>>> No. Workers use a thread pool.
>>
>> And how many actual threads would be created on, say, a quad-core or 8-core machine?
>
> I don't know off the top of my head, it's all handled by the existing Workers implementation.

Ok, but you said that the ProcessedMediaStream implementation "spreads the processing for different streams across all available cores". For the sake of argument, let's say we have a chain of 6 ProcessedMediaStream objects using 6 cores of an 8-core machine (one core/thread per ProcessedMediaStream). That means the kernel has to schedule those 6 threads one after the other, serially: each ProcessedMediaStream thread wakes up, processes, then blocks to wait for the next available work, with each stream in the chain taking the processed audio from the previous one. It's a pipeline of processing stages, with each stage running in a different thread and each dependent on the previous stage's results. Every hop carries kernel thread-scheduling latency and overhead, which dramatically increases the risk of delays and glitches. And, of course, every single Web Worker context could garbage collect at any time - in the worst case two or more could GC one after the other, with the delays accumulating. (A sketch of this serial-dependency pattern follows at the end of this message.)

In this example I used 6, but what if the number is hundreds or thousands? Thousands may sound absurd, but I've written useful Web Audio code at that scale, using thousands of serially connected biquad allpass filters to generate interesting impulse responses. Every Web Worker context has non-trivial startup time and consumes non-trivial resources. Workers also run in completely isolated worlds, with no access to each other's state or to the state of the main page's DOM.

>> How are you managing synchronization between the different threads to minimize latency and avoid glitches given the kernel scheduling latency? Are these threads high-priority?
>
> Latency feels OK in my demos, but I haven't done anything to measure and minimize it yet.

I believe the problems come up when dealing with more complex scenarios than your current demos.

Chris
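
To make the serial-dependency concern above concrete, here is a minimal sketch. It is not the ProcessedMediaStream implementation under discussion (names like `STAGES`, `QUANTUM`, and `processQuantum` are illustrative, and the real proposal does not route each hop through the page's main thread). It only shows a chain of workers in which stage k cannot start until stage k-1 has finished and its result has been delivered, so every hop adds scheduling and messaging latency that accumulates across the chain.

```typescript
// Illustrative sketch only: a chain of Web Workers with a strict
// serial dependency between stages, the structure described above.
// Not how ProcessedMediaStream or any browser's Workers are implemented.

const STAGES = 6;     // e.g. one worker per ProcessedMediaStream in the chain
const QUANTUM = 128;  // samples handed from stage to stage (illustrative)

// Each worker applies a trivial gain and posts the buffer back.
const workerSource = `
  onmessage = (e) => {
    const samples = new Float32Array(e.data);
    for (let i = 0; i < samples.length; i++) samples[i] *= 0.9;
    postMessage(samples.buffer, [samples.buffer]);
  };
`;
const url = URL.createObjectURL(
  new Blob([workerSource], { type: "application/javascript" })
);
const workers = Array.from({ length: STAGES }, () => new Worker(url));

// Push one quantum through the chain: stage k+1 only runs after stage k's
// result has arrived, so the per-hop wake-up/messaging delays add up.
function processQuantum(input: Float32Array): Promise<Float32Array> {
  return workers.reduce(
    (prev, w) =>
      prev.then(
        (buf) =>
          new Promise<Float32Array>((resolve) => {
            w.onmessage = (e) => resolve(new Float32Array(e.data));
            w.postMessage(buf.buffer, [buf.buffer]);
          })
      ),
    Promise.resolve(input)
  );
}

// Time one quantum end to end. With real-time audio deadlines of a few
// milliseconds per quantum, this cumulative latency is the risk described
// above, and it grows with hundreds or thousands of stages.
const t0 = performance.now();
processQuantum(new Float32Array(QUANTUM)).then(() => {
  console.log(
    `one quantum through ${STAGES} stages: ${(performance.now() - t0).toFixed(2)} ms`
  );
});
```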
Received on Wednesday, 8 February 2012 22:41:02 UTC