
Re: oscillators and the Web Audio API

From: Chris Rogers <crogers@google.com>
Date: Wed, 8 Feb 2012 13:21:03 -0800
Message-ID: <CA+EzO0kKdJFYzNy_sH1CH-iAqXhf2rO6SLUA+K7pa+Gq3C5PCw@mail.gmail.com>
To: robert@ocallahan.org
Cc: Alistair MacDonald <al@signedon.com>, Joe Berkovitz <joe@noteflight.com>, public-audio@w3.org, philburk@mobileer.com
On Wed, Feb 8, 2012 at 1:13 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, Feb 9, 2012 at 8:33 AM, Alistair MacDonald <al@signedon.com> wrote:
>> I think going back-and-forth between the Native Graph and the
>> JavaScript Processing Nodes is going to be quite expensive,
> I think it's better to try things out and measure than to make assumptions
> about performance. The relative cost of those transitions will depend very
> much on the implementation, and decisions about buffer sizing etc.
> Also, devices have multiple CPU cores now. My ProcessedMediaStream
> implementation spreads the processing for different streams across all
> available cores. It's especially hard to predict performance in that kind
> of configuration.

How so?  If I understand correctly, your ProcessedMediaStream object
roughly corresponds to an AudioNode in my API, except that the processing
always occurs in a worker thread, with the actual DSP happening in
JavaScript.  So, for example, if you have a processing graph with sixty
different nodes (not an unreasonable number), does that mean sixty worker
threads will be created?


> Rob
> --
> "If we claim to be without sin, we deceive ourselves and the truth is not
> in us. If we confess our sins, he is faithful and just and will forgive us
> our sins and purify us from all unrighteousness. If we claim we have not
> sinned, we make him out to be a liar and his word is not in us." [1 John
> 1:8-10]
Received on Wednesday, 8 February 2012 21:21:34 UTC
