Re: ScriptProcessorNode off the main thread (Was: Questioning the current direction of the Web Audio API)

> For instance, given the scenario of creating (& destroying) 100 workers per second

That's also a topic that has only been mentioned indirectly so far: right now
there is no mechanism to release a ScriptProcessorNode, and I think that is
an important thing to have.

> with >100 workers alive concurrently, what would be the potential
overhead compared to native nodes?

An extreme example would be more like 500 nodes alive concurrently.
Would a benchmark comparing [N workers + ScriptProcessorNode] vs. [native nodes]
be relevant for measuring the overhead of using N workers?

2013/10/25 Marcus Geelnard <>

> On 2013-10-25 15:40, Robert O'Callahan wrote:
> On Fri, Oct 25, 2013 at 3:17 PM, Joseph Berkovitz <> wrote:
>> Here are some of the issues with off-the-shelf workers that I see:
>>  - Shared workers, while not having a per-node overhead problem, seem
>> too global in scope. It feels as though we want different working storage
>> for different nodes, even if they share the same script. Global
>> communication between nodes would probably lead to bugs, not advantages.
>>  - If there is one Dedicated worker per node, it's likely that Dedicated
>> workers will have to be created in great quantity since Web Audio in many
>> use cases is a very node-intensive system. The setup overhead for dedicated
>> workers may be too high.
>>  - Dedicated workers imply postMessage-style communication via a
>> MessagePort, which suggests unnecessary communication overhead since we
>> don't need to allow passing of arbitrary data structures between node
>> workers and the rest of the environment. We can focus on audio events and
>> their results.
>>  - Going forward, Web Audio workers do not even need to communicate with
>> arbitrary other pieces of the browser environment. They only need to
>> communicate with the machinery running the audio graph.
>>  So I suggest we at least consider a new flavor of Worker that is
>> tailored for WebAudio, or else move to a distinct object altogether.
>  The question is whether building an alternative off-main-thread JS
> execution context could offer significant advantages over building on
> DedicatedWorker, such as lower overhead if you have a lot of them.
> Personally I don't see any reason to believe that there would be much
> benefit.
> I think that it would be good to get some real figures here. For instance,
> given the scenario of creating (& destroying) 100 workers per second (given
> a more-than-trivial music example, with more-than-trivial per-note graphs),
> with >100 workers alive concurrently, what would be the potential overhead
> compared to native nodes? (not sure if it's an extreme example - but it's
> certainly a probable use case)
> Ideally, there shouldn't be much difference to native nodes, but in
> reality there would be a heavier set-up time and a memory usage overhead.
> But how much?
>    Complex or slow Worker features simply shouldn't be used by processing
> code. If they are used, it's unlikely to be any worse than having the
> processing code go into an infinite loop, which we have to handle anyway.
> Agree. I also have a feeling that we can handle the GC problem by
> evangelising & supporting GC-free programming practices (i.e. create
> objects & data in the setup-phase, but never create objects in the
> processing phase). E.g. the Java game programming community is used to this
> [1].
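A minimal sketch of the allocate-in-setup, never-in-process pattern described above (the `Processor` class, block size, and gain parameter are illustrative, not from any API in this thread):

```javascript
// Sketch: GC-free processing - allocate in the setup phase,
// only reuse pre-allocated storage in the processing phase.
class Processor {
  constructor(blockSize) {
    // Setup phase: allocate all working storage up front.
    this.scratch = new Float32Array(blockSize);
    this.gain = 0.5; // illustrative parameter
  }

  // Processing phase: no `new`, no array literals, no closures -
  // steady-state calls produce no garbage for the GC to collect.
  process(input, output) {
    const s = this.scratch;
    for (let i = 0; i < input.length; i++) {
      s[i] = input[i] * this.gain; // reuse scratch instead of allocating
      output[i] = s[i];
    }
  }
}

const p = new Processor(128);
const inBuf = new Float32Array(128).fill(1);
const outBuf = new Float32Array(128);
p.process(inBuf, outBuf);
console.log(outBuf[0]); // 0.5
```

If processing callbacks are written this way, GC pauses can only occur during graph setup, not while audio is running, which is exactly the discipline the Java game-programming reference advocates.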
> /Marcus
> [1]
>  Rob
> --
> Marcus Geelnard
> Technical Lead, Mobile Infrastructure
> Opera Software

Sébastien Piquemal
-----
@sebpiq

Received on Friday, 25 October 2013 16:13:35 UTC