On Fri, Oct 25, 2013 at 5:59 PM, Marcus Geelnard <mage@opera.com> wrote:
> I think that it would be good to get some real figures here. For instance,
> in a scenario that creates (& destroys) 100 workers per second (a
> more-than-trivial music example, with more-than-trivial per-note graphs),
> with >100 workers alive concurrently, what would be the potential overhead
> compared to native nodes? (Not sure if it's an extreme example - but it's
> certainly a probable use case.)
>
This isn't relevant to the question of whether the processing node thread
should be a Worker or some more restricted JS execution environment.
>> Complex or slow Worker features simply shouldn't be used by processing
>> code. If they are used, it's unlikely to be any worse than having the
>> processing code go into an infinite loop, which we have to handle anyway.
>
>
> Agree. I also have a feeling that we can handle the GC problem by
> evangelising & supporting GC-free programming practices (i.e. create
> objects & data in the setup phase, but never create objects in the
> processing phase). E.g. the Java game programming community is used to
> this [1].
>
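For concreteness, the pattern Marcus describes might look something like
this in a worker (an untested sketch; the "setup" message protocol, the
"result" reply, and the block size are made up for the example, not
anything from a spec):

  // worker.js (hypothetical): allocate everything in the setup phase,
  // nothing in the processing phase.
  var scratch = null;

  onmessage = function (e) {
    if (e.data.type === "setup") {
      // All allocation happens once, up front.
      scratch = new Float32Array(e.data.blockSize);
    } else {
      // Steady state: no new objects, no closures; just write into the
      // preallocated buffer, so the worker's heap never grows.
      var input = e.data.samples;
      for (var i = 0; i < input.length; i++)
        scratch[i] = input[i] * 0.5;  // trivial gain, a stand-in for real DSP
      // Note: structured clone copies scratch here; transferring the
      // underlying ArrayBuffer instead would avoid even that copy.
      postMessage({type: "result", samples: scratch});
    }
  };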
Java is not very relevant here: in Java, threads share a single heap, but
JS workers don't share heaps, so each worker's GC only ever scans that
worker's own heap.
An audio processing worker would typically have a very small heap, so GC
should have low pause times no matter what. I have always felt that worries
about GC are just a distraction.
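To make the heap isolation concrete: an ArrayBuffer can be handed to a
worker by listing it in postMessage's transfer list, which moves it between
heaps instead of copying it. A minimal sketch (the file names and the
in-place gain are invented; the transfer-list argument itself is real API):

  // main.js: move a block of samples into the worker's heap. After the
  // transfer the buffer is neutered on this side (moved, not shared).
  var worker = new Worker("processor.js");
  var block = new Float32Array(128);
  worker.postMessage(block.buffer, [block.buffer]);
  console.log(block.length);  // 0: the data now lives in the worker's heap

  // processor.js: process in place and transfer the same buffer back,
  // so steady-state processing adds nothing to either heap.
  onmessage = function (e) {
    var samples = new Float32Array(e.data);  // a view over the buffer, no copy
    for (var i = 0; i < samples.length; i++)
      samples[i] *= 0.5;
    postMessage(samples.buffer, [samples.buffer]);
  };

Ping-ponging a single buffer like this keeps both heaps at a fixed size,
which is exactly the property that makes GC pauses a non-issue.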
Rob