On Fri, Oct 25, 2013 at 5:59 PM, Marcus Geelnard <mage@opera.com> wrote:
> I think that it would be good to get some real figures here. For instance,
> in a scenario that creates (and destroys) 100 workers per second (a
> more-than-trivial music example, with more-than-trivial per-note graphs)
> and keeps >100 workers alive concurrently, what would the potential
> overhead be compared to native nodes? (Not sure if it's an extreme
> example - but it's certainly a probable use case.)
>
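To get the sort of real figures Marcus is asking for, here is a rough sketch of how worker create/destroy churn could be measured on a page. It only uses the plain Worker API; the "noop-worker.js" script (assumed to reply to any message once it has started) and the target rate are placeholders, not numbers from this thread.

  // Rough measurement sketch for the churn scenario above: how fast can a
  // page sustain Worker create/destroy cycles, compared to the ~100/s target?
  const WORKER_URL = "noop-worker.js";   // assumed: a worker that echoes a "ready" message
  const TARGET_RATE = 100;               // creations per second in the quoted scenario

  async function measureWorkerChurn(seconds: number): Promise<number> {
    const start = performance.now();
    let created = 0;
    while (performance.now() - start < seconds * 1000) {
      const w = new Worker(WORKER_URL);
      // Wait until the worker has actually started and responded.
      await new Promise<void>((resolve) => {
        w.onmessage = () => resolve();
        w.postMessage("ping");
      });
      w.terminate();
      created++;
    }
    const elapsed = (performance.now() - start) / 1000;
    return created / elapsed;  // achieved create/destroy cycles per second
  }

  measureWorkerChurn(5).then((rate) => {
    console.log(`sustained ${rate.toFixed(1)} cycles/s (target ${TARGET_RATE}/s)`);
  });

This only measures startup/teardown cost, not the per-block processing overhead relative to native nodes, but it would at least show whether the 100/s figure is in range at all.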
Another thing is that it doesn't really make sense to create one worker per
node. You'd be much better off using a single worker for all the nodes on a
given page, or at most one worker per CPU core; a sketch of that follows below.
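As an illustration of that sharing, here is a minimal sketch that multiplexes many "nodes" over one plain Worker. The SharedNodeHost name and the register/process/unregister message protocol are made up for this example and are not part of any proposed API.

  // One Worker hosts the processing for every node on the page; nodes are
  // routed by a small integer id instead of each owning a Worker.
  type NodeId = number;

  class SharedNodeHost {
    private worker: Worker;
    private nextId: NodeId = 0;
    private handlers = new Map<NodeId, (output: Float32Array) => void>();

    constructor(scriptUrl: string) {
      this.worker = new Worker(scriptUrl);
      this.worker.onmessage = (e: MessageEvent) => {
        const { id, output } = e.data as { id: NodeId; output: Float32Array };
        this.handlers.get(id)?.(output);
      };
    }

    // Register a node's processing parameters once; the returned id routes
    // all later blocks to and from that node.
    addNode(params: unknown, onOutput: (output: Float32Array) => void): NodeId {
      const id = this.nextId++;
      this.handlers.set(id, onOutput);
      this.worker.postMessage({ type: "register", id, params });
      return id;
    }

    // Send one block of input samples; the processed block comes back
    // asynchronously through the node's onOutput callback.
    process(id: NodeId, input: Float32Array): void {
      // Transfer the underlying buffer to avoid a copy.
      this.worker.postMessage({ type: "process", id, input }, [input.buffer]);
    }

    removeNode(id: NodeId): void {
      this.handlers.delete(id);
      this.worker.postMessage({ type: "unregister", id });
    }
  }

Creating and destroying a node then becomes a pair of cheap postMessage calls rather than a Worker spawn and teardown, which is the point: the per-node cost no longer scales with worker startup time.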
Rob