Re: Sync API for workers

Le 04/09/2012 18:46, Glenn Maynard a écrit :
> On Tue, Sep 4, 2012 at 10:32 AM, David Bruant <bruant.d@gmail.com
> <mailto:bruant.d@gmail.com>> wrote:
>
>     Cognitive load is the only one mentioned so far. It is a serious
>     issue since for the foreseeable future, only human beings will be
>     writing code.
>
>     However, as said, there are solutions to reduce this load.
>     I wish to share an experience.
>     Back in April, I gave a JavaScript/jQuery training to people who
>     knew programming, but didn't know JavaScript. I made the decision
>     to teach promises right away (jQuery has them built-in, so that's
>     easy). It seems that it helped a lot with understanding async programming.
>     The cognitive load has its solutions.
>
>
> (Understanding asynchronous programming isn't really the issue.  I'm
> sure everyone in this discussion has an intuitive grasp of that.)
>
> Those are attempts at making asynchronous code easier to write;
> they're not substitutes for synchronous code.  They still result in
> code with less understandable, well-scoped state.
I'm sorry, but I have to disagree. Have you ever used promises in a
large-scale project?
I've been amazed to discover how much easier promise-based APIs are to
refactor than callback-based ones. Obviously, refactoring requires
well-scoped state. I can't show the commit I have in mind, because it's
in closed-source software, but a promise-based API really is not less
understandable or less well-scoped. That statement runs counter to my
experience over the last 8 months.
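To give a contrived sketch of what I mean (not the actual code; getUser,
getProfile and render are made-up names): with callbacks, moving or
inserting a step means rewiring the nesting, while with promises each
step stays a flat, movable line.

// callback style: reordering steps means rewiring the nesting
getUser(id, function (user) {
    getProfile(user, function (profile) {
        render(profile);
    });
});

// promise style: each step is one line that can be moved or removed
getUser(id)
    .then(getProfile)
    .then(render);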


>
>     This is a very interesting example and I realize that I have used
>     "blocking" and "sync" interchangeably by mistake. I'm against
>     blocking, but not sync.
>     What I'm fundamentally (to answer what you said above) against is
>     the idea of blocking a computation unit (like a worker) that does
>     nothing but idly waits (for IO or a message for instance). It
>     seems that proposals so far make the worker wait for a message and
>     do nothing meanwhile and that's a pure waste of resources. A
>     worker has been paid for (memory, init time...) and it's waiting
>     while it could be doing other things.
>     The current JS event loop run-to-completion model prevents that
>     waste by design.
>
>
> Workers broke away from requiring the "do a bit of work then keep
> returning to the event loop" model of the UI thread from the start. 
> This is no different than the APIs we already have.  To take an
> earlier example:
>
> var worker = createDictionaryWorker();
> worker.postMessage("elephant");
> var definition = getMessage(worker); // wait for the answer
>
> This is no different than a sync XHR or IndexedDB call to do the same
> thing:
>
> var xhr = new XMLHttpRequest();
> xhr.open("GET", "/dictionary?elephant", false); // sync
> xhr.send();
> var definition = xhr.responseText;
>
> It simply allows workers, not just native code, to implement these
> APIs.  That's a natural step.
I understand and agree, but you're not addressing the problem of
resource waste I mentioned above.
Even if you do sync XHR in a worker, you're wasting the worker's time,
because it could be computing other things while waiting for the network
to respond. The problem was obvious on the main thread because it
resulted in a poor user experience, but it still holds in workers.
What do you do if your worker is idle, blocked on the network, but you
still need some other work done? Open another worker? And when that one
is idling and you need more work done? Open yet another worker?
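To make the contrast concrete, here is a rough sketch (the message types
and heavyComputation are made up) of a worker that keeps serving other
requests while a network request is in flight; a worker blocked on a
sync call could not do this:

// inside a worker: the async XHR doesn't block the event loop,
// so other incoming messages are still processed meanwhile
onmessage = function (e) {
    if (e.data.type === "define") {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/dictionary?" + e.data.word, true); // async
        xhr.onload = function () {
            postMessage({ type: "definition", text: xhr.responseText });
        };
        xhr.send();
    } else if (e.data.type === "compute") {
        // this runs even if the XHR above is still waiting on the network
        postMessage({ type: "result", value: heavyComputation(e.data.input) });
    }
};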

To put the two concerns against each other in one sentence: is the
readability worth the waste of resources?
That's a genuine question. My experience with Node.js (which also
provides sync methods for IO) is that for small scripts, sync methods
are more convenient than callbacks or even promises. But arguably, for
small scripts, readability isn't that big a concern, precisely because
the script is small.
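For example, in a ten-line Node.js script, something like this (the file
name is made up) reads top to bottom and is hard to beat:

// small Node.js script: the sync version reads like a straight line
var fs = require("fs");
var config = JSON.parse(fs.readFileSync("config.json", "utf8"));
console.log(config.name);

// the callback equivalent, for a single file, mostly adds noise
fs.readFile("config.json", "utf8", function (err, data) {
    if (err) throw err;
    console.log(JSON.parse(data).name);
});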

David

Received on Tuesday, 4 September 2012 17:50:29 UTC