[whatwg] Worker feedback

On Thu, Apr 2, 2009 at 7:18 AM, Michael Nordman <michaeln at google.com> wrote:

> I suggest that we can come up with a design that makes both of these camps
> happy, and that should be our goal here.
> To that end... what if...
>
> interface Store {
>   void putItem(string name, string value);
>
>   string getItem(string name);
>   // calling getItem multiple times prior to script completion with the
> same name is guaranteed to return the same value
>   // (unless the current script has called putItem; if a different script
> has called putItem concurrently, the current script won't see that)
>
>   void transact(func transactCallback);
>   // is not guaranteed to execute if the page is unloaded prior to the lock
> being acquired
>   // is guaranteed to NOT execute if called from within onunload
>   // but... really... if you need transactional semantics, maybe you should
> be using a Database?
>
>   attribute int length;
>   // may only be accessed within a transactCallback, otherwise throws an
> exception
>
>   string getItemByIndex(int i);
>   // may only be accessed within a transactCallback, otherwise throws an
> exception
> };
>

>
> document.cookie;
> // has the same safe-to-read-multiple-times semantics as store.getItem()
>
>
> So there are no locking semantics (outside of the transact method)... and
> multiple reads are not error prone.
>
> WDYT?
>
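
For concreteness, here is roughly how I read the proposal in TypeScript
terms. The concrete types, the null return for a missing name, and the
synchronous callback signature are my assumptions, not part of the sketch
above:

// Rough TypeScript transcription of the proposed Store interface.
// Types and null behavior are assumptions; the original is pseudo-IDL.
interface Store {
  putItem(name: string, value: string): void;

  // Stable within a single script run: repeated reads of the same name
  // return the same value unless this script itself called putItem.
  getItem(name: string): string | null;

  // Runs the callback under the store lock. Not guaranteed to run if the
  // page unloads before the lock is acquired; guaranteed NOT to run when
  // called from within onunload.
  transact(transactCallback: () => void): void;

  // Usable only inside a transactCallback; throws otherwise.
  readonly length: number;
  getItemByIndex(i: number): string;
}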

getItem stability is helpful for read-only scripts but no help for
read-write scripts. For example, outside a transaction, two scripts doing
putItem('x', getItem('x') + 1) can race and lose an increment. Even for
read-only scripts, you have the problem that reading multiple values isn't
guaranteed to give you a consistent state. So this isn't much better than
doing nothing for the default case. (Note that read-only scripts are easy
to optimize for full parallelism.) Forcing iteration
to be inside a transaction isn't compatible with existing localStorage
either.
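
To make the lost-update hazard concrete, here is a hypothetical sketch
against the interface above (the "counter" key and the number parsing are
just for illustration):

// Two scripts running this concurrently can each read the same snapshot
// value and then both write current + 1, losing one increment.
function increment(store: Store): void {
  const current = Number(store.getItem("counter") ?? "0");
  // A concurrent putItem("counter", ...) between this read and the write
  // below is invisible here, and our write then clobbers it.
  store.putItem("counter", String(current + 1));
}

// Only the transact() form serializes the whole read-modify-write:
function incrementSafely(store: Store): void {
  store.transact(() => {
    const current = Number(store.getItem("counter") ?? "0");
    store.putItem("counter", String(current + 1));
  });
}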

Addressing the larger context ... More than anything else, I'm channeling my
experiences at IBM Research writing race detection tools for Java programs (
http://portal.acm.org/citation.cfm?id=781528 and others), and what I learned
there about programmers with a range of skill levels grappling with shared
memory (or in our case, shared storage) concurrency. I passionately,
violently believe that Web programmers cannot and should not have to deal
with it. It's simply a matter of implementing what programmers expect: that
by default, a chunk of sequential code will do what it says without
(occasional, random) interference from outside.

I realize that this creates major implementation difficulties for parallel
browsers, which I believe will be all browsers. "Evil", "troubling", and
"onerous" are perhaps understatements... But it will be far better in the
long run to put those burdens on browser developers than to kick them
upstairs to Web developers. If it turns out that there is a compelling
performance boost that can *only* be achieved by relaxing serializability,
then I could be convinced ... but we are very far from proving that.

Rob
-- 
"He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all." [Isaiah
53:5-6]
