
[whatwg] Application defined "locks"

From: Darin Fisher <darin@chromium.org>
Date: Thu, 10 Sep 2009 19:52:53 -0700
Message-ID: <bd8f24d20909101952n2a9f0cc8y143803c748f1c3cf@mail.gmail.com>
On Thu, Sep 10, 2009 at 5:28 PM, Darin Fisher <darin@chromium.org> wrote:

> On Thu, Sep 10, 2009 at 4:59 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Fri, Sep 11, 2009 at 9:52 AM, Darin Fisher <darin@chromium.org> wrote:
>>
>>> I think there are good applications for setting a long-lived lock.  We
>>> can try to make it hard for people to create those locks, but then the end
>>> result will be suboptimal.  They'll still find a way to build them.
>>>
>>
>> One use case is selecting a master instance of an app. I haven't really
>> been following the "global script" thread, but doesn't that address this use
>> case in a more direct way?
>>
>
> No, it doesn't.  The global script would only be reachable by related
> browsing contexts (similar to how window.open w/ a name works).  In a
> multi-process browser, you don't want to _require_ script bindings to span
> processes.
>
> That's why I mentioned shared workers.  Because they are isolated and
> communication is via string passing, it is possible for processes in
> unrelated browsing contexts to communicate with the same shared workers.
>
>
>
>>
>> What other use-cases for long-lived locks are there?
>>
>>
> This is a good question.  Most of the use cases I can imagine boil down to
> a master/slave division of labor.
>
> For example, if I write an app that does some batch asynchronous processing
> (many setTimeout calls' worth), then I can imagine setting a flag across the
> entire job, so that other instances of my app know not to start another such
> overlapping job until I'm finished.  In this example, I'm supposing that
> storage is modified at each step such that guaranteeing storage consistency
> within the scope of script evaluation is not enough.
>
> -Darin
>
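The batch-job flag described above might look roughly like this. The `storage` Map stands in for localStorage, and the `batchJobRunning` flag and helper names are purely hypothetical, not any proposed API:

```javascript
// Sketch of the batch-job flag idea: one instance sets a flag in shared
// storage for the duration of a multi-step job, so other instances of the
// app know not to start an overlapping job. The storage Map stands in for
// localStorage; "batchJobRunning" is a hypothetical flag name.
const storage = new Map();

function tryStartBatchJob(steps, onDone) {
  // Another instance may already be mid-job; back off if so.
  if (storage.get("batchJobRunning") === "1") return false;
  storage.set("batchJobRunning", "1");  // long-lived "lock" flag

  let step = 0;
  function runStep() {
    storage.set("progress", String(step));  // storage mutates at each step
    step += 1;
    if (step < steps) {
      setTimeout(runStep, 0);               // many setTimeout calls' worth
    } else {
      storage.delete("batchJobRunning");    // release the flag when done
      onDone();
    }
  }
  runStep();
  return true;
}
```

The key point is that the flag outlives any single script evaluation, which is exactly what makes it a long-lived lock: per-task storage consistency alone can't express "no overlapping job across many turns of the event loop."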


The other motivating factor for me is access to LocalStorage from
workers.  (I know it has been removed from the spec, but that is
unfortunate, no?)

By definition, workers are designed to be long-lived, possibly doing long
stretches of computation, and being able to intermix reads and writes to
storage during that stretch of computation would be nice.

Moreover, it would be nice if a worker in domain A could effectively "lock"
part of the storage so that the portion of the app running on the main
thread could continue accessing the other parts of storage associated with
domain A.  The implicit storage mutex doesn't support this use case very
well.  You end up having to call the getStorageUpdates function periodically
(releasing the lock in the middle of your computation!).  That kind of thing
is really scary and hard to get right.  I cringe whenever I see someone
unlocking, calling out to foreign code, and then re-acquiring the lock.
Why?  Because it means that existing variables, stack-based or otherwise,
that were previously consistent may have become inconsistent with global
data in storage due to having released the lock.  getStorageUpdates is
dangerous.  It is a big hammer that doesn't really fit the bill.
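A toy illustration of that hazard, with all names hypothetical: `yieldStorageLock` stands in for a getStorageUpdates-style call that releases the lock, and the Map stands in for storage.

```javascript
// Releasing the storage lock mid-computation lets local variables drift
// out of sync with storage. "yieldStorageLock" is a stand-in for a
// getStorageUpdates-style call; the storage Map and all names here are
// hypothetical, not the spec's API.
const storage = new Map([["balance", 100]]);

let pendingWriter = null;
function yieldStorageLock() {
  // While we hold no lock, some other page or worker may run and mutate
  // storage. Simulate that with a queued writer.
  if (pendingWriter) pendingWriter();
}

function computeWithYield() {
  const balance = storage.get("balance");  // read under the lock
  yieldStorageLock();                      // lock released here!
  // "balance" may now disagree with storage -- any logic below that mixes
  // the stale local with fresh reads is silently wrong.
  return { local: balance, fresh: storage.get("balance") };
}

pendingWriter = () => storage.set("balance", 0);  // simulated other instance
const result = computeWithYield();
// result.local is 100 but result.fresh is 0: the inconsistency described above
```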

The alternative to getStorageUpdates in this case is to create another
domain on which to run the background worker just so that you can have an
independent slice of storage.  That seems really lame to me.  Why should
domain A have to jump through such hoops?
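For comparison, the kind of application-defined lock this thread contemplates could be as simple as locks keyed by name, so a worker could hold, say, "domainA/batch" while the page keeps using the rest of storage. This sketch is entirely hypothetical; no such API existed in the spec:

```javascript
// Minimal sketch of application-defined locks keyed by name. Callers of
// the same name are serialized; callers of other names are unaffected.
// The acquireLock/release names are hypothetical, not a proposed spec API.
const lockQueues = new Map();  // name -> array of waiting callbacks

function acquireLock(name, callback) {
  const queue = lockQueues.get(name);
  if (queue === undefined) {
    lockQueues.set(name, []);            // lock was free: take it, run now
    callback(() => release(name));
  } else {
    queue.push(callback);                // lock held: wait our turn
  }
}

function release(name) {
  const queue = lockQueues.get(name);
  if (queue && queue.length > 0) {
    const next = queue.shift();
    next(() => release(name));           // hand the lock to the next waiter
  } else {
    lockQueues.delete(name);             // no waiters: lock becomes free
  }
}
```

Usage would look like `acquireLock("domainA/batch", release => { /* long job */ release(); })`; a worker holding that name blocks only other callers of the same name, not the rest of domain A's storage.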

-Darin
Received on Thursday, 10 September 2009 19:52:53 UTC
