W3C home > Mailing lists > Public > whatwg@whatwg.org > September 2009

[whatwg] Application defined "locks"

From: James Robinson <jamesr@google.com>
Date: Thu, 10 Sep 2009 21:55:45 -0700
Message-ID: <ad1a0c1e0909102155g54edb3e5oc87aebb56cf0fb39@mail.gmail.com>
On Thu, Sep 10, 2009 at 7:59 PM, Darin Fisher <darin at chromium.org> wrote:

> rt oOn Thu, Sep 10, 2009 at 6:35 PM, James Robinson <jamesr at google.com>wrote:
>
>>
>>
>> On Thu, Sep 10, 2009 at 6:11 PM, Jeremy Orlow <jorlow at chromium.org>wrote:
>>
>>> On Fri, Sep 11, 2009 at 9:28 AM, Darin Fisher <darin at chromium.org>wrote:
>>>
>>>> On Thu, Sep 10, 2009 at 4:59 PM, Robert O'Callahan <
>>>> robert at ocallahan.org> wrote:
>>>>
>>>>> On Fri, Sep 11, 2009 at 9:52 AM, Darin Fisher <darin at chromium.org>wrote:
>>>>>
>>>>>> I think there are good applications for setting a long-lived lock.  We
>>>>>> can try to make it hard for people to create those locks, but then the end
>>>>>> result will be suboptimal.  They'll still find a way to build them.
>>>>>>
>>>>>
>>>>> One use case is selecting a master instance of an app. I haven't really
>>>>> been following the "global script" thread, but doesn't that address this use
>>>>> case in a more direct way?
>>>>>
>>>>
>>>> No it doesn't.  The global script would only be reachable by related
>>>> browsing contexts (similar to how window.open w/ a name works).  In a
>>>> multi-process browser, you don't want to _require_ script bindings to span
>>>> processes.
>>>>
>>>> That's why I mentioned shared workers.  Because they are isolated and
>>>> communication is via string passing, it is possible for processes in
>>>> unrelated browsing contexts to communicate with the same shared workers.
>>>>
>>>>
>>>>
>>>>>
>>>>> What other use-cases for long-lived locks are there?
>>>>>
>>>>>
>>>> This is a good question.  Most of the use cases I can imagine boil down
>>>> to a master/slave division of labor.
>>>>
>>>> For example, if I write an app that does some batch asynchronous
>>>> processing (many setTimeout calls worth), then I can imagine setting a flag
>>>> across the entire job, so that other instances of my app know not to start
>>>> another such overlapping job until I'm finished.  In this example, I'm
>>>> supposing that storage is modified at each step such that guaranteeing
>>>> storage consistency within the scope of script evaluation is not enough.
>>>>
>>>
>>> What if instead of adding locking, we added a master election mechanism?
>>>  I haven't thought it out super well, but it could be something like this:
>>>  You'd call some function like |window.electMaster(name,
>>> newMasterCallback, messageHandler)|.  The name would allow multiple groups
>>> of master/slaves to exist.  The newMasterCallback would be called any time
>>> that the master changes.  It would be passed a message port if we're a slave
>>> or null if we're the master.  messageHandler would be called for any
>>> messages.  When we're the master, it'll be passed a message port of the
>>> slave so that responses can be sent if desired.
>>>
>>> In the gmail example: when all the windows start up, they call
>>> window.electMaster.  If they're given a message port, then they'll send all
>>> messages to that master.  The master would handle the request and possibly
>>> send a response.  If a window is closed, then the UA will pick one of the
>>> slaves to become the master.  The master would handle all the state and the
>>> slaves would be lighter weight.
>>>
>>> --------------
>>>
>>> There are a couple open questions for something like this.  First of all,
>>> we might want to let windows provide a hint that they'd be a bad master.
>>>  For example, if they expected to be closed fairly soon.  (In the gmail
>>> example, a compose mail window.)
>>>
>>> We might also want to consider allowing windows to opt out of masterhood
>>> with something like |window.yieldMasterhood()|.  This would allow people to
>>> build locks upon this interface which could be good and bad.
>>>
>>> Next, we could consider adding a mechanism for the master to pickle up
>>> some amount of state and pass it on to another master.  For example, maybe
>>> the |window.yieldMasterhood()| function could take a single "state" param
>>> that would be passed into the master via the newMasterCallback function.
>>>
>>> Lastly and most importantly, we need to decide if we think shared workers
>>> are the way all of this should be done.  If so, it seems like none of this
>>> complexity is necessary.  That said, until shared workers are first class
>>> citizens in terms of what APIs they can access (cookies, LocalStorage, etc),
>>> I don't think shared workers are practical for many developers and use
>>> cases.
>>>
>>
>> What about eliminating shared memory (only one context would be allowed
>> access to cookies, localStorage, etc)?  It seems to be working out fine for
>> DOM access and is much, much easier to reason about.
>>
>> - James
>>
>>
>
> It is a good idea.  If we were to start fresh, it'd probably be the ideal
> answer.  We could say that each SharedWorker gets its own slice of
> persistent storage independent from the rest.  But this ship has sailed for
> cookies at least,
>

document.cookies is problematic but considering the many other issues with
this API it's probably not going to be the end of the world to have it be a
touch pricklier.

and database and localstorage are already shipping in UAs.
>

Is it really too late for DB and localStorage?  I'm still trying to get used
to the standards process used here but I thought the idea with UAs
implementing draft specs is that the feedback from the experience can be
used to refine the spec - a few UAs have implemented synchronous access to a
single resource from multiple threads and it appears to be problematic.
 Wouldn't that mean it's a good time to revise the problematic parts?

- James


> -Darin
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.whatwg.org/pipermail/whatwg-whatwg.org/attachments/20090910/bbd1cbed/attachment-0001.htm>
Received on Thursday, 10 September 2009 21:55:45 UTC

This archive was generated by hypermail 2.3.1 : Monday, 13 April 2015 23:08:52 UTC