
[whatwg] Worker feedback

From: Drew Wilson <atwilson@google.com>
Date: Tue, 31 Mar 2009 21:57:21 -0700
Message-ID: <f965ae410903312157h441f443bre4dbfd17e2eac9f4@mail.gmail.com>
On Tue, Mar 31, 2009 at 6:25 PM, Robert O'Callahan <robert at ocallahan.org>wrote:
>
>
> We know for sure it's possible to write scripts with racy behaviour, so the
> question is whether this ever occurs in the wild. You're claiming it does
> not, and I'm questioning whether you really have that data.
>

I'm not claiming it *never* occurs, because in the vasty depths of the
internet I suspect *anything* can be found. Also, my rhetorical powers
aren't up to the task of constructing a negative proof :)


> We don't know how much (if any) performance must be sacrificed, because
> no-one's tried to implement parallel cookie access with serializability
> guarantees. So I don't think we can say what the correct tradeoff is.
>

The spec as proposed states that script that accesses cookies cannot operate
in parallel with network access on those same domains. The performance
impact of something like this is pretty clear, IMO - we don't need to
implement and measure it to know that it exists and that in some situations
it could be significant.
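To make the cost concrete, here's a minimal Node.js sketch of the serialization the proposed spec implies: script that touches cookies would hold a per-domain lock, so network activity on that domain queues behind it. `Lock`, `cookieLock`, and the task bodies are illustrative stand-ins I've made up, not spec text or actual browser internals.

```javascript
// Hypothetical per-domain lock serializing cookie access and network I/O.
class Lock {
  constructor() { this.tail = Promise.resolve(); }
  // Run `fn` only after all previously queued tasks have finished.
  withLock(fn) {
    const run = this.tail.then(fn);
    this.tail = run.catch(() => {}); // keep the chain alive on failure
    return run;
  }
}

const cookieLock = new Lock(); // one lock per domain in the proposed model
const order = [];

// Script reads document.cookie: it holds the lock for its whole task.
cookieLock.withLock(async () => {
  order.push("script: read cookies");
  await new Promise((r) => setTimeout(r, 10)); // long-running script
  order.push("script: done");
});

// A network request for the same domain must wait before it can attach or
// record cookies -- this wait is exactly the lost parallelism.
const done = cookieLock.withLock(async () => {
  order.push("network: send request");
});
```

Under this model the network task cannot start until the script task finishes, so every cookie-touching script stalls same-domain network traffic for its full duration.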


> You mean IE and Chrome's implementation, I presume, since Firefox and
> Safari do not allow cookies to be modified during script execution AFAIK.


I think the old spec language captured the intent quite well -
document.cookie is a snapshot of an inherently racy state, which is the set
of cookies that would be sent with a network call at that precise instant.
Due to varying browser implementations, that state may be less racy on some
browsers than on others, but the general model was one without guarantees.
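The snapshot model can be sketched in a few lines of Node.js. Here `cookieJar`, `readCookieSnapshot`, and the Set-Cookie step are hypothetical stand-ins for browser internals; a real page would just read `document.cookie` twice.

```javascript
// Hypothetical cookie store for one domain.
const cookieJar = new Map([["session", "abc"]]);

// Snapshot: serialize the jar exactly as it would go on the wire right now.
function readCookieSnapshot(jar) {
  return [...jar].map(([k, v]) => `${k}=${v}`).join("; ");
}

const first = readCookieSnapshot(cookieJar); // "session=abc"

// Meanwhile a network response carrying Set-Cookie lands (sequential here
// only because this sketch is single-threaded; in a browser it races).
cookieJar.set("tracking", "xyz");

const second = readCookieSnapshot(cookieJar); // "session=abc; tracking=xyz"
// Two reads of "the same" cookie string differ: no guarantee was ever made.
```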

I understand the philosophy behind serializing access to shared state, and I
agree with it in general. But I think we need to make an exception in the
case of document.cookie based on current usage and expected performance
impact (since it impacts our ability to parallelize network access and
script execution).

In this case, the burden of proof has to fall on those trying to change the
spec - I think we need a compelling real-world argument why we should be
making our browsers slower. The pragmatic part of my brain suggests that
we're trying to solve a problem that exists in theory, but which doesn't
actually happen in practice.

Anyhow, at this point I think we're just going around in circles about this
- I'm not sure that either of us is going to convince the other, so I'll
shut up now and let others have the last word :)

-atw
Received on Tuesday, 31 March 2009 21:57:21 GMT
