
Re: policy-uri is slow

From: Aryeh Gregor <Simetrical+w3c@gmail.com>
Date: Thu, 21 Apr 2011 15:11:08 -0400
Message-ID: <BANLkTi=+8ay2qMaeV_k1V2XSWPTOVEjTTw@mail.gmail.com>
To: Mark Nottingham <mnot@mnot.net>
Cc: Adam Barth <w3c@adambarth.com>, public-web-security@w3.org

On Mon, Apr 18, 2011 at 11:38 PM, Mark Nottingham <mnot@mnot.net> wrote:
> You mean congestion window?

Probably.  I haven't studied how TCP works in depth, only superficially.

> It's going to take a fair amount of time to get larger congestion windows rolled out, and that work is still somewhat controversial. Adding bytes to every response assuming that it will just work out -- in all cases (e.g., mobile) -- isn't a good assumption to make.

Agreed, but allowing authors to easily add a full round-trip delay to
every hit is also a bad idea.
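For concreteness, the two delivery mechanisms being weighed look roughly like this (header name and directive syntax are my reading of the current draft, so treat them as illustrative rather than normative):

```
# Inline policy: shipped with every response -- no extra fetch,
# but the full policy text is repeated on every hit.
X-Content-Security-Policy: allow 'self'; img-src *

# Out-of-line policy: a short header, but the browser must fetch
# the policy resource before it can enforce anything.
X-Content-Security-Policy: policy-uri /csp-policy
```

The second form is where the extra round trip comes from: until /csp-policy is retrieved, the browser doesn't know what it's allowed to load.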

> You're basing that on common browser cache behaviour in 2007. Lots has changed and will change.

Correct.  It's not great, but it's the best data I'm aware of.

> Rather than arguing the details in circles, it seems that the crux here is a) how common CSP (and other policy that leverages CSP) will be, and b) how likely it is that the CSP mechanism's vocabulary will grow over time. If we can get a sense of the answers to those questions, the right thing to do should become clear.
> If it's to be commonly deployed (e.g., as much as favicon), it's worth considering pre-emptively accessing a well-known URI as the initial request goes out. If it's going to be uncommon (i.e., most sites won't have CSP), it may not be worthwhile, but having the policy-uri mechanism (or similar) is still worthwhile, as some Web sites may want to use it, especially as the vocabulary grows.

I agree that if it's as commonly deployed as favicon, then we can use
a well-known URL and that would solve the problem.  I don't think this
is a plausible scenario: favicons are readily visible and so are of
interest to all authors, while CSP is invisible and solves problems
that most authors don't understand or care much about (i.e.,
security).  Even if wide deployment is plausible, it's unwise to
assume it.

I agree that if the CSP vocabulary grows, some way of avoiding
repeated inline data would be valuable.  It would be valuable even
now, for sites that want to whitelist many domains for whatever
reason.  However, I don't agree that many sites will actually want an
extra round-trip delay on cold cache to save bytes on repeated
requests, although a few might.  I also don't agree that it's a good
idea to add a feature that can be easily misused by clueless authors
to add an extra round-trip delay on *every* request.

> That seems really convoluted and non-optimal. If it's a normal URI with a normal response, I can reuse normal HTTP caching for it (e.g., reverse proxies, ISP proxies, mobile proxies, browser caches etc.). This mechanism reinvents the wheel.

That mechanism is complicated, yes, and it would require fragmenting
HTTP caches.  (You could use normal HTTP caching, but you'd have to
add an extra Vary header, right?)  It would work well for some
use-cases, but perhaps not for enough of them.
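To make the Vary concern concrete, I'm picturing something like the following exchange; the header names here are entirely made up for illustration, since the exact mechanism isn't pinned down:

```
GET /page HTTP/1.1
Host: example.com
X-Policy-Version: 3        # hypothetical "I already have policy v3" hint

HTTP/1.1 200 OK
Content-Type: text/html
Vary: X-Policy-Version     # the same URL now caches differently
                           # depending on what policy the client holds
```

Once responses for the same URL differ by client policy state, every intermediary cache has to keep multiple variants, which is the fragmentation I mean.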

So, an ideal solution would be one that

* Does not add a round-trip to any request.
* Does not require sending repeated data inline.
* Does not interfere with normal HTTP caching.
* Is not complicated or hard to understand or set up.
* Does not create many unneeded HTTP requests.

policy-uri fails the first point, inline policies fail the second,
my suggestion fails the next two, and preemptive fetching fails the
last.  I suspect that no mechanism can satisfy all five points at
once.  Even so, I don't think policy-uri as currently specced strikes
a good tradeoff between the first two points.

Taking a step back: the problem with policy-uri is that the browser
can't start fetching resources until the policy is retrieved, and
that normally means it can't start layout (because it can't fetch
styles or scripts).  Is it really necessary that the browser be prohibited from
speculatively requesting prohibited files?  In other words, could we
say that if policy-uri is used, the browser is still allowed to fetch
files and cache them before it receives the policy, but it isn't
allowed to actually use them until it knows it's allowed to?  This
would eliminate the extra round-trip latency entirely if the policy
can be fetched faster than the last script or style, which will be
true in most cases.  Actually, it almost satisfies all five points I
mentioned above -- it will only add a full round-trip if all essential
resources are already cached but the policy is not.
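If it helps, here is how I picture that ordering; a minimal Python sketch, with all names hypothetical (this is the shape of the idea, not real browser internals):

```python
# Rough sketch of the proposed load order (all names are illustrative,
# not from any spec): start every fetch immediately, cache the bytes,
# and only *use* a resource once the policy has arrived and permits it.

def load_page(resource_urls, fetch, policy_allows):
    # Phase 1: speculative fetch -- request everything up front and
    # cache the raw bytes.  Nothing is executed or rendered yet, so
    # layout isn't blocked waiting on the policy round-trip.
    cache = {url: fetch(url) for url in resource_urls}

    # Phase 2: once the policy is known, gate actual use on it.
    # Disallowed resources were still *fetched* (the information-leak
    # caveat below), but they are never executed or rendered.
    return {url: body for url, body in cache.items() if policy_allows(url)}

# Toy usage: a policy that only allows same-origin resources.
policy_allows = lambda url: url.startswith("https://example.com/")
fetch = lambda url: f"<bytes of {url}>"   # stand-in for the network

used = load_page(
    ["https://example.com/app.js", "https://evil.example/track.js"],
    fetch,
    policy_allows,
)
# Only the same-origin script ends up being used.
```

The point of the sketch is that the policy fetch and the resource fetches overlap instead of being serialized, so the round trip only costs anything when the policy is the last thing to arrive.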

Of course, this would mean that policy-uri will not reliably stop
information leaks, but it will still stop content modification.  Is
this a reasonable direction to consider?
Received on Thursday, 21 April 2011 19:11:56 UTC
