
Re: SHOULD-level requirements in p6-caching

From: Mark Nottingham <mnot@mnot.net>
Date: Mon, 2 May 2011 12:42:35 +1000
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <5182A855-6DBE-4E7C-A440-EA2226891D3E@mnot.net>
To: Poul-Henning Kamp <phk@phk.freebsd.dk>

On 08/04/2011, at 6:47 PM, Poul-Henning Kamp wrote:

> In message <87AB210C-782E-4475-BD4B-A552A549588E@mnot.net>, Mark Nottingham writes:
> 
>>>> In 2.5, 
>>>> 
>>>>>  A cache that passes through requests with methods it does not
>>>>>  understand SHOULD invalidate the effective request URI (Section 4.3
> 
>>> First off, what does "not understand" mean here ?
>>> 
>>> Does that cover a cache which goes "Ohh, POST, I don't do those:
>>> pass it through" ?
>> 
>> POST is explicitly covered elsewhere in the section, so there's an 
>> overlap here; all caches are expected to do this (and more) for POST.
> 
> But that is not what the text says.
> 
> It is perfectly possible to write a cache that only "understands"
> GET and HEAD, and just passes everything else upstream.
> 
> If you want it to mean "methods not defined in this document which
> the cache does not understand", then the text should say that.

How about:

  A cache SHOULD invalidate the effective request URI when receiving requests with methods other than PUT, DELETE or POST.

(delta discussion on SHOULD vs. MUST here, and issue #235).
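To make the proposed rule concrete, here is a minimal sketch of how a cache might apply it. The cache model (a dict keyed by effective request URI) and all names are hypothetical, not from the draft; PUT, DELETE, and POST are assumed to be handled by their own requirements elsewhere in p6, and safe methods (GET, HEAD) are assumed exempt since they cannot modify state:

```python
# Hypothetical cache model illustrating the proposed SHOULD-level rule.
EXPLICITLY_COVERED = {"PUT", "DELETE", "POST"}  # have their own requirements in p6
SAFE_METHODS = {"GET", "HEAD"}  # assumed exempt: safe methods don't modify state

class SimpleCache:
    def __init__(self):
        self.store = {}  # effective request URI -> cached response

    def on_request(self, method, effective_request_uri):
        """Apply the proposed invalidation rule to an incoming request."""
        if method in SAFE_METHODS:
            return  # no invalidation for safe methods
        if method not in EXPLICITLY_COVERED:
            # Any other method (including unknown extension methods like
            # XYZZY) invalidates the stored entry for this URI.
            self.store.pop(effective_request_uri, None)
```

Note this sketch invalidates on the bare request; the #235 discussion below is about tightening exactly that.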

[...]
>>> Third: do we really want to give script kiddies their own private
>>> standards-mandated cache-invalidation button ?
>> 
>> Please read the entire section; this is not new text, and has been 
>> present in HTTP for over a decade. In short, there is a mechanism to 
>> prevent this kind of attack.
> 
> There is ?  What would that be ?
> 
> All I have to do is send a "XYZZY / HTTP/1.1" request and the
> cache is forced to dump its copy.
> 
> As many sites use CMS systems with very systematic URLs, washing
> a cache that way, and bothering the backend server a lot is a
> Simple Matter Of Programming.
> 
> DDoS attacks are a fact of life, and from a standards point of view
> we SHOULD consider clients hostile to cache/server-operation
> when judging such aspects of the standard.

In proxy and browser caches, this kind of DoS is already partially protected against:

   However, a cache MUST NOT invalidate a URI from a Location or
   Content-Location header field if the host part of that URI differs
   from the host part in the effective request URI (Section 4.3 of
   [Part1]).  This helps prevent denial of service attacks.
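The quoted host check can be sketched as a small guard function; the helper name is hypothetical, and comparing only the host part follows the quoted text:

```python
# Sketch of the quoted host check (helper name is hypothetical):
# a cache may only invalidate a URI from a Location or Content-Location
# header field when its host matches the host of the effective request URI.
from urllib.parse import urlsplit

def may_invalidate(effective_request_uri, location_uri):
    """Return True only when both URIs share the same host part."""
    return urlsplit(effective_request_uri).hostname == urlsplit(location_uri).hostname
```

So a response for http://example.com/a may invalidate http://example.com/b, but not http://attacker.example/b.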

Furthermore, the resolution to <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/235> will make this protection stronger -- not only for those implementations, but also for gateway caches -- as it will require the cache to only invalidate upon a successful response from the origin, rather than on any client request.
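The effect of that resolution can be sketched as a simple predicate (names and status ranges are illustrative, not normative): invalidation is deferred until the origin answers the unsafe request with a non-error response, so an unauthenticated "XYZZY /" probe that the origin rejects cannot purge cache entries by itself.

```python
# Sketch of the #235-style protection (illustrative, not normative):
# invalidate only when an unsafe request gets a non-error response
# from the origin, rather than on receipt of the request alone.
def should_invalidate(request_method, origin_status):
    unsafe = request_method not in ("GET", "HEAD")
    non_error = 200 <= origin_status < 400
    return unsafe and non_error
```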

My reading is that this particular requirement was made only SHOULD-level to allow future methods to specify that they don't invalidate. If that's still the case, we should say so.

Likewise, if we want to allow caches to NOT invalidate for such requests -- i.e., if we think that #235 doesn't offer enough protection against DoS -- we can add that condition to the list too. My interest here is in making the SHOULDs describe their exceptions, rather than leaving them as open doors for interop failure.

Cheers,

--
Mark Nottingham   http://www.mnot.net/
Received on Monday, 2 May 2011 02:43:02 GMT
