W3C home > Mailing lists > Public > ietf-http-wg@w3.org > April to June 2011

Re: SHOULD-level requirements in p6-caching

From: Poul-Henning Kamp <phk@phk.freebsd.dk>
Date: Fri, 08 Apr 2011 08:47:05 +0000
To: Mark Nottingham <mnot@mnot.net>
cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <4639.1302252425@critter.freebsd.dk>
In message <87AB210C-782E-4475-BD4B-A552A549588E@mnot.net>, Mark Nottingham writes:

>>> In 2.5, 
>>> 
>>>>   A cache that passes through requests with methods it does not
>>>>   understand SHOULD invalidate the effective request URI (Section 4.3

>> First off, what does "not understand" mean here ?
>> 
>> Does that cover a cache which goes "Ohh, POST, I don't do those:
>> pass it through" ?
>
>POST is explicitly covered elsewhere in the section, so there's an 
>overlap here; all caches are expected to do this (and more) for POST.

But that is not what the text says.

It is perfectly possible to write a cache that only "understands"
GET and HEAD, and simply passes everything else upstream.

If you want it to mean "methods not defined in this document which
the cache does not understand", then the text should say that.
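[Editorial note: the kind of cache described above can be sketched as follows. This is an illustrative sketch only, not text from the draft; the function and parameter names are invented, and the invalidation shown is what the SHOULD-level requirement would demand.]

```python
def handle(cache, method, uri, forward_upstream):
    """A cache whose dispatch only "understands" GET and HEAD.

    Under a literal reading of the draft text it is unclear whether
    such a cache "understands" POST at all, and hence whether the
    SHOULD-level invalidation applies to it.
    """
    if method in ("GET", "HEAD"):
        if uri in cache:
            return cache[uri]                 # cache hit
        response = forward_upstream(method, uri)
        cache[uri] = response                 # store for next time
        return response
    # Every other method is passed straight upstream.  The draft's
    # SHOULD would additionally require this invalidation step:
    cache.pop(uri, None)                      # invalidate effective request URI
    return forward_upstream(method, uri)
```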

>> Second: Are we sure this complies with Principle Of Least Astonishment ?
>
>Can you say a bit more here? 

It is not at all obvious to me why a random method always MUST
automatically invalidate a cached copy.   If we want to make this
mandatory, I would prefer to have more and better reasons than
"we'd like to get rid of 'SHOULD'" in the text.

>> Third: do we really want to give script kiddies their own private
>> standards-mandated cache-invalidation button ?
>
>Please read the entire section; this is not new text, and has been 
>present in HTTP for over a decade. In short, there is a mechanism to 
>prevent this kind of attack.

There is ?  What would that be ?

All I have to do is send a "XYZZY / HTTP/1.1" request and the
cache is forced to dump its copy.

As many sites use CMS systems with very systematic URLs, washing
a cache that way, and hammering the backend server in the process,
is a Simple Matter Of Programming.
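[Editorial note: a sketch of the concern above. This is hypothetical, illustrative code; the URL pattern and the XYZZY method are invented, and it only generates the raw request lines rather than sending anything.]

```python
def cache_washing_requests(n_articles):
    """Yield one unknown-method request per predictable CMS URL.

    If a cache invalidates on any method it merely passes through,
    each of these requests forces it to dump a cached copy.
    """
    for i in range(1, n_articles + 1):
        yield ("XYZZY /article/%d HTTP/1.1\r\n"
               "Host: example.org\r\n\r\n" % i)
```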

DDoS attacks are a fact of life, and from a standards point of view
we SHOULD consider clients hostile to cache/server operation
when judging such aspects of the standard.

>This trips people up just as much because they forget to include Vary 
>when they really need to.

Indeed.

>There's been a bit of discussion about defining a new mechanism for 
>refining the cache key; there might be a draft soon. That's not a WG 
>item, however (but it can still be talked about on-list).

Please keep me posted.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
Received on Friday, 8 April 2011 08:47:29 GMT
