
cache invalidation and "denial of service" [was: WGLC #363: rare cases]

From: Mark Nottingham <mnot@mnot.net>
Date: Tue, 3 Jul 2012 12:16:53 +1000
Cc: Julian Reschke <julian.reschke@gmx.de>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <9A5F78F1-AED1-4EFA-9185-0CF76126D362@mnot.net>
To: John Sullivan <jsullivan@velocix.com>
I see what you're saying. I wouldn't object to removing that sentence. Would anybody?


On 27/06/2012, at 10:00 PM, John Sullivan wrote:

> Mark Nottingham wrote:
>> Those are reasons not to generate a LM, but they aren't specific to HTTP/1.0 -- agreed?
> Sure.
>> I can see dropping the last sentence and qualifying the remaining SHOULD with these sorts of reasons, but not retaining it.
> Absolutely - explain, not ignore or gloss over.
> There's actually another bit which does much the same thing that
> I've got on my list of quibbles:
> RFC 2616 S13.10
>   In order to prevent denial of service attacks, an invalidation based
>   on the URI in a Location or Content-Location header MUST only be
>   performed if the host part is the same as in the Request-URI.
> httpbis-p6-cache S2.6
>   However, a cache MUST NOT invalidate a URI from a Location or
>   Content-Location response header field if the host part of that URI
>   differs from the host part in the effective request URI (Section 5.5
>   of [Part1]).  This helps prevent denial of service attacks.
> It's never fully explained how such a denial of service attack would
> manifest, nor how serious it could be. The immediate possibility is
> being able to effectively clear a cache causing more load on the
> origin than usual, or more transit from the cache than usual, but I
> have difficulty seeing how that is a serious concern:
> This basically requires a malicious origin to work. If the cache is
> inside the control of the origin, then presumably the origin isn't
> malicious.
> If the cache is outside the control of the origin, then if the target
> is a different origin it presumably has to be able to deal with
> a reasonable level of uncached requests hitting it anyway. If the
> target is the cache itself then (without the vhost check) we can
> clear at most 2 unrelated URLs per incoming client request. But a
> client could clear those URLs anyway by sending max-age=0 and dropping
> the connection after the first body byte. This will cause many
> implementations to drop the previously cached response and cease
> storing the new one too.
> So I don't see how malicious use of Location/Content-Location is
> much worse than other techniques, and it's arguably harder to pull
> off than a purely client driven attack.
> I'm not suggesting dropping the vhost check here, it seems like a
> reasonable idea, but the justification seems overblown.
> John
> -- 
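[Editor's note: the host-matching rule John quotes from httpbis-p6-cache S2.6 can be sketched in a few lines. This is a hypothetical illustration, not code from the thread or from any cache implementation; the function names and the dict-backed cache are invented for the example, and the "host part" comparison here is a simple case-insensitive match on the URI authority.]

```python
# Sketch of the S2.6 rule: a cache may invalidate the URI found in a
# Location or Content-Location response header field only if its host
# part matches the host part of the effective request URI.
from urllib.parse import urlsplit


def may_invalidate(effective_request_uri: str, header_uri: str) -> bool:
    """Return True if the host parts of the two URIs match
    (case-insensitive comparison of the authority component)."""
    req_host = urlsplit(effective_request_uri).netloc.lower()
    hdr_host = urlsplit(header_uri).netloc.lower()
    return req_host == hdr_host


def invalidate_from_response(cache: dict, effective_request_uri: str,
                             response_headers: dict) -> None:
    """After an unsafe request, drop stored responses for the URIs named
    in Location / Content-Location -- but only same-host ones."""
    for name in ("Location", "Content-Location"):
        uri = response_headers.get(name)
        if uri and may_invalidate(effective_request_uri, uri):
            cache.pop(uri, None)  # discard any stored response for that URI
```

Under this check, a response on `http://example.com/old` can invalidate `http://example.com/new`, but a `Location: http://other.example/x` header is ignored -- which is the at-most-two-unrelated-URLs-per-request bound John is reasoning about above.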

Mark Nottingham   http://www.mnot.net/
Received on Tuesday, 3 July 2012 02:17:19 UTC
