Re: Does no-store in request imply no-cache?

This is easily solved, then, if this particular type of (mutant?) 
cache/router simply chooses not to do this.

It's still in a position to proxy the non-safe request and get the 
result the same way it got the original content that it cached.  If it 
chooses not to, it may make sense for it to invalidate on request.
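To make the invalidate-on-request idea concrete, here's a minimal sketch of a cache that forwards safe methods normally, but drops its stored entry whenever it sees a non-safe method for that URI.  The names (SimpleCache, forward) are illustrative only, not anything from the spec:

```python
# Sketch of invalidate-on-request, not a complete HTTP cache.
# Safe methods per RFC 7231; everything else is treated as unsafe.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}

class SimpleCache:
    def __init__(self):
        self.store = {}  # effective request URI -> cached response

    def handle(self, method, uri, forward):
        if method in SAFE_METHODS:
            if uri in self.store:
                return self.store[uri]       # cache hit
            response = forward(method, uri)  # proxy upstream and cache
            self.store[uri] = response
            return response
        # Non-safe method: if the cache can't (or won't) see the
        # response, the conservative option is to invalidate on request.
        self.store.pop(uri, None)
        return forward(method, uri)
```

This is also where the cache-denial concern shows up: anyone who can send an unsafe request for a URI can evict it, which is why a MUST-level requirement is questionable.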

But I wouldn't extend that to a MUST-level requirement for all 
intermediaries in the spec.  Especially not one that would introduce a 
cache-denial attack surface.

There have been comments on this list in the past about the 
suitability of intercepting proxies for consideration in the spec, and 
I understood the consensus to be that such devices were aberrant enough 
not to warrant polluting the spec.

There are further scenarios.

For instance, a reverse proxy in front of a cache farm.  The reverse 
proxy could make method-specific decisions about whether to pipe a 
request through an upstream cache, or connect directly to the origin 
server.  In this case, the cache wouldn't even see the non-safe request, 
let alone the response.
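A rough sketch of that dispatch decision, assuming a reverse proxy that splits traffic by method (the function and route names are hypothetical stand-ins for real upstream connections):

```python
# Hypothetical reverse-proxy dispatch: safe methods go through the
# upstream cache farm, non-safe methods connect direct to the origin.
SAFE_METHODS = {"GET", "HEAD"}

def dispatch(method, uri, route_via_cache, route_direct):
    if method in SAFE_METHODS:
        return route_via_cache(method, uri)
    # The cache farm never sees this request *or* its response, so it
    # gets no opportunity to invalidate a stale entry for this URI.
    return route_direct(method, uri)
```

Under this arrangement neither invalidate-on-request nor invalidate-on-response can help the cache, which is why it reads as an engineering decision rather than something the spec can fix.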

I think these fall under the mantle of bad engineering decisions.  At 
the very least, understanding the full consequences of such decisions 
for caching and cache invalidation is vital for it to work.


@Mark - sorry, this isn't OT - we're discussing key reasoning behind a 
suggested MUST-level requirement for cache invalidation on request 
(rather than response).


Regards

Adrien



On 18/10/2010 5:24 p.m., Eric J. Bowman wrote:
> Adrien de Croy wrote:
>> talking about IP and tracerts is a complete red herring.  These
>> agents are the parties in TCP connections.  Sure, the IP packets may
>> go via different routers between the endpoints, but the endpoints are
>> the endpoints.
>>
> Exactly, which is why this is no red herring...
>
> A = user-agent
> B = origin server
> C = cache
>
> The route from A to B passes through C; the route from B to A passes
> through a different path, D, which bypasses C.  User-agent A sends a
> GET request to the origin server B.  The request is a hit on cache C,
> so the response goes from C to A.
>
> In the event of a cache miss it is not A, but C, making the request to
> B -- but only for safe methods, otherwise C is not an endpoint.
>
> The user at A changes the representation and makes a PUT request to B.
> Cache C intercepts this request, and *routes* it to B.  B then sends a
> 200 OK response to A, which does not pass through C.
>
> This is because caches are stand-ins for origin servers, not user-agent
> proxies.  B knows nothing of C, because A made the PUT request.
>
> So, C only knows the status of the response to the PUT if the route
> from B to A is the same as the route from A to B.  When dealing with
> unsafe request methods, intermediaries are not participants, only the
> user-agent and origin server are endpoints.
>
> -Eric

-- 
Adrien de Croy - WinGate Proxy Server - http://www.wingate.com

Received on Monday, 18 October 2010 05:52:50 UTC