Re: Comments on draft-ietf-http-v11-spec-rev-03

Jeffrey Mogul:
>
>Koen Holtman wrote:
>
>> - Section 13.10:
>> 
>> This section introduces a new (as far as I can see) requirement:
>> 
>> #  A cache that passes through requests for methods it does not understand
>> #  should invalidate any entities referred to by the Request-URI.
>> 
>> This may seem like a good safety measure on the surface but I think
>> that it is in fact quite damaging.  First, designers of new methods
>> cannot benefit much from the above rule because 1.0 and 2068 caches
>> will not adhere to it.  On the other hand, the new rule introduces a
>> performance penalty for new methods which do not in fact cause any
>> invalidation.  One such method would be M-GET, a GET extended with a
>> mandatory extension.  On the other hand, the performance penalty
>> implied by the new rule makes certain ways of extending the protocol
>> too expensive and thus shortens the lifetime of the 1.x suite.  I want
>> the requirement to be removed.
>
>Dave Kristol wrote:
>    I think I'm the instigator of this change.  While your example
>    seems benign enough, the danger is from methods that change the
>    underlying object, e.g., M-PUT.  The object in the cache would no
>    longer look like the one at the origin server and must be
>    invalidated.  In the absence of a way to tell intervening caches to
>    invalidate their view of the object the proxy cache has to do so by
>    default.
>
>    I suppose a compromise would be for a cache to mark a cached object
>    as "must-revalidate" when it sees an unknown method that it passes
>    along.  Cache experts:  would that work?
>    
>How does
>	mark the cached object as "must-revalidate"
>differ from
>	invalidate the cached object
>
>except that the former propagates the change to outbound caches?
>
>I'm not sure that Koen would view this as a compromise :-)

You are right, I don't.

>Would it work?  Well, the concept of invalidation-based protocols
>is in general not supported by HTTP.  My preference is to err on
>the side of transparency rather than performance, although I agree with
>Koen that the transparency in this case might be somewhat illusory.

I agree that Dave's new rule would add some extra safety against
outdated cache entries, but not absolute safety.  The HTTP caching
system was never designed to offer absolute safety.  The system is
unsafe in several ways: there may be a mesh of proxy caches in which
some caches never see the passing M-PUT; a proxy cache may switch to
gateway mode, thereby escaping any rule that applies to proxies; and
legacy proxy caches won't invalidate anyway.
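
For concreteness, the rule under discussion amounts to something like
the following (a minimal Python sketch with hypothetical names, not
taken from any real implementation):

```python
# Sketch of the draft's proposed rule: a cache that passes through a
# request for a method it does not understand invalidates any cached
# entry for the Request-URI.  KNOWN_METHODS and ProxyCache are
# illustrative names only.

KNOWN_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "TRACE"}

class ProxyCache:
    def __init__(self):
        self.entries = {}              # Request-URI -> cached response

    def store(self, uri, body):
        self.entries[uri] = body

    def forward(self, method, uri):
        # The new rule: unknown method => invalidate the entry for this
        # URI as the request is passed along upstream.
        if method not in KNOWN_METHODS:
            self.entries.pop(uri, None)
        # ... request would be forwarded to the next hop here ...

cache = ProxyCache()
cache.store("/doc", "v1")
cache.forward("M-GET", "/doc")       # M-GET is unknown to this cache
print("/doc" in cache.entries)       # entry gone, even though M-GET is safe
```

This is exactly the cost I object to: the hypothetical M-GET changes
nothing at the origin server, yet the cache entry is thrown away.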

The amount of safety added by the new rule is so small that it does
not outweigh the loss in extensibility.  This is especially true
because the designer of a new method can get the same extra safety
that the new rule would provide -- IF he wants it -- by including a
Content-Location header in the response to the new method (see section
16.3, last paragraph).
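
The opt-in alternative could look something like this (again a Python
sketch with hypothetical names; only the Content-Location mechanism
itself comes from the draft):

```python
# Sketch of header-driven invalidation: the origin server names the
# affected resource in a Content-Location header, and a cache passing
# the response through drops its entry for that URI.  The function and
# variable names are illustrative only.

def invalidate_from_response(cache_entries, response_headers):
    """Drop the cache entry named by the response's Content-Location, if any."""
    target = response_headers.get("Content-Location")
    if target is not None:
        cache_entries.pop(target, None)

entries = {"/doc": "v1", "/other": "v1"}
# Response to some hypothetical new state-changing method (say, an
# M-PUT on /doc) that opts in by naming the resource it modified:
invalidate_from_response(entries, {"Content-Location": "/doc"})
print(sorted(entries))
```

With this approach, only methods that actually change the resource pay
the invalidation cost.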

So, since the safety/transparency case is so weak, I'd rather err on
the side of extensibility.


>But I'm not sure what the problem is; my understanding is that
>the whole point of creating the M-GET method is to prevent
>"proxies that do not understand the method" from forwarding it.

Hmm, as far as I recall, proxies which don't understand the method
_will_ forward it in general.  It is origin servers which are expected
to return an error message.

>I.e., they are supposed to return 501 (Not Implemented) or act
>as a tunnel (i.e., not cache anything).

If they act as a tunnel, they also won't invalidate anything in the
existing cache memory, which was the whole point of the rule.
 
>So any caching proxy that does forward M-GET does "understand" it, and
>isn't covered by the requirement that Koen objects to.
>
>-Jeff

Koen.

Received on Monday, 30 March 1998 10:46:12 UTC