Re: the case for multiple entities per URL

Larry Masinter:
>some more steps here ... but ultimately leading to the conclusion
>THEREFORE the protocol must support caching of multiple entities for
>the same URL, in that a proxy may return different fresh entities for
>the same URLs as long as the proxy determines that the request headers
>of the subsequent requests match the appropriate request headers of
>the original request that evoked the original entity.

I agree.  My report on the Paris stuff was not intended to contradict
this requirement.

It merely discussed a new way of talking about these multiple entities
which we want to be cached for a single URI.  Instead of a model in
which these entities are bound directly to the generic resource, it
outlined a model in which they are bound to variant resources which in
turn are bound to the generic resource.  Caching proxies would still
match request headers; the only difference is that in this model, the
proxy would match request headers to choose the appropriate variant
resource, instead of choosing the appropriate entity directly.
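To make the difference concrete, here is a hypothetical sketch of the
two models (the names, data structures, and the header-matching rule
standing in for the Vary mechanism are all illustrative, not from any
draft):

```python
def match(selecting_headers, request_headers):
    # A cached response is usable only if every request header it
    # was originally selected on matches the new request.
    return all(request_headers.get(name) == value
               for name, value in selecting_headers.items())

# Old model: entities bound directly to the generic resource.
# entity_cache[url] = [(selecting_headers, entity), ...]
def choose_entity(entity_cache, url, request_headers):
    for selecting_headers, entity in entity_cache.get(url, []):
        if match(selecting_headers, request_headers):
            return entity
    return None  # miss: forward the request to the origin server

# New model: request headers choose a variant resource; the entity
# returned is whatever is currently bound to that variant.
# variant_cache[url] = [(selecting_headers, variant_id), ...]
# entities[(url, variant_id)] = entity
def choose_variant(variant_cache, entities, url, request_headers):
    for selecting_headers, variant_id in variant_cache.get(url, []):
        if match(selecting_headers, request_headers):
            return entities.get((url, variant_id))
    return None  # miss: forward the request to the origin server
```

In both sketches the header-matching loop is identical; only the
object it selects differs, which is the whole point of the variant
resource model.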

>I think the only difficulty comes when a resource might have several
>(stale) entities associated with it in a given cache and the cache
>receives a new fresh entity, it isn't clear which of the old stale
>entities might be discarded. Personally, I think this is a cache
>optimization issue and not a protocol correctness issue; I can think
>of several heuristics that a cache might employ to do a reasonable job
>in such a case.

We have been through this before: it is an efficiency issue, but most
of us want the protocol to contain a mechanism which helps caches to
be more efficient.  This mechanism could be either a variant-ID or a
Content-Location header.  The `several heuristics' possible if such a
mechanism is lacking are not considered good enough by most of us.
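A hypothetical sketch of why such a mechanism helps (the variant-ID
field and the cache layout are illustrative only): when a fresh
response carries an identifier for the variant it belongs to, the
cache knows exactly which stale entity it supersedes, and need not
guess which of several stale entries to discard.

```python
def store_fresh_entity(cache, url, variant_id, entity):
    # With a variant-ID (or Content-Location), the fresh response
    # names the variant it supersedes, so the cache replaces that
    # one entry and keeps the other variants' entities intact.
    cache.setdefault(url, {})[variant_id] = entity
```

Without the identifier, the cache sees only a URL and a new entity,
and any rule for evicting the right stale entry is a heuristic.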

Received on Wednesday, 15 May 1996 01:27:58 UTC