Re: On Opaque validators

>  (A)    Suppose we have a peer-entity caching mechanism (like the
> 	one present in Harvest). In such a scheme, a cache is allowed
> 	to look for the missing object in a pool of neighbor caches,
> 	by means of a multicast request, and to choose the best hit
> 	(if available). Then, if multiple (fresh) hits with
> 	different `Cache-validator' values are returned (*), the cache
> 	should have a way to choose the best hit (the newer object).
> 
> I can think of several interpretations of this scenario:
> 
>     (1) the origin server assigned a fresh-until time to the resource,
>     but chose too long a time, and modified it before the time ran
>     out.  This led to two caches having different "fresh" copies of the
>     resource, and one of them isn't the "right" copy.  One could argue
>     that this is a failure, but it's not a failure of the protocol,
>     it's a failure of the server to predict the proper freshness
>     lifetime.
>
>     (2) the origin server has generated several different copies during
[...]
>     (3) the origin server didn't specify a fresh-until value (under my
[...]

(1) and (2) are what I had in mind; (3) is one more argument.


> So I might add the rule that if a cache has a choice between
> two cached responses for the same resource that are both fresh,
> it should use the one with the later Date: header.  OK?

I basically agree: it works, but this way the `Date:' header becomes
(de facto) part of the cache validator. It remains to be established
which solution is cleaner. Moreover, using the `Date:' header field
implies that this header must be cached, though I do not know whether
it must already be cached for some other reason.
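
To make this concrete, a minimal sketch of the rule (in Python; the
dict representation of a cached entry and the function name are my
own assumptions, and `Cache-validator' is just the header name used
in this thread):

    from email.utils import parsedate_to_datetime

    def best_fresh_response(hits):
        # Among several fresh cached responses for the same resource,
        # prefer the one whose Date: header is latest.  This only
        # works if every cache entry retains its Date: header, which
        # is exactly the caching requirement noted above.
        return max(hits, key=lambda h: parsedate_to_datetime(h["Date"]))

    # Two fresh copies of the same object, different validators:
    old = {"Cache-validator": "v1", "Date": "Mon, 08 Jan 1996 10:00:00 GMT"}
    new = {"Cache-validator": "v2", "Date": "Tue, 09 Jan 1996 01:00:00 GMT"}
    assert best_fresh_response([old, new]) is new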


> 
>    (*) the only way to avoid that is to forbid a server from
>     generating a new validator while a fresh copy of the object
>     with an older validator still exists.
> 
> I don't think we want to do this, because although the server cannot
> force all of those other copies to disappear, it should certainly be
> able to prevent new copies from being created with excessively short
> lifetimes.  And we also have no way for the server to discover when
> that last fresh copy has been given out, except if it simply does not
> give out any more copies during the original freshness lifetime.

That is what I think too.


> The client could implement the rule "don't accept a non-firsthand
> response that has a Date: older than the response you already have";
> if it receives one of these, it could retry with a "Cache-control:
> revalidate" to force a check with the origin-server.
> 
> If you think this is too much overhead, would it be sufficient
> for the protocol to include a Cache-control: order-by-date
> (sent by the origin server in a response) to force it to happen
> for those few resources where it matters? 

Again, the `partially opaque validator' seems to handle this in
a cleaner way.
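
For what it is worth, the same kind of sketch for the client rule
quoted above (again, the names are hypothetical; I am only restating
the proposal in code):

    from email.utils import parsedate_to_datetime

    def accept_cached_response(held, offered, firsthand):
        # Reject a non-firsthand response whose Date: is older than
        # the copy we already hold, and ask the caller to retry with
        # `Cache-control: revalidate' to force an origin-server check.
        if firsthand:
            return True, None
        if (parsedate_to_datetime(offered["Date"])
                >= parsedate_to_datetime(held["Date"])):
            return True, None
        return False, {"Cache-control": "revalidate"}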


	Lorenzo.
