
Ordered 'opaque' validators

From: Shel Kaphan <sjk@amazon.com>
Date: Sat, 3 Feb 1996 23:36:27 -0800
Message-Id: <199602040736.XAA26453@bert.amazon.com>
To: "David W. Morris" <dwm@shell.portal.com>
Cc: HTTP Caching Subgroup <http-caching@pa.dec.com>
David W. Morris writes:
 > In prior discussions of opaque validators, someone suggested that
 > opaque validators should have an order property for the case where
 > a client is utilizing multiple caches which may not be synchronized.
 > After discussion in today's subgroup meeting, Jeff Mogul suggested
 > that the Date: header value could be used in conjunction with
 > the validator to establish the order.
 > On the surface, other than the possibility that the Date: would
 > not have sufficient granularity and hence cause an unproductive
 > IF-VALID RTT, this seems OK. Does this seem reasonable?
 > Dave Morris

To use Date: in this way, wouldn't it also be necessary for it to be
passed somehow as part of every conditional request?  I.e. a
conditional GET would also need to pass the date associated with the
validator in order for an intermediate proxy to be able to do an order
comparison if there was a mismatch.  This seems to be a bit
cumbersome -- we've reinvented if-modified-since!
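To make the granularity concern concrete, here is a small sketch (in Python, with hypothetical names -- nothing here is from any HTTP draft) of ordering two cached versions by their Date: headers, as proposed. When two different validators carry the same one-second Date:, the order is indeterminate and a cache would be forced into the unproductive IF-VALID round trip Dave mentions:

```python
from email.utils import parsedate_to_datetime  # parses RFC-822-style Date: values

def compare_versions(date_a, validator_a, date_b, validator_b):
    """Order two cached versions by their Date: headers.

    Returns "same", "a_newer", "b_newer", or "unknown".  This is an
    illustrative helper only: equal Dates with different validators
    cannot be ordered, forcing a revalidation round trip.
    """
    if validator_a == validator_b:
        return "same"
    ta = parsedate_to_datetime(date_a)
    tb = parsedate_to_datetime(date_b)
    if ta > tb:
        return "a_newer"
    if tb > ta:
        return "b_newer"
    # One-second granularity: two versions in the same second
    # are unorderable, so the cache must revalidate upstream.
    return "unknown"
```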

Another approach I tried (somewhat unsuccessfully) to put forward at
the meeting is to leave this optimization (and that's all that it is)
entirely up to a cache implementation.  I believe I have described how
to do this before.  A cache can get pretty good coverage if it keeps a
sequence of validators associated with a changing resource.  If a
validator in a conditional request ever matches a no-longer-used
validator for a resource, and the cache contains a "fresh" version of
the resource, it can return that fresh version.  This only misses when
the cache has never seen the request's validator before -- but notice,
it only misses the first time!  The benefit is that no additional
protocol is needed, and a cache implementation is free not to do this
optimization at all.
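The scheme above can be sketched as follows (Python, with illustrative names -- `CacheEntry` and `handle_conditional` are my own, not from any draft). The cache keeps every validator it has ever seen for a resource; a conditional request matching an old validator can be answered with the fresh copy, and only a never-seen validator forces a forward upstream:

```python
class CacheEntry:
    """Cached copy of one resource, plus a history of past validators."""

    def __init__(self, validator, body):
        self.validator = validator   # validator of the fresh copy
        self.body = body
        self.history = {validator}   # every validator ever seen for this resource

    def update(self, validator, body):
        # A newer version arrived; the old validator stays in the history.
        self.validator = validator
        self.body = body
        self.history.add(validator)

    def handle_conditional(self, request_validator):
        """Answer a conditional GET.

        Matches the fresh validator  -> client is current (304).
        Matches an older validator   -> fresh copy is newer, serve it (200).
        Never-seen validator         -> forward upstream; by construction
                                        this can only happen once per validator.
        """
        if request_validator == self.validator:
            return (304, None)
        if request_validator in self.history:
            return (200, self.body)
        return ("MISS", None)
```

For example, after the resource changes from validator "v1" to "v2", a request conditional on "v1" is served the fresh "v2" body with no upstream round trip; only a validator the cache has never seen (say, "v0" handed out before the cache existed) causes the one-time miss.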

To make this reliable, one constraint we would need to place on
validators is that a given validator must never occur for two
different versions of the same object.  Is this a reasonable
constraint?

Received on Monday, 5 February 1996 00:27:56 UTC
