- From: Larry Masinter <masinter@parc.xerox.com>
- Date: Tue, 13 Feb 1996 23:02:15 PST
- To: mogul@pa.dec.com
- Cc: koen@win.tue.nl, http-caching@pa.dec.com
> But a simpler approach would be to simply force reloading
> in the rare cases where this kind of change is made at the
> origin server. E.g., if "service" (the server and/or its
> files) is modified to change the variant-selection algorithm
> then all the cache validators for resources with variants
> have to be changed as well. Either that, or use a different
> (non-overlapping) set of variant IDs, so that the cached
> copies associated with old variant-IDs would become stale
> after 1000 seconds (or whatever). Again, this requires
> no additional protocol mechanism, and this approach requires
> no extra implementation complexity in the caches.

Doesn't this have some implication for what you can choose as a
'cache validator' and expect it to work?

While at the caching meeting I did say that I preferred a
(request/response) model rather than an (entity) model for caching, I
think a hybrid is probably what we'll wind up with; that is, there is
a set of information that a cache must maintain for each URI which
contains, among other things, a set of entities, information about
those entities, and the origin server's declarations about freshness.

It's a simplification to say that as soon as any one piece of
information associated with a URI becomes stale, all of the rest of
the information should become stale too. If we make that
simplification, 'freshness' applies to "the URI's information" in
general, rather than to any particular piece of it.
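To make the hybrid model concrete, here is a minimal sketch (in
Python, with hypothetical names; nothing here is mandated by the
protocol) of the kind of per-URI record a cache might keep, with both
per-entity freshness and the simplified whole-record freshness:

    from dataclasses import dataclass, field
    from time import time


    @dataclass
    class CachedEntity:
        """One stored variant of a resource, with its own validator."""
        variant_id: str          # e.g. the variant-ID discussed above
        validator: str           # opaque cache validator from the origin server
        body: bytes
        stored_at: float = field(default_factory=time)


    @dataclass
    class URIRecord:
        """Everything a cache keeps for a single URI in the hybrid model."""
        uri: str
        entities: dict = field(default_factory=dict)   # variant_id -> CachedEntity
        freshness_lifetime: float = 1000.0             # seconds, per the example above

        def entity_is_fresh(self, variant_id, now=None):
            # Per-entity freshness: only the requested variant is checked.
            now = time() if now is None else now
            entity = self.entities.get(variant_id)
            return (entity is not None
                    and (now - entity.stored_at) < self.freshness_lifetime)

        def record_is_fresh(self, now=None):
            # The simplification discussed above: "the URI's information"
            # is fresh only while every stored piece is still fresh, so
            # one stale piece makes the whole record stale.
            now = time() if now is None else now
            return bool(self.entities) and all(
                self.entity_is_fresh(vid, now) for vid in self.entities
            )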
Received on Wednesday, 14 February 1996 07:17:53 UTC