- From: Larry Masinter <masinter@parc.xerox.com>
- Date: Tue, 29 Oct 1996 10:40:47 PST
- To: gjw@wnetc.com
- CC: ejw@kleber.ics.uci.edu, w3c-dist-auth@w3.org
> I like the idea of placing the burden on the update method (this sounds a
> lot like write-through) but I don't see how this method can guarantee that
> a returned entity is up to date. The most obvious problem is that an update
> can traverse an entirely different set of caches than a previous GET, thus
> allowing for the possibility that some caches would not yet be aware that
> their cache entries for the resource are stale. Of course, everything works
> fine as long as both paths hit the same cache at some point.

There's both a previous GET and a subsequent one: GET a / modify b / GET a,
where the first & second GET use the same cache (a) but the modify uses a
different cache (b). In such a situation, the cache used by the GETs has to
know to revalidate for the second one: it either has to revalidate every
time or else somehow be notified. I can't think of any way around this
situation.

I think this could be handled by adding a requirement on HTTP clients (or
intermediate proxies in a chain) that switch between proxies:

If the client (proxy) has performed an UPDATE on a given URL and then
switches to a different (subsequent) proxy, the client should include in
each request to any potentially affected URL a 'max-age' which is less
than the time since the last UPDATE.

The simplest way to implement this is just to use 'the time the proxy
server changed', and the next simplest is just to remember, for each host,
the time of the last update method to that host. Of course, finer-grained
information can be kept.

This still puts the implementation burden on 'caches that can switch proxy
servers', which is probably where it belongs.

Larry
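As a minimal sketch of the bookkeeping this requirement implies (assuming a
Python client built on the third-party requests library, with PUT standing
in for the UPDATE method under discussion; the class, host, and proxy names
are all hypothetical):

```python
import time
from urllib.parse import urlsplit

import requests  # assumed HTTP library; not part of the original proposal


class ProxySwitchingClient:
    """Client that may switch proxies between requests.

    After an update to a host, every GET for that host carries a
    'max-age' smaller than the time since that update, so a proxy
    that never saw the update must still revalidate its entry.
    """

    def __init__(self):
        self.last_update = {}             # host -> time of last update
        self.session = requests.Session()

    def _proxies(self, proxy):
        return {"http": proxy, "https": proxy}

    def update(self, url, data, proxy):
        # PUT stands in for UPDATE; record when this host was modified.
        resp = self.session.put(url, data=data, proxies=self._proxies(proxy))
        self.last_update[urlsplit(url).hostname] = time.time()
        return resp

    def get(self, url, proxy):
        headers = {}
        since = self.last_update.get(urlsplit(url).hostname)
        if since is not None:
            # max-age strictly less than the time since the last update:
            # any entry cached before that update is too old to be served.
            age_cap = max(0, int(time.time() - since) - 1)
            headers["Cache-Control"] = f"max-age={age_cap}"
        return self.session.get(url, headers=headers,
                                proxies=self._proxies(proxy))


# Usage matching the scenario above: GETs through cache a, modify through
# cache b (hypothetical proxy addresses).
client = ProxySwitchingClient()
client.get("http://example.com/doc", proxy="http://proxy-a:3128")
client.update("http://example.com/doc", data=b"new", proxy="http://proxy-b:3128")
# The second GET through proxy a now carries a small max-age, so that
# proxy must revalidate rather than serve its stale entry.
client.get("http://example.com/doc", proxy="http://proxy-a:3128")
```

Keying the timestamp on the host matches the 'next simplest' option above;
the finer-grained variant would key on individual URLs instead.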
Received on Tuesday, 29 October 1996 18:53:52 UTC