- From: Paul Leach <paulle@microsoft.com>
- Date: Fri, 5 Jan 96 16:51:08 PST
- To: mogul@pa.dec.com
- Cc: http-caching@pa.dec.com
Jeff said:
]
] I think I'm beginning to understand, but I need a little more help.
] I can see two ways to think of caching the result of a POST:
]
] (1) The cache key consists of the URI (let's ignore content
] negotiation for now). A cache storing the response to a POST must
] also store the entity body from the corresponding request. It may
] return the cached response to a subsequent request only if the new
] request has the same entity body. In other words, the cache can
] hold at most one cached POST-response for a given URI.
]
] or
]
] (2) The cache key consists of the (URI, POST-request-body) tuple.
] A cache can store multiple POST-responses for a given URI, by
] disambiguating them using the request bodies.
]
] I suppose it doesn't really matter, from a protocol point of view,
] which of these cache-lookup approaches the cache takes.

I think I agree, but there might be a case that is being overlooked.

There are two cases of caching for POST that I can identify:

1. The cache key is (Request-URI, Request-entity-body)
2. The cache key is (Location-URI) from the response (modulo the spoofing issue).

In both cases, there are Expires: and Cache-Control: max-age headers from the response kept with the cached data.

In case 1, I agree with the suggestion for Cache-Control: no-side-effects being required in the response in order to make the entry in the cache.

In case 1, subsequent POSTs with the same Request-URI and Request-entity-body can be satisfied from the cache, if the cached entity is still fresh. A GET with that Request-URI can *not* be served from the cache entry with this key. (Although it could be served from an entry created via a prior GET of this Request-URI.)

In case 2, subsequent GETs with Request-URI the same as the Location-URI can be served from the cache (if the cached entity is still fresh). Subsequent POSTs can not be served from the entry with this key.

]
] In either case, caching follows the same rules as for GET responses:
] the server provides a fresh-until time, and the cache must validate
] non-fresh entries with the origin server.
]
] Validation could be done using a conditional POST. A conditional POST
] has the same form as a normal POST (including the entire entity body),
] but includes the cache-validator returned by the server in its earlier
] response. The meaning of a conditional POST is "look at the URI,
] entity body, and validator in this request: if you would give me
] the exact same response as you gave before, including the same
] validator, then just tell me '304 Not Modified'; otherwise, do a
] normal POST for me."

This seems right for case 1. Case 2 doesn't apply to POSTs.

]
] Does this make sense? It seems like this is what Shel and Paul
] are trying to tell me, anyway, and I think it would work.
]
] Note that this still follows my proposed rule that write-through
] is mandatory, in the following sense: if the server has granted
] permission to cache a value (whether from a POST or a GET) for
] some period, using the fresh-until header, then it's giving up
] any hope of imposing cache consistency for that duration.
] If the server does not grant this permission, then every POST
] request causes an interaction with the origin server (although
] the response entity body may not have to be transmitted over the
] entire response chain).

I agree -- while we could invent a write-back protocol, I think we should defer until 1.2 at least.
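The two POST cache keys and the conditional POST discussed above can be made concrete with a small Python sketch. This is illustrative only: the class and function names (PostCache, do_conditional_post, and so on) are hypothetical, and "no-side-effects" is the directive under discussion here, not an adopted one. It shows which requests may be answered from which kind of entry.

    # Illustrative sketch only; hypothetical names, nothing normative.
    import time
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class CachedResponse:
        body: bytes
        validator: str                  # opaque cache-validator from the server
        fresh_until: float              # derived from Expires / max-age
        location: Optional[str] = None  # Location-URI from the response
        cache_control: set = field(default_factory=set)

    class PostCache:
        def __init__(self):
            self.case1 = {}   # key: (Request-URI, Request-entity-body)
            self.case2 = {}   # key: Location-URI from the response

        def store_post_response(self, request_uri, request_body, resp):
            # A case 1 entry may be made only if the server declared the
            # POST free of side effects.
            if "no-side-effects" in resp.cache_control:
                self.case1[(request_uri, request_body)] = resp
            # A case 2 entry is keyed by the Location-URI, if one was
            # returned (modulo the spoofing issue).
            if resp.location is not None:
                self.case2[resp.location] = resp

        def _fresh(self, resp):
            return resp is not None and resp.fresh_until > time.time()

        def serve_post(self, request_uri, request_body):
            # A repeated POST may be answered only from a fresh case 1 entry.
            resp = self.case1.get((request_uri, request_body))
            return resp if self._fresh(resp) else None

        def serve_get(self, request_uri):
            # A GET may be answered only from a fresh case 2 entry (or from
            # an entry created by a prior GET, which is outside this sketch).
            resp = self.case2.get(request_uri)
            return resp if self._fresh(resp) else None

        def revalidate_post(self, request_uri, request_body, do_conditional_post):
            # Conditional POST: resend the full request plus the stored
            # validator; a 304 means the cached response may be reused.
            # do_conditional_post is a caller-supplied (hypothetical)
            # function returning (status, CachedResponse-or-None).
            stale = self.case1.get((request_uri, request_body))
            if stale is None:
                return None
            status, fresh = do_conditional_post(request_uri, request_body,
                                                stale.validator)
            if status == 304:
                return stale
            self.store_post_response(request_uri, request_body, fresh)
            return fresh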
]
] In any case, if *new* data is being POSTed, this data is always
] sent directly to the origin server (because the cache-lookup
] rules would not match in this case).
]
] In the case of a PUT, we can probably add this optimization:
] the cache may store the request's entity-body together with
] the cache-validator for the server's response, and use this
] to respond to subsequent GETs of the resource. This is because
] a PUT is supposed to replace the resource with the specified
] entity body. The server may override this behavior with an
] explicit Cache-control: no-cache.

I agree with this.

]
] I would recommend that the origin server should give a fresh-until value
] of zero in the PUT response, meaning that the cache will have to
] validate the entry each time before using it in a response. This
] is because a PUTable resource may be changed via several paths, and
] any blind caching could lead to update inconsistencies. However,
] this still avoids transmitting the actual entity-body all the time,
] until it changes.

I don't see why the fact that the entity got in the cache via PUT leads to any different freshness considerations than when it gets there via GET. Both ways, if the entity gets changed via some other path, then the cached copy will be stale. In other words, why don't you also recommend that the origin server set fresh-until to 0 in response to GETs?

Paul
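A minimal sketch of the PUT optimization discussed above, again in Python with hypothetical names (put_cache, conditional_get) and assuming nothing beyond what the quoted paragraphs describe: the cache keeps the PUT request's entity body paired with the validator from the response, honours an explicit Cache-control: no-cache, and a fresh-until of zero simply forces a validation round-trip before each reuse rather than re-transferring the body.

    # Illustrative sketch only; put_cache, conditional_get and the field
    # layout are hypothetical, and validator / fresh-until are treated as
    # opaque values taken from the PUT response.
    import time

    put_cache = {}   # Request-URI -> (entity_body, validator, fresh_until)

    def store_put(request_uri, request_body, validator, fresh_until,
                  cache_control=()):
        # The server may veto the optimization with an explicit
        # Cache-control: no-cache on the PUT response.
        if "no-cache" in cache_control:
            return
        # A PUT replaces the resource with exactly this entity body, so the
        # *request* body is cached, paired with the response's validator.
        put_cache[request_uri] = (request_body, validator, fresh_until)

    def serve_get(request_uri, conditional_get):
        entry = put_cache.get(request_uri)
        if entry is None:
            return None
        body, validator, fresh_until = entry
        if fresh_until > time.time():
            return body
        # Stale (e.g. a fresh-until of zero): revalidate with the stored
        # validator; a 304 means the cached body can be reused without
        # re-transferring it over the response chain.
        status, new_entry = conditional_get(request_uri, validator)
        if status == 304:
            return body
        put_cache[request_uri] = new_entry
        return new_entry[0]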
Received on Saturday, 6 January 1996 01:02:51 UTC