
Caching the results of POSTs and PUTs

From: Shel Kaphan <sjk@amazon.com>
Date: Thu, 4 Jan 1996 14:58:33 -0800
Message-Id: <199601042258.OAA04012@bert.amazon.com>
To: Jeffrey Mogul <mogul@pa.dec.com>
Cc: Paul Leach <paulle@microsoft.com>, http-caching@pa.dec.com
Jeffrey Mogul writes:
 > I suppose it doesn't really matter, from a protocol point of view,
 > which of these cache-lookup approaches the cache takes. 
 > In either case, caching follows the same rules as for GET responses:
 > the server provides a fresh-until time, and the cache must validate
 > non-fresh entries with the origin server.

But even if the cache contains a "fresh" result which could be doled
out on subsequent GET requests, *all* POST requests must be forwarded
to the origin server (absent a scheme such as I suggested earlier).
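In cache-lookup terms, that rule might be sketched like this (a minimal illustration, not anyone's proposed implementation; all names here are hypothetical): fresh entries may satisfy GETs, but every POST goes to the origin regardless of cache state.

```python
# Hypothetical sketch: fresh entries may answer GETs, but POSTs
# are always forwarded so their side effects happen at the origin.
import time

class Cache:
    def __init__(self):
        self.entries = {}  # uri -> (response, fresh_until timestamp)

    def handle(self, method, uri, forward):
        """forward(method, uri) contacts the origin server."""
        if method == "GET":
            entry = self.entries.get(uri)
            if entry is not None:
                response, fresh_until = entry
                if time.time() < fresh_until:
                    return response      # fresh entry: serve from cache
            # stale or absent: fall through to the origin
        # POST (and any other non-GET) always reaches the origin,
        # so its side effects occur on every request.
        return forward(method, uri)
```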

 > Validation could be done using a conditional POST.  A conditional POST
 > has the same form as a normal POST (including the entire entity body),
 > but includes the cache-validator returned by the server in its earlier
 > response.  The meaning of a conditional POST is "look at the URI,
 > entity body, and validator in this request: if you would give me
 > the exact same response as you gave before, including the same
 > validator, then just tell me '304 Not Modified'; otherwise, do a
 > normal POST for me."

But in either case, the server must perform the same side-effects.
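That point can be made concrete with a sketch of the origin's side of a conditional POST (function and parameter names are illustrative, not from any spec): the side effect runs unconditionally, and only the response-body transfer is saved when the cache's validator still matches.

```python
# Hypothetical sketch of a conditional POST at the origin server.
# The side effect happens either way; a matching validator only
# spares retransmitting the response entity body.

def handle_conditional_post(uri, body, client_validator, process):
    """process(uri, body) performs the side effect and returns
    (response_body, validator)."""
    response_body, validator = process(uri, body)  # side effect always runs
    if client_validator is not None and client_validator == validator:
        return (304, None)           # "Not Modified": cache reuses its copy
    return (200, response_body)      # full response with the new validator
```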

 > Does this make sense?  It seems like this is what Shel and Paul
 > are trying to tell me, anyway, and I think it would work.

Paul is telling you something different from what I'm telling you (so far).

 > Note that this still follows my proposed rule that write-through
 > is mandatory, in the following sense: if the server has granted
 > permission to cache a value (whether from a POST or a GET) for
 > some period, using the fresh-until header, then it's giving up
 > any hope of imposing cache consistency for that duration. 
 > If the server does not grant this permission, then every POST
 > request causes an interaction with the origin server (although
 > the response entity body may not have to be transmitted over the
 > entire response chain).

You can't use cache freshness to control whether a request is allowed
to have its side effects at the origin server.  From the user's point
of view, an action with side effects must have the same semantics no
matter what is going on with the caches.

 > In any case, if *new* data is being POSTed, this data is always
 > sent directly to the origin server (because the cache-lookup
 > rules would not match in this case).

It doesn't matter if it's new or old.  If I order the same bag of
hex bolts twice, I want two bags of hex bolts, not one.

 > In the case of a PUT, we can probably add this optimization:
 > the cache may store the request's entity-body together with
 > the cache-validator for the server's response, and use this
 > to respond to subsequent GETs of the resource.  This is because
 > a PUT is supposed to replace the resource with the specified
 > entity body.  The server may override this behavior with an
 > explicit Cache-control: no-cache.
 > I would recommend that the origin server should give a fresh-until value
 > of zero in the PUT response, meaning that the cache will have to
 > validate the entry each time before using it in a response.  This
 > is because a PUTable resource may be changed via several paths, and
 > any blind caching could lead to update inconsistencies.  However,
 > this still avoids transmitting the actual entity-body all the time,
 > until it changes.
 > -Jeff

That sounds reasonable.
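A rough sketch of that PUT optimization, under the assumptions above (names are hypothetical): the cache keeps the PUT's entity body together with the server's validator, and with a fresh-until of zero it revalidates before each use, re-fetching the body only when the validator has changed.

```python
# Hypothetical sketch: the cache stores a PUT's entity body with the
# server's validator, then validates on every GET (fresh-until zero),
# transferring the body again only when the validator changes.

class PutCache:
    def __init__(self):
        self.entries = {}  # uri -> (entity_body, validator)

    def store_put(self, uri, entity_body, validator):
        self.entries[uri] = (entity_body, validator)

    def handle_get(self, uri, conditional_get):
        """conditional_get(uri, validator) -> (status, body, validator)."""
        entry = self.entries.get(uri)
        if entry is None:
            status, body, validator = conditional_get(uri, None)
            self.entries[uri] = (body, validator)
            return body
        body, validator = entry
        status, new_body, new_validator = conditional_get(uri, validator)
        if status == 304:
            return body                          # unchanged: body not re-sent
        self.entries[uri] = (new_body, new_validator)
        return new_body
```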

Received on Thursday, 4 January 1996 23:17:53 UTC
