Caching dynamically generated documents

Luigi Rizzo writes:
	...
 > 
 > Obviously the problem is that the code is run on the server, instead
 > of as close as possible to the client. There are several drawbacks
 > in this approach:
 > 
 > * many more bytes than necessary are transferred;
 > * the server is unnecessarily kept busy generating and transferring
 >   all the above traffic;
 > * the data is essentially uncacheable because of the variety of
 >   possible requests.
 > 

Doesn't the utility of caching depend on the frequency and
distribution of the requests?  If the database is small but heavily
used, caching on an intermediate server could win even if it meant
effectively caching the whole database.  Seems like the answer to
whether caching is useful here is "it depends".
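
To make "it depends" concrete, here's a back-of-the-envelope
simulation (the key count, cache size, and Zipf-ish skew are my own
illustrative assumptions, nothing measured):

    # Hit rate of a fixed-size LRU cache under a skewed request
    # distribution over a small set of distinct query responses.
    import random
    from collections import OrderedDict

    def simulate(num_keys=500, cache_slots=100, requests=20000, skew=1.1):
        # Zipf-like weights: a few queries dominate the traffic.
        weights = [1.0 / (rank ** skew) for rank in range(1, num_keys + 1)]
        cache, hits = OrderedDict(), 0
        for key in random.choices(range(num_keys), weights=weights, k=requests):
            if key in cache:
                hits += 1
                cache.move_to_end(key)          # refresh LRU position
            else:
                cache[key] = True
                if len(cache) > cache_slots:
                    cache.popitem(last=False)   # evict least recently used
        return hits / requests

    print(simulate())  # well above the 0.2 that uniform traffic would give

With skewed traffic, holding a fifth of the "database" already
captures most of the requests; with uniform traffic it wouldn't.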

 > It is also a problem for caches, which must either give up or
 > develop complex and memory consuming techniques essentially to
 > try to reconstruct the behaviour of the server from its responses.

I don't think caches should have to be very smart, especially not in
terms of simulating what servers will do.  Plain LRU is a pretty good
technique for managing cache memory, too.
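
For what it's worth, here is a minimal sketch of the kind of dumb,
LRU-managed cache I have in mind (the class, names, and byte bound
are my own illustration; a real proxy would also track headers and
expiry):

    from collections import OrderedDict

    class LRUByteCache:
        # Object cache bounded by total bytes, evicting least
        # recently used entries first -- no modelling of the server.
        def __init__(self, max_bytes):
            self.max_bytes, self.used = max_bytes, 0
            self.entries = OrderedDict()        # url -> response body

        def get(self, url):
            body = self.entries.get(url)
            if body is not None:
                self.entries.move_to_end(url)   # mark as recently used
            return body

        def put(self, url, body):
            if url in self.entries:
                self.used -= len(self.entries.pop(url))
            self.entries[url] = body
            self.used += len(body)
            while self.used > self.max_bytes:   # shed oldest entries
                _, old = self.entries.popitem(last=False)
                self.used -= len(old)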

 > And there is also a terrible [:)] thing: your cache statistics are
 > negatively affected by these large, uncacheable items.  More
 > seriously, these "uncacheable" items might, in many cases, become
 > easily cacheable.
 > 

Caches could limit the number of slots held for a particular URL,
keying on the portion up to the '?' (for GET) and ignoring the
request body (for POST).  That would limit the cache-busting
potential of a single heavily used form, for instance.
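
A minimal sketch of that idea (the cap, names, and in-memory
structures are my own illustration, not a spec):

    from collections import defaultdict

    MAX_SLOTS_PER_BASE = 8                        # illustrative cap

    def base_key(url):
        return url.split('?', 1)[0]               # strip the query string

    class SlotLimitedCache:
        def __init__(self):
            self.store = {}                       # full URL -> cached body
            self.groups = defaultdict(list)       # base key -> URLs, oldest first

        def put(self, url, body):
            group = self.groups[base_key(url)]
            if url not in self.store and len(group) >= MAX_SLOTS_PER_BASE:
                del self.store[group.pop(0)]      # evict oldest variant of this form
            if url not in group:
                group.append(url)
            self.store[url] = body

        def get(self, url):
            return self.store.get(url)

However heavily one form is hit, it can never occupy more than
MAX_SLOTS_PER_BASE slots, so it can't flush the rest of the cache.
(A POST would group under the URL alone, since the request body
never enters the key.)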

--Shel

Received on Friday, 5 January 1996 21:14:58 UTC