- From: Andrew Daviel <andrew@andrew.triumf.ca>
- Date: Wed, 4 Dec 1996 16:26:34 -0800 (PST)
- To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Why can't (shouldn't) one cache a CGI response? It seems to me more rational to flush the cache based on the frequency of hits than to refuse outright to store CGI output. For example, the "help" page at AltaVista is CGI-generated from a query, but as far as I know it's static. It's perfectly reasonable to generate static pages from a database using CGI or otherwise, and it's quite possible to set all the headers (Last-Modified, Expires, Content-Length, etc.) in an appropriate manner; a sketch is appended below.

I use a Squid cache set to reject "/imagemap" and not much else, though it is also configured not to pass cgi-bin or "?" requests to the parent. Perhaps 5% of query URLs are cache hits, compared with around 16% of images. If someone looks up "Soccer in Latvia" in a search engine, is it really going to change in ten minutes? A day? Any more than http://www.obscure.org/some/really/obscure/page.html ?

Re charsets, content negotiation, etc. in HTTP 1.0: I decided as a compromise that using a redirect CGI is "mostly harmless". True, if the origin server can't be reached you're stuck, but the big text files can be cached. I think Microsoft is doing something like this, but as someone pointed out to me, they use Set-Cookie with a path of /, which strictly speaking makes the whole site uncacheable.

Andrew Daviel
mailto:advax@triumf.ca
http://vancouver-webpages.com/CacheNow/ - campaign for Proxy Cache
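To make the header-setting point concrete, here is a minimal sketch of a cache-friendly CGI script. It assumes a Python CGI environment (not anything described in the message itself); the page content, data modification time, and one-day expiry are illustrative values, not figures from the original post.

    #!/usr/bin/env python3
    # Minimal sketch (hypothetical script; content, mtime and one-day expiry
    # are assumed values): a CGI program whose output is really static, so it
    # emits the headers a proxy cache needs in order to store and expire it.
    import sys, time
    from email.utils import formatdate

    body = b"<html><body><h1>Help</h1><p>Static, database-derived text.</p></body></html>"
    data_mtime = time.time() - 7 * 86400   # when the underlying data last changed (assumed)

    sys.stdout.write("Content-Type: text/html\r\n")
    sys.stdout.write("Content-Length: %d\r\n" % len(body))
    sys.stdout.write("Last-Modified: %s\r\n" % formatdate(data_mtime, usegmt=True))
    sys.stdout.write("Expires: %s\r\n" % formatdate(time.time() + 86400, usegmt=True))  # cacheable for a day
    sys.stdout.write("\r\n")                 # blank line ends the CGI header block
    sys.stdout.flush()
    sys.stdout.buffer.write(body)            # body length matches Content-Length above

With headers like these the response is as cacheable as any static file; only the common heuristic of refusing anything whose URL contains cgi-bin or "?" keeps it out of caches.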
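The Squid setup described above might look roughly like the fragment below. The directive names are taken from later Squid releases and are an assumption on my part; the 1996 configuration syntax differed.

    # Assumed squid.conf fragment: refuse to cache imagemap responses, and
    # fetch CGI/query URLs directly rather than through the parent cache.
    acl imagemap urlpath_regex /imagemap
    no_cache deny imagemap
    hierarchy_stoplist cgi-bin ?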
Received on Wednesday, 4 December 1996 17:57:22 UTC