Re: caching HTTP 303 responses

Hi Giovanni,

Barring a change away from 303 for non-information resources, or a
change to the cacheability of 303, one could indeed write a patch for
squid.

The way I'd go about it, so as not to break too much, would be to add
a request ID header that differs between user requests: squid would
cache everything carrying the same request ID, and follow the specs
across different request IDs.
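
To make that concrete, here is a toy sketch of the node side (the
header name X-Request-ID is made up here, and squid would need a
matching patch to honour it; plain Python, just to illustrate):

    # Every fetch belonging to one user request carries the same
    # (hypothetical) X-Request-ID, so a patched proxy could key on it.
    import uuid
    import urllib.request

    def fetch_within_job(url, job_id):
        req = urllib.request.Request(url)
        req.add_header("X-Request-ID", job_id)  # same ID for the whole job
        return urllib.request.urlopen(req)

    job_id = str(uuid.uuid4())   # one fresh ID per user request
    resp = fetch_within_job("http://example.org/resource", job_id)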

The request ID would act as the enabler for this "atomic cacheability"
of everything, atomic as in "within the processing of a single user
request". And this could mean statefulness in squid (prolly a very bad
thing) if there were a requirement to interleave the processing of
multiple user requests.
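
To show where the state creeps in, a toy sketch of the cache rule on
the proxy side (plain Python, not an actual squid patch; all names
invented):

    # Redirect cache scoped by the hypothetical X-Request-ID header:
    # a cached 303 is served only to requests carrying the same ID;
    # everything else falls through to the normal spec-compliant path.
    class ScopedRedirectCache:
        def __init__(self):
            # request ID -> {url: cached 303 response}.  This per-ID
            # state is exactly the worry: it has to live for the whole
            # duration of a user request, across interleaved traffic.
            self._per_request = {}

        def lookup(self, request_id, url):
            return self._per_request.get(request_id, {}).get(url)

        def store(self, request_id, url, response):
            self._per_request.setdefault(request_id, {})[url] = response

        def finish_request(self, request_id):
            # Without an explicit end-of-request signal, entries could
            # only be expired by guessing at a timeout.
            self._per_request.pop(request_id, None)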

But thinking about this, fixing 303 cacheability or maybe adding a
cacheable 308 Description Elsewhere sounds easier now. 8-)
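
(For what that could look like: under RFC 2616 a response with explicit
freshness information is normally cacheable, so the hypothetical 308
would mostly just need to carry Cache-Control. A toy origin server,
where the status code and reason phrase are only the suggestion above,
nothing registered:)

    # Emits the suggested "308 Description Elsewhere" with explicit
    # Cache-Control, which is what would let a proxy cache it,
    # unlike 303 under RFC 2616.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DescribedElsewhere(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(308, "Description Elsewhere")
            self.send_header("Location", "http://example.org/doc/about")
            self.send_header("Cache-Control", "max-age=86400")  # one day
            self.send_header("Content-Length", "0")
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), DescribedElsewhere).serve_forever()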

Jacek

On Tue, 2007-07-10 at 01:20 +0100, Giovanni Tummarello wrote:
> Hi Jacek,
> 
> unfortunately the "application cache" is not always possible.
> The key to cluster scalability is splitting jobs across the cluster
> nodes, so each file is more or less processed on its own.
> Web architecture then says that if you want to go fast, you can cache,
> so one puts up a large proxy that all the nodes can, in theory, feed
> from. This is what we thought we'd do, only to find out that each
> process was running a few dozen times slower than it could (to say
> nothing of the remote hits, which are the real problem), due to squid
> rightfully refusing to cache 303s.
> We could write a "semantic web patch" for squid that explicitly
> violates a MUST NOT.. but.. :-)
> Giovanni
> 

Received on Tuesday, 10 July 2007 08:16:38 UTC