Re: rethinking caching

From: Benjamin Franz <snowhare@netimages.com>
Date: Sun, 17 Dec 1995 09:55:41 -0800 (PST)
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Message-Id: <Pine.LNX.3.91.951217093530.4622A-100000@ns.viet.net>
On Sun, 17 Dec 1995, Koen Holtman wrote:

> Benjamin Franz:

> >This is not a safe assumption. Numerous providers sell space to many 
> >independent people on single servers. For example: www.xmission.com 
> >serves on the order of 1000 independent entities, including many 
> >businesses and people, and allows CGI to be owned by the individuals. 
> 
> The part `or that the server has some Location response header
> filtering mechanism that excludes spoofing' above is supposed to cover
> this situation.
> 
> Not that I expect many providers to implement such a filtering
> mechanism, most would treat web spoofing like they treat news spamming
> and mail forging now: forbid it in the terms of service agreement and
> deal appropriately with any found violations.

Ummmm...Considering the immense magnitude of both spamming and forging 
today, this is not a convincing argument for leaving it to local option.

[...]

> Of course, Shel's idea of making the cache key of a negotiated variant
> be the pair (request-URI, location-URI) eliminates all spoofing risks,
> we could switch to such a scheme if the consensus is that Location
> header filtering is unfeasible.  Shel's scheme is safe no matter how
> much the server administrator does about security, but has the
> disadvantage of allowing less cache hits: it would be much more
> difficult to let preemptive and reactive content negotiation share
> cache slots for the variants.  [Note: an explanation of this last
> statement would require a level of detail only appropriate in the
> content negotiation or caching subgroups.]

Nevertheless, I believe this is the route that will have to be taken.
The other route (local filtering) places too much reliance on good
security management at the local level. It amounts to trusting all
system admins to 'play nice and know what they are doing' - something the
ever growing spam/forgery problems on the Usenet and in
E-mail have shown is just not a good assumption in general.

Just as the default reporting of people's email addresses with the
admonishment not to abuse it proved futile (I routinely get requests from
my customers to 'give them the email addresses of everyone who visits
their web site so they can email them' - I fielded exactly that request
not two days ago from one customer), it will prove impossible in practice
to make local filtering work. Too many local system demands, combined with
insufficient knowledge on the part of admins, will make it nearly
impossible for many sites to maintain a secure setup.

On large systems with thousands of customers with many special cases, it 
would be a logistical nightmare even for experienced admins.

> >Clearly there is the opportunity for someone to spoof there under the 
> >rule. It is not significantly safer than unrestricted redirections when 
> >many (most?) people share common servers.
> 
> Unrestricted 3xx redirections are another issue entirely: unrestricted
> 3xx redirection will not allow Joe to fool a proxy cache into storing
> a response from his script under John's URI.

I did not phrase what I meant well. I meant 2xx redirections without the 
proposed rule.
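To spell that clarification out: a minimal sketch (Python; the rule and
all names are my paraphrase of the thread, not actual proxy code) of why
a same-server restriction on caching under a 2xx Location header does
not help on shared hosts:

```python
from urllib.parse import urlsplit

def may_cache_under_location(request_uri, location_uri):
    # Paraphrase of the rule under discussion: only honor the Location
    # header for cache keying when it points back at the same server
    # that produced the response.
    return urlsplit(request_uri).netloc == urlsplit(location_uri).netloc

# On a shared server, Joe and John have the same host, so Joe's CGI
# can still get its output cached under John's URI:
same_host_spoof = may_cache_under_location(
    "http://www.xmission.com/~joe/evil.cgi",
    "http://www.xmission.com/~john/index.html")

# Only cross-server spoofing is actually blocked:
cross_host_spoof = may_cache_under_location(
    "http://evil.example.com/script.cgi",
    "http://www.xmission.com/~john/index.html")
```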

-- 
Benjamin Franz
Received on Sunday, 17 December 1995 09:51:52 EST