W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > May to August 1997

cache-busting and charsets again

From: <W.Sylwestrzak@icm.edu.pl>
Date: Wed, 11 Jun 1997 04:42:27 +0200 (MET DST)
Message-Id: <199706110242.EAA17788@galera.icm.edu.pl>
To: advax@triumf.ca
Cc: http-wg@cuckoo.hpl.hp.com, ircache@nlanr.net
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/3482
Andrew Daviel wrote:

> > Unfortunately most of the servers practicing this today
> > try to perform a 'naive' content negotiation, which effectively
> > uses redirects to other URLs. This is of course wrong,
> > because it unnecessarily expands the URL addressing space,
> > thus making caching less effective.
> I don't think so ... If I have A.var, which redirects to 
> A.en.html, A.jp-jis.html, A.jp-eu.html, A.fr.html I have one
> small uncacheable redirect, and 4 cacheable documents. The 4 documents
> are all different, and have distinct URLs, so are cached independently.

I totally agree with your example.
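The pattern Andrew describes could be sketched roughly as follows (my own
minimal illustration, not any particular server's code; the variant map and
the negotiate() helper are hypothetical, with filenames taken from the
example above): one small, uncacheable negotiation step issues a redirect
to a language-specific URL, and each target document is then cacheable on
its own.

```python
# Hypothetical variant map for A.var; filenames follow the example above.
VARIANTS = {
    "en": "A.en.html",
    "ja": "A.jp-jis.html",
    "fr": "A.fr.html",
}

def negotiate(accept_language):
    """Pick the first acceptable variant and build a 302 redirect.

    The redirect itself is the one uncacheable response; the documents
    it points at have distinct URLs and are cached independently.
    """
    for tag in [t.split(";")[0].strip() for t in accept_language.split(",")]:
        lang = tag.split("-")[0].lower()
        if lang in VARIANTS:
            return ("302 Moved Temporarily", VARIANTS[lang])
    # Fall back to English when nothing in the header matches.
    return ("302 Moved Temporarily", VARIANTS["en"])

print(negotiate("fr, en;q=0.5"))  # -> ('302 Moved Temporarily', 'A.fr.html')
```

This ignores q-values and full language-range matching, but it shows why
the scheme is cache-friendly for languages: the expensive negotiation is
confined to one tiny response.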

However I strongly feel that 'charset negotiation' should be approached
differently from language and other negotiation, because the various
versions of the same document that differ only in character encoding are
effectively the same object and should not be cached, indexed, etc. as
separate objects.
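A small illustration of this point (my sketch, not anything from the
thread): two charset variants of a document are distinct byte streams on
the wire, yet they decode to identical text, so in principle a cache that
could transcode would only need to store one copy.

```python
# Sample Polish word containing accented characters that ISO-8859-2 covers.
text = "za\u017c\u00f3\u0142\u0107"  # "zażółć"

latin2_bytes = text.encode("iso-8859-2")  # one charset variant
utf8_bytes = text.encode("utf-8")         # another charset variant

# Different bytes on the wire...
assert latin2_bytes != utf8_bytes
# ...but exactly the same characters, i.e. the same object.
assert latin2_bytes.decode("iso-8859-2") == utf8_bytes.decode("utf-8")
```

Language variants have no such equivalence, which is why treating charset
variants as separate cacheable URLs wastes cache space in a way language
variants do not.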

> > From the caching point of view it would be a very good practice
> > for the clients to request/expect a single, standard charset
> > for a given language (considered being a 'transport' charset). 
> Nice idea; pity everyone's platform uses different coding :-(
> (shift-jis, jis, euc-jp; koi-8, 8859-5, Windows-xxx etc etc.)
> I think in some cases DOS, Windows, X11 and Mac are all different.

I'm not knowledgeable about non-European sets, but for most
central-eastern European languages ISO-8859-2 would be sufficient
(and browsers for most platforms accept it) - so why complicate
this? But perhaps this is a wrong example.

Received on Tuesday, 10 June 1997 19:44:34 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 14:40:20 UTC