
Re: proposed HTTP changes for charset

From: Gavin Nicol <gtn@ebt.com>
Date: Sun, 7 Jul 1996 00:22:28 GMT
Message-Id: <199607070022.AAA21941@wiley.EBT.COM>
To: dave@fly.cc.fer.hr
Cc: fielding@liege.ICS.UCI.EDU, yergeau@alis.ca, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/1044
>I hacked my server a bit, wrote several CGI programs and it's a little
>smarter than others. It can convert HTML pages to 5 different code pages or
>3 different ASCII approximations on the fly. I'll probably add some more
>output representations. I think Macs use the 6th code page for Latin 2
>and two more approximations would be handy.
>The conversion is automatic if browser sends Accept-charset header.
>Lynx 2.5 is the only one at the moment. Other browsers will receive
>some kind of menu.

Ditto with DynaWeb, except that DynaWeb supports far more
encodings. As it is, with Japanese you have to make some
arbitrary decision, based solely on things like User-Agent, when
deciding what to send to the client and how to interpret what the
client is sending to you.
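The negotiation both servers perform can be sketched roughly as follows. This is a hypothetical illustration, not DynaWeb's or the quoted server's actual code: the supported charset list, the default, and the CGI environment-variable lookup are all assumptions.

```python
import os

# Charsets this hypothetical server can convert pages to (illustrative).
SUPPORTED = ["iso-8859-2", "cp1250", "us-ascii"]
DEFAULT = "iso-8859-2"  # assumed fallback when no header is sent

def pick_charset(accept_charset):
    """Return the first charset from the client's Accept-charset
    header that the server supports, else the default."""
    if not accept_charset:
        # No header: fall back to a guess (or, as above, show a menu).
        return DEFAULT
    for token in accept_charset.split(","):
        # Strip whitespace and any ";q=..." quality parameter.
        name = token.split(";")[0].strip().lower()
        if name in SUPPORTED:
            return name
    return DEFAULT

# In a CGI program the header arrives as an environment variable.
charset = pick_charset(os.environ.get("HTTP_ACCEPT_CHARSET"))
```

When the header is absent, the code can only guess, which is exactly the User-Agent-sniffing problem described above.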

>If the agent can use charsets other than ISO 8859-1, then it MUST, MUST
>and MUST send Accept-charset header with those charsets listed.

I agree with this sentiment 100%. Unless browsers start sending
this information to servers, it is impossible to add multilingual
intelligence to servers and have it work reliably.
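For concreteness, here is a sketch of the request a charset-aware client such as Lynx 2.5 might send; the exact header values and q-weights are illustrative assumptions, not captured traffic.

```python
# A minimal HTTP/1.0-style request with an Accept-Charset header
# (values are hypothetical).
request = (
    "GET /index.html HTTP/1.0\r\n"
    "User-Agent: Lynx/2.5\r\n"
    "Accept-Charset: iso-8859-2, us-ascii;q=0.5\r\n"
    "\r\n"
)

# How a server might split the headers out of the raw request text.
headers = dict(
    line.split(": ", 1)
    for line in request.split("\r\n")[1:]
    if ": " in line
)
```

With the header present, the server can negotiate deterministically instead of guessing from User-Agent alone.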
Received on Saturday, 6 July 1996 17:27:24 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 14:40:17 UTC