W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > January to April 1996

Re: Charsets revisited

From: Gavin Nicol <gtn@ebt.com>
Date: Thu, 25 Jan 1996 19:51:41 -0500
Message-Id: <199601260051.TAA00480@ebt-inc.ebt.com>
To: masinter@parc.xerox.com
Cc: glenn@stonehand.com, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
>Practically speaking, I think we have to solve different parts of the
>problem in different places. We can solve the problem of 'what is the
>character encoding used in data sent from a server to a client' by
>charset tagging and negotiation in HTTP GET; we can solve the problem
>of 'what is the character encoding used to encode what a client typed
>into a form when sent from client to server' by using
>multipart/form-data as the wrapper for the response and using charset
>tags within the parts that need them.
Sure, so long as this doesn't overly complicate the protocol, or
implementations thereof. In many cases it is better to look for
simplifications, which is one reason I feel the special treatment
of GET is illogical.
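To make the quoted division of labour concrete, here is a rough sketch of
what the two mechanisms look like on the wire (a hypothetical exchange: the
boundary, field name, and Japanese text are all invented for illustration):

```python
# 1. Server -> client: the charset rides as a Content-Type parameter
#    on the GET response.
response_header = b"Content-Type: text/html; charset=iso-2022-jp\r\n"

# 2. Client -> server: a multipart/form-data submission whose parts
#    carry their own charset tags. Boundary and field name are made up.
boundary = b"AaB03x"
form_body = (
    b"--" + boundary + b"\r\n"
    b'Content-Disposition: form-data; name="comment"\r\n'
    b"Content-Type: text/plain; charset=shift_jis\r\n"
    b"\r\n"
    + "テスト".encode("shift_jis") + b"\r\n"
    b"--" + boundary + b"--\r\n"
)
```

The point of the second mechanism is that each part labels its own encoding,
so the server never has to guess what the client's form submission contains.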

>you cannot possibly mean that the *same* HTTP server will employ
>2022-jp, shift jis, euc-j, unicode-1-1 and unicode-1-1-utf7.
There are, and will be, servers that deal with all of these. I know
for a fact that there will be more than one server that converts on
the fly.
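A minimal sketch of what "convert on the fly" could mean (my assumption about
how such a server might work, not a description of any actual one): the server
keeps a single canonical Unicode copy of the document and transcodes it per
request into whichever charset was negotiated. The codec names are Python's;
the labels are the charset tags from this discussion.

```python
# Document body, held internally in one canonical (Unicode) form.
CANONICAL = "日本語のテキスト"

# Map negotiated HTTP charset labels to codec names.
CODECS_BY_LABEL = {
    "iso-2022-jp": "iso2022_jp",
    "shift_jis": "shift_jis",
    "euc-jp": "euc_jp",
    "unicode-1-1-utf7": "utf-7",
}

def serve(charset: str) -> bytes:
    """Return the document body transcoded into the negotiated charset."""
    return CANONICAL.encode(CODECS_BY_LABEL[charset])
```

One server, one document, several wire encodings; the Unicode pivot is what
makes the fan-out cheap.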

>My recommendation for the solution to this problem is that we
>establish an application profile 'HTTP servers for Japanese' that
>recommends that filenames in URLs be encoded as unicode-1-1-utf7 no

This is just avoiding the problem. By having a way of indicating the
coded character set and encoding in use in a URI, you get flexibility
*and* you can still define application profiles if you want.

I proposed, some time ago, a way of doing this, and I believe the
method I proposed would require few changes to browsers and servers.
Received on Thursday, 25 January 1996 16:55:10 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 14:40:16 UTC