
Re: allow UTF-16 not just UTF-8 (PR#6774)

From: Ernest Cline <ernestcline@mindspring.com>
Date: Wed, 8 Oct 2003 14:28:58 -0400
Message-ID: <410-220031038182858843@mindspring.com>
To: "W3C HTML List" <www-html@w3.org>

> [Original Message]
> From: <don@lexmark.com>
> Date: 10/8/2003 1:01:48 PM
> Subject: Re: allow UTF-16 not just UTF-8 (PR#6774)
> So let me understand this....
> Because people have poorly designed and written XML applications running
> 3 GHz Pentium 4s with 512 megabytes of real memory that do not allow the
> control over whether UTF-8 or UTF-16 are emitted, we are expecting to
> burden $49 printers with code to be able to detect and interpret both.
> I maintain my objection and my no vote.

I don't have any direct interest in this, but the costs of detection and
interpretation seem minimal, with much less than 1 KB of added
code required to handle both. Depending upon how the
code is organized, I can see how it might incur a slight
performance penalty for one of the encodings, but that would
depend upon how the implementation handles character representation
internally, and it would not impact the performance of the
preferred charset at all (presumably UTF-8 or UCS-4).

So in short, I fail to see how UTF-16 support would affect the
economic cost of a $49 printer; it would allow the printer to handle
UTF-16 natively, although perhaps with a performance
penalty relative to UTF-8.

(The UTF-16 to UTF-8 translator that Jim referenced earlier
used a table to increase processing speed, at the cost of the
memory for the table. Given the memory constraints of a
printer, accepting the performance hit of computing the
conversion each time instead might make more sense.)
Received on Wednesday, 8 October 2003 14:28:58 UTC