[Bug 10890] i18n comment : Allow utf-16 meta encoding declarations

From: <bugzilla@jessica.w3.org>
Date: Fri, 01 Oct 2010 10:07:03 +0000
To: public-i18n-core@w3.org
Message-Id: <E1P1cW7-00054D-Qn@jessica.w3.org>

--- Comment #2 from I18n Core WG <public-i18n-core@w3.org> 2010-10-01 10:07:02 UTC ---
(In reply to comment #1)

> When the file is not actually encoded in UTF-16, <meta charset="utf-16"> means
> the same as <meta charset="utf-8"> and is a clear authoring error. It would be
> completely illogical not to flag it as an error. That Web compat requires this
> UA behavior is evidence of authors getting things wrong when they try to use
> <meta charset="utf-16">.

I am not proposing any change to the spec where the file is not actually
encoded in UTF-16. I was careful to say 'in UTF-16 encoded documents'.

> When the file is actually encoded in UTF-16, <meta charset="utf-16"> has no
> effect. 

Exactly. So it's not an issue for character detection. However, my point is
that there are usability issues. (a) Without a meta element you can't tell the
encoding by visual inspection. (b) People will continue to use these meta
elements for UTF-16. If there is no harm in it, why force them to change their
code? (c) Because the UTF-16 rules for meta are different from other encodings,
the author has to always remember to handle UTF-16 in a special way. Why force
validators to always check and educational materials to always explain a
special exception for meta elements in UTF-16 encoded documents when a <meta
charset=utf-16> in such a document does no harm?
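The "does no harm" claim above rests on precedence: a browser checks for a byte order mark (BOM) before running any <meta> prescan, so in a genuinely UTF-16-encoded document the meta declaration is never consulted for decoding. A minimal Python sketch of that precedence check (my own illustration, not the spec's full sniffing algorithm):

```python
def sniff(data: bytes) -> str:
    """Return the encoding implied by a leading BOM, if any."""
    if data.startswith(b'\xff\xfe'):
        return 'utf-16-le'   # BOM wins; any <meta charset> is ignored
    if data.startswith(b'\xfe\xff'):
        return 'utf-16-be'
    return 'scan-meta'       # only without a BOM would a <meta> prescan run

html = '<!DOCTYPE html><meta charset="utf-16"><title>x</title>'
# Python's 'utf-16' codec prepends a BOM in the platform's byte order,
# so the meta element below is decorative as far as decoding goes.
print(sniff(html.encode('utf-16')))
```

Because the BOM settles the encoding first, the `<meta charset="utf-16">` in such a file only serves the human reader inspecting the source, which is exactly point (a).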

Received on Friday, 1 October 2010 10:07:06 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:23:06 UTC