W3C home > Mailing lists > Public > whatwg@whatwg.org > January 2012

[whatwg] Default encoding to UTF-8?

From: Leif Halvard Silli <xn--mlform-iua@xn--mlform-iua.no>
Date: Tue, 3 Jan 2012 23:34:31 +0100
Message-ID: <20120103233431654515.921c1a3c@xn--mlform-iua.no>
Henri Sivonen, Tue Jan 3 00:33:02 PST 2012:
> On Thu, Dec 22, 2011 at 12:36 PM, Leif Halvard Silli wrote:

> Making 'unicode' an alias of UTF-16 or UTF-16LE would be useful for
> UTF-8-encoded pages that say charset=unicode in <meta> if alias
> resolution happens before UTF-16 labels are mapped to UTF-8.

> Making 'unicode' an alias for UTF-16 or UTF-16LE would be useless for
> pages that are (BOMless) UTF-16LE and that have charset=unicode in
> <meta>, because the <meta> prescan doesn't see UTF-16-encoded metas.

Hm. Yes. I see that I misread something and ended up believing that 
the <meta> would *still* be used if the mapping from 'UTF-16' to 
'UTF-8' turned out to be incorrect. I guess I had not understood, well 
enough, that the meta prescan *really* doesn't see UTF-16-encoded 
metas. Also contributing was the fact that I did not realize that IE 
doesn't actually read the page as UTF-16 but as Windows-1252: 
<http://www.hughesrenier.be/actualites.html>. (Actually, browsers do 
see the UTF-16 <meta>, but only if the default encoding is set to be 
UTF-16 - see step 1 of 'Changing the encoding while parsing'.)
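
(A quick illustration of my own, not browser code, of why the ASCII-based 
prescan cannot see a UTF-16-encoded <meta>: in UTF-16LE every ASCII 
character is followed by a NUL byte, so the byte pattern the prescan 
searches for never occurs.)

```python
# Sketch: why an ASCII-based <meta> prescan misses UTF-16-encoded metas.
ascii_bytes = '<meta charset=unicode>'.encode('ascii')
utf16_bytes = '<meta charset=unicode>'.encode('utf-16-le')

print(ascii_bytes[:5])   # b'<meta'
print(utf16_bytes[:10])  # b'<\x00m\x00e\x00t\x00a\x00'

# The prescan looks for the contiguous ASCII pattern b'<meta'; the
# interleaved NUL bytes in UTF-16LE mean that pattern never appears.
print(b'<meta' in ascii_bytes)  # True
print(b'<meta' in utf16_bytes)  # False
```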

> Furthermore, it doesn't make sense to make the <meta> prescan look for
> UTF-16-encoded metas, because it would make sense to honor the value
> only if it matched a flavor of UTF-16 appropriate for the pattern of
> zero bytes in the file, so it would be more reliable and straight
> forward to just analyze the pattern of zero bytes without bothering to
> look for UTF-16-encoded <meta>s.

Makes sense.
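
(As a rough sketch of my own of what such zero-byte analysis could look 
like - real detectors are surely more careful: for mostly-ASCII text, 
BOMless UTF-16LE puts NULs at odd byte offsets and UTF-16BE at even 
ones.)

```python
def guess_utf16_flavor(data):
    """Crude zero-byte-pattern check for BOMless UTF-16 (sketch only).

    For mostly-ASCII text, UTF-16LE places 0x00 at odd offsets and
    UTF-16BE places 0x00 at even offsets.
    """
    if len(data) < 2:
        return None
    even_zeros = sum(1 for i in range(0, len(data), 2) if data[i] == 0)
    odd_zeros = sum(1 for i in range(1, len(data), 2) if data[i] == 0)
    half = len(data) // 2
    if odd_zeros > 0.4 * half and even_zeros == 0:
        return 'utf-16-le'
    if even_zeros > 0.4 * half and odd_zeros == 0:
        return 'utf-16-be'
    return None

print(guess_utf16_flavor('<title>abc</title>'.encode('utf-16-le')))  # utf-16-le
print(guess_utf16_flavor('<title>abc</title>'.encode('utf-16-be')))  # utf-16-be
print(guess_utf16_flavor(b'plain ascii'))                            # None
```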

   [ snip ]
>> What we will instead see is that those using legacy encodings must be
>> more clever in labelling their pages, or else they won't be detected.
> Many pages that use legacy encodings are legacy pages that aren't
> actively maintained. Unmaintained pages aren't going to become more
> clever about labeling.

But their non-UTF-8-ness should be picked up in the first 1024 bytes?

  [... sniff - sorry, meant snip ;-) ...]

> I mean the performance impact of reloading the page or, 
> alternatively, the loss of incremental rendering.)
> A solution that would border on reasonable would be decoding as
> US-ASCII up to the first non-ASCII byte

Thus, possibly a prescan of more than 1024 bytes? Is it faster to scan 
ASCII? (In Chrome, there does not seem to be an end to the prescan as 
long as the source code is ASCII-only.)

> and then deciding between
> UTF-8 and the locale-specific legacy encoding by examining the first
> non-ASCII byte and up to 3 bytes after it to see if they form a valid
> UTF-8 byte sequence.

Except for the specifics, that sounds like more or less the idea I 
tried to state. Maybe it could be filed as a bug against Mozilla? (I 
could do it, but ...)
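
(A rough sketch of my own of that heuristic, as I understand it - not 
Henri's code: decode as US-ASCII until the first non-ASCII byte, then 
check whether that byte plus up to three continuation bytes form a 
valid UTF-8 sequence; the 'windows-1252' fallback stands in for the 
locale-specific legacy encoding.)

```python
def sniff_utf8(data, fallback='windows-1252'):
    """Sketch of the 'border on reasonable' heuristic (illustration only).

    Scan until the first non-ASCII byte; if it starts a valid UTF-8
    sequence (up to 3 continuation bytes), guess UTF-8; otherwise
    guess the locale-specific legacy encoding.
    """
    i = 0
    while i < len(data) and data[i] < 0x80:
        i += 1
    if i == len(data):           # pure ASCII: no evidence either way
        return fallback
    lead = data[i]
    if 0xC2 <= lead <= 0xDF:
        need = 1                 # 2-byte sequence
    elif 0xE0 <= lead <= 0xEF:
        need = 2                 # 3-byte sequence
    elif 0xF0 <= lead <= 0xF4:
        need = 3                 # 4-byte sequence
    else:                        # invalid lead byte for UTF-8
        return fallback
    tail = data[i + 1:i + 1 + need]
    if len(tail) == need and all(0x80 <= b <= 0xBF for b in tail):
        return 'utf-8'
    return fallback

print(sniff_utf8('<title>æøå</title>'.encode('utf-8')))         # utf-8
print(sniff_utf8('<title>æøå</title>'.encode('windows-1252')))  # windows-1252
```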

However, there is one thing that should be added: the parser should 
default to UTF-8 even if it does not detect any UTF-8-ish non-ASCII. Is 
that part of your idea? Because if it does not behave like that, then 
it would work the way Google Chrome now works. Which means that for the 
following UTF-8-encoded (but charset-unlabelled) page, it defaults to 
UTF-8:

<!DOCTYPE html><title>æøå</title></html>

While for this - identical - page, it would default to the locale 
encoding, due to the use of ASCII-based character entities, which means 
it does not detect any UTF-8-ish bytes:

<!DOCTYPE html><title>&#xe6;&#xf8;&#xe5;</title></html>

A weird variant of the latter example is UTF-8-based data URIs, where 
all browsers that I could test (IE only supports data URIs in the @src 
attribute, including <script @src>) default to the locale encoding 
(apart from Mozilla Camino, which has character detection enabled by 
default):

data:text/html,<!DOCTYPE html><title>%C3%A6%C3%B8%C3%A5</title></html>

All three examples above should default to UTF-8, if the "border on 
sane" approach were applied.
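
(The point about the entity-based page can be seen directly in its 
bytes - a quick check of my own: the entity page is pure ASCII, so a 
UTF-8-validity sniffer finds no evidence in it, while the raw page 
contains UTF-8 lead bytes.)

```python
# The entity-only page contains no bytes >= 0x80, so byte-level UTF-8
# detection has nothing to work with; the raw-characters page does.
entity_page = b'<!DOCTYPE html><title>&#xe6;&#xf8;&#xe5;</title></html>'
raw_page = '<!DOCTYPE html><title>\u00e6\u00f8\u00e5</title></html>'.encode('utf-8')

print(all(b < 0x80 for b in entity_page))  # True: pure ASCII, no evidence
print(all(b < 0x80 for b in raw_page))     # False: UTF-8 bytes present
```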

> But trying to gain more statistical confidence
> about UTF-8ness than that would be bad for performance (either due to
> stalling stream processing or due to reloading).

So here you are saying [I think] that it is better to start presenting 
early, and eventually reload if the encoding choice turns out to be 
wrong during presentation, than to investigate too much in order to be 
absolutely certain before starting to present the page.

Later, at Jan 3 00:50:26 PST 2012, you added:
> And it's worth noting that the above paragraph states a "solution" to
> the problem that is: "How to make it possible to use UTF-8 without
> declaring it?"


> Adding autodetection wouldn't actually force authors to use UTF-8, so
> the problem Faruk stated at the start of the thread (authors not using
> UTF-8 throughout systems that process user input) wouldn't be solved.

If we take that logic to its end, then it would not make sense for the 
validator to display an error when a page containing a form is not 
UTF-8-encoded, either. Because, after all, the backend/whatever could 
be non-UTF-8-based. The only way to solve that problem on those 
systems would be to send form content as character entities. (However, 
even then the form page should still be UTF-8 in the first place, in 
order to be able to accept any content.)

[ Original letter continued: ]
>> Apart from UTF-16, Chrome seems quite aggressive w.r.t. encoding
>> detection. So it might still be a competitive advantage.
> It would be interesting to know what exactly Chrome does. Maybe
> someone who knows the code could enlighten us?

+1 (But their approach looks similar to the 'border on sane' approach 
you presented. Except that they seek to detect non-UTF-8 encodings as 
well.)
Leif Halvard Silli
Received on Tuesday, 3 January 2012 14:34:31 UTC
