- From: Ambrose Li <ambrose.li@gmail.com>
- Date: Tue, 3 Feb 2009 04:39:14 -0500
- To: Henri Sivonen <hsivonen@iki.fi>
- Cc: "Phillips, Addison" <addison@amazon.com>, "L. David Baron" <dbaron@dbaron.org>, Boris Zbarsky <bzbarsky@mit.edu>, "public-i18n-core@w3.org" <public-i18n-core@w3.org>, "www-style@w3.org" <www-style@w3.org>
2009/2/3 Henri Sivonen <hsivonen@iki.fi>:
> To me, it seems unreasonable to introduce serious performance-sensitive
> complexity into Web content consumers to address the case that a Web
> developer fails to supply HTML, CSS and JS in a *consistent* form in terms
> of combining characters. (I think even normalization in the HTML parser
> post-entity expansion would be undesirable.) How big a problem is it in
> practice that an author fails to be self-consistent when writing class names
> to .html and when writing them to .css or .js?

But you can't assume that HTML, CSS, and JS are written by the same
person, let alone all the HTML, CSS, and JS in a whole site. They might
not even be written by the same team (if the concept of "team" applies
at all), or by people in the same company (if that applies).

> In my opinion the most reasonable way for browsers to deal with
> normalization of identifiers is not to normalize before performing a string
> equality comparison. And then it's up to authors to be normalization-wise
> *self*-consistent in their HTML, CSS and JS.

This sounds like a recipe for trouble to me.

-- 
cheers,
-ambrose

The 'net used to be run by smart people; now many sites are run by
idiots. So SAD... (Sites that do spam filtering on mails sent to the
abuse contact need to be cut off the net...)
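[The mismatch under discussion can be sketched concretely. Under the raw code-point comparison Henri describes, a class name typed in precomposed form in the HTML and in decomposed form in the CSS will never match; the example below (Python, chosen only for illustration) shows the two spellings and what normalization would change.]

```python
import unicodedata

# "café" can be spelled two ways in Unicode:
nfc = "caf\u00e9"    # precomposed: U+00E9 LATIN SMALL LETTER E WITH ACUTE
nfd = "cafe\u0301"   # decomposed:  "e" followed by U+0301 COMBINING ACUTE ACCENT

# A raw code-point comparison treats them as different identifiers:
print(nfc == nfd)   # False

# Normalizing both sides to the same form (here NFC) makes them compare equal:
print(unicodedata.normalize("NFC", nfc) == unicodedata.normalize("NFC", nfd))   # True
```

So a selector `.café` written by the CSS author in one form silently fails to select elements whose class attribute was typed in the other form, which is exactly the cross-author inconsistency at issue.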
Received on Tuesday, 3 February 2009 09:39:54 UTC