- From: John Daggett <jdaggett@mozilla.com>
- Date: Mon, 16 Sep 2013 18:55:46 -0700 (PDT)
- To: Anne van Kesteren <annevk@annevk.nl>
- Cc: Addison Phillips <addison@lab126.com>, Richard Ishida <ishida@w3.org>, W3C Style <www-style@w3.org>, www International <www-international@w3.org>
After re-reading all the posts on this issue, at this point I don't think I see an issue that requires further consideration. The use of "valid Unicode codepoint" has been removed from the description of 'unicode-range' in the editor's draft.

In particular, I think Anne's point about surrogate handling [1] is completely orthogonal to the behavior of unicode-range:

> It seems weird to say it expresses a range of Unicode scalar values
> and then include U+D800 to U+DFFF in that range. And let's not use
> "characters" as that's a confusing term. Saying that the range is in
> code points but U+D800 to U+DFFF are ignored (rather than treated as
> an error) could make sense.

Non-Unicode encoding and surrogate handling issues are dealt with at levels above the one where font matching occurs. If you look carefully at the description of font matching, the range of codepoints defined by the 'unicode-range' descriptor is intersected with the underlying character map of the font. *That* is what defines the exact set of codepoints that are matched as part of the font matching algorithm.

Given that no font ever includes mappings from surrogate codepoints to glyphs, and no layout engine ever treats lone surrogates as individual codepoints, I don't see the need to adjust the definition of 'unicode-range'. Invalid codepoints like this will naturally be ignored under the existing definition of font matching.

Regards,

John Daggett

[1] http://lists.w3.org/Archives/Public/www-style/2013Sep/0318.html
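[Editorial note: a minimal sketch of the kind of rule under discussion; the family name, URL, and range are hypothetical and not taken from the thread. It illustrates why a declared range that happens to span the surrogate block is harmless under the matching behavior described above.]

```css
/* Hypothetical example: family name, URL, and range are illustrative only. */
@font-face {
  font-family: "Example Sans";
  src: url("example-sans.woff2") format("woff2");
  /* The declared range includes U+D800-DFFF, but font matching intersects
     this range with the font's character map (cmap). Since no font maps
     surrogate codepoints to glyphs, those codepoints simply never match. */
  unicode-range: U+0000-FFFF;
}
```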
Received on Tuesday, 17 September 2013 01:56:19 UTC