- From: John Daggett <jdaggett@mozilla.com>
- Date: Thu, 12 Sep 2013 18:53:52 -0700 (PDT)
- To: Richard Ishida <ishida@w3.org>
- Cc: W3C Style <www-style@w3.org>, www International <www-international@w3.org>
Richard Ishida wrote:

> 4.5. Character range: the unicode-range descriptor
> http://www.w3.org/TR/2013/WD-css-fonts-3-20130711/#unicode-range-desc
>
> "Valid Unicode codepoint values vary between 0 and 10FFFF inclusive."
> Do we need to say something about characters that cannot be used,
> such as surrogate codepoints?
>
> Perhaps what is meant is that the codepoint values cannot be higher
> than 10FFFF or lower than 0. In this case, perhaps the spec should
> say that the codepoint space (range) is between 0 and 10FFFF, rather
> than give the impression that all values in that space are
> acceptable.

Hmm, unicode ranges are used to indicate *possible* coverage ranges for
fonts. The actual range used in font matching is ultimately determined
by the intersection of the unicode-range descriptor value with the
actual character map of the font. There's no attempt to separate actual
"valid" Unicode values from ones that are invalid. I don't think I see a
need here to discuss the nitty-gritty of surrogate handling.

Cheers,

John Daggett
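
P.S. A rough sketch of how this plays out in a stylesheet (the family
name, URL and ranges below are made up for illustration): the descriptor
only narrows which characters the font is *considered* for; the font's
own character map still has the final say.

    @font-face {
      font-family: "Example Sans";             /* hypothetical family name */
      src: url("example-sans-greek.woff");     /* hypothetical font URL */
      /* declares possible coverage only; matching uses the intersection
         of this range with the font's actual character map */
      unicode-range: U+0370-03FF, U+1F00-1FFF; /* Greek blocks, for example */
    }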
Received on Friday, 13 September 2013 01:54:23 UTC