Re: [css-fonts-3] i18n-ISSUE-295: U+ in unicode-range descriptor

John C Klensin wrote:

>>> 4.5. Character range: the unicode-range descriptor
>>> http://www.w3.org/TR/2013/WD-css-fonts-3-20130711/#unicode-range-desc
>>> 
>>> 'Each <urange> value is a UNICODE-RANGE token made up of a
>>> "U+" or "u+"  prefix followed by a codepoint range'. The U+
>>> is not always needed  before every codepoint value (eg. in a
>>> range).
>>> 
>>> Why do we need the U+/u+ ?  It would be easier to just use
>>> bare hex  codepoints, especially for ranges, where U+ is only
>>> used at the start  anyway.
>> 
>> As Tab has already pointed out, the unicode range syntax was
>> part of CSS 2.1 syntax and the descriptor itself is already
>> supported by multiple implementations, so it's not appropriate
>> to make a change like this at this point.
> 
> After thinking about this a bit more, there is another reason.
> U+[N[N]]NNNN rather clearly identifies a Unicode code point --
> independent of the particular encoding/representation -- in general
> practice.  By contrast, "0x...." and its syntactic equivalents takes
> us back into the question of whether it is a Unicode code point or,
> e.g., UTF-16 or hexified UTF-8.   So there is also a slight argument
> for U+.... on grounds of clarity and precision.

The existing unicode range syntax has been part of the tokenizer since
CSS 2.1, and both WebKit and IE support the @font-face unicode-range
descriptor syntax defined in the current spec.  So regardless of the
subtle advantages of one syntax over another, I think the point is
moot at this stage.  Unless we feel there is a strong reason to break
existing implementations, we need to live with the current syntax.
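
For reference, a minimal @font-face rule using the descriptor as
currently specified might look like the sketch below (the font family
name and URL are placeholders, not real resources):

    @font-face {
      font-family: "ExampleGreek";        /* placeholder name */
      src: url(example-greek.woff);       /* placeholder URL */
      /* single code point, a range, and a wildcard range
         (U+4?? covers U+0400 through U+04FF) */
      unicode-range: U+26, U+370-3FF, U+4??;
    }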

Regards,

John Daggett

Received on Friday, 13 September 2013 12:58:21 UTC