
[css3-fonts] Error handling of unicode-range

From: Simon Sapin <simon.sapin@exyr.org>
Date: Tue, 21 May 2013 19:50:15 +0800
Message-ID: <519B5F77.4020901@exyr.org>
To: www-style <www-style@w3.org>

I like the general direction of today's edits on unicode-range, but I'm 
still a bit confused by this paragraph:

> For interval ranges, the start and end codepoints must be valid
> Unicode values and the end codepoint must be greater than or equal to
> the start codepoint. Wildcard ranges specified with '?' that lack an
> initial digit (e.g. "U+???") are valid and treated as if there was a
> single 0 before the question marks (thus, "U+???" = "U+0???" =
> "U+0000-0FFF"). "U+??????" is not a syntax error, even though
> "U+0??????" would be. Wildcard ranges that extend beyond the end of
> valid codepoint values are clipped to the range of valid codepoint
> values. Ranges that do not conform to these restrictions are
> considered parse errors and the descriptor is omitted.
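As a reading aid, here is a minimal sketch (my own illustration, not the actual CSS parser, with a hypothetical `expand_wildcard` helper) of how the quoted wildcard rules could be interpreted:

```python
# Sketch of the quoted wildcard-range rules; names are illustrative only.
MAX_CODEPOINT = 0x10FFFF  # last valid Unicode codepoint

def expand_wildcard(token):
    """Expand a 'U+' wildcard token into a (start, end) codepoint pair."""
    hex_part = token[2:]                    # strip the "U+" prefix
    digits = hex_part.rstrip('?')           # leading hex digits, if any
    wildcards = len(hex_part) - len(digits)
    # A missing initial digit is treated as a single leading 0,
    # so "U+???" parses the same as "U+0???".
    start = int((digits or '0') + '0' * wildcards, 16)
    end = int((digits or '0') + 'F' * wildcards, 16)
    # Wildcard ranges extending past the last valid codepoint are clipped.
    return (min(start, MAX_CODEPOINT), min(end, MAX_CODEPOINT))

print(expand_wildcard("U+???"))     # → (0x0000, 0x0FFF)
print(expand_wildcard("U+??????"))  # clipped → (0x0000, 0x10FFFF)
```

(This sketch does not check the six-hex-digit length limit that makes "U+0??????" a syntax error.)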

In particular, it's not clear exactly what the error handling is in the 
various cases. As I understand it, there are two possible ways to handle 
some of the "bad" ranges, and "omitted" could mean either:

a. Drop the whole declaration. Other specs often say "invalid" for this, 
sometimes referencing one of these:


b. Consider that a given unicode-range token represents an empty range. 
Since the overall value of the descriptor is the union of all its 
ranges, an empty range is neutral.
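To make interpretation (b) concrete, here is a small sketch (my own, with a hypothetical `union_of_ranges` helper) of union semantics in which an empty range contributes nothing:

```python
# Sketch of interpretation (b): the descriptor's value is the union of
# all its ranges, so an "empty range" token is neutral under the union.
def union_of_ranges(ranges):
    """Union a list of (start, end) pairs; end < start means empty."""
    covered = set()
    for start, end in ranges:
        if end < start:
            continue  # empty range: drops out of the union
        covered.update(range(start, end + 1))
    return covered

# A bad range (end before start, here 0x99-0x00) simply drops out:
assert union_of_ranges([(0x41, 0x5A), (0x99, 0x00)]) == \
       union_of_ranges([(0x41, 0x5A)])
```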

I think that changing the terminology to "invalid declaration" and 
"empty range" would help.

Simon Sapin
Received on Tuesday, 21 May 2013 11:50:48 UTC
