[css3-fonts] Error handling of unicode-range

Hi,

I like the general direction of today’s edits on unicode-range, but I’m 
still a bit confused by this paragraph:

> For interval ranges, the start and end codepoints must be valid
> Unicode values and the end codepoint must be greater than or equal to
> the start codepoint. Wildcard ranges specified with ‘?’ that lack an
> initial digit (e.g. "U+???") are valid and treated as if there was a
> single 0 before the question marks (thus, "U+???" = "U+0???" =
> "U+0000-0FFF"). "U+??????" is not a syntax error, even though
> "U+0??????" would be. Wildcard ranges that extend beyond the end of
> valid codepoint values are clipped to the range of valid codepoint
> values. Ranges that do not conform to these restrictions are
> considered parse errors and the descriptor is omitted.

In particular, it’s not clear exactly what the error handling is in 
the various cases. As I understand it, there are two possible ways to 
handle some of the "bad" ranges, and "omitted" could mean either:

a. Drop the whole declaration. Other specs often say "invalid" for this, 
sometimes referencing one of these:

http://www.w3.org/TR/CSS21/syndata.html#illegalvalues
http://www.w3.org/TR/CSS21/conform.html#ignore

b. Consider that a given unicode-range token represents an empty range. 
Since the overall value of the descriptor is the union of all its 
ranges, an empty range is neutral and effectively ignored. (See the 
example after this list.)
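
To make the difference concrete, here is a hypothetical rule (the font 
name and URL are made up, and the second range is deliberately 
malformed because its end is before its start):

    @font-face {
      font-family: "My Font";
      src: url(myfont.woff);
      /* Second range is malformed: end codepoint is before the start. */
      unicode-range: U+00-7F, U+FF-80;
    }

Under (a), the whole unicode-range declaration is dropped and the 
descriptor falls back to its initial value (U+0-10FFFF); under (b), 
only the malformed token becomes an empty range and U+00-7F still 
applies. The two interpretations can give quite different matching 
behaviour.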

I think that changing the terminology to "invalid declaration" (for 
case a) or "empty range" (for case b) would make the intended behavior 
unambiguous.

-- 
Simon Sapin

Received on Tuesday, 21 May 2013 11:50:48 UTC