- From: fantasai <fantasai.lists@inkedblade.net>
- Date: Tue, 21 May 2013 14:06:49 +0800
- To: www-style@w3.org
On 05/21/2013 12:48 PM, John Daggett wrote:
>
> fantasai wrote (regarding the @font-face rule unicode-range descriptor):
>
>> # Without any valid ranges, the descriptor is omitted.
>>
>> I disagree with this. If there aren't any valid ranges,
>> the range should be the null set, not *everything*.
>
> I think there are a couple reasons why this isn't a good idea. I
> spec'ed the behavior above because it's roughly equivalent to the
> handling of properties with invalid values:
>
>   font-style: italic;
>   font-style: whizzy;  /* invalid value, font-style: italic used */
>
> Ignoring a descriptor when the ranges aren't valid allows future
> syntax to be added in a way that an author can also include descriptor
> declarations for older user agents.

This totally makes sense to me for the invalid ranges that we're
considering parse errors. But there are a bunch of cases that are
parsed and kept, yet result in no valid range. Specifically, these cases:

  - "ranges that descend (e.g. U+400-32f) are invalid and omitted
    rather than treated as parse errors"

  - "Ranges are clipped to the domain of Unicode code points
    (currently 0 – 10FFFF inclusive); a range entirely outside the
    domain is omitted"

If it's a parse error, sure, throw the entire declaration out.
But I find it a problem to have

  unicode-range: U+0065, U+400-32f;  /* results in range U+0065 */
  unicode-range: U+400-32f;          /* results in all of Unicode */

Removing a valid <urange> expands the range! I hope you understand
why I think this makes no sense. :)

~fantasai
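[Editor's note: the inconsistency fantasai describes can be sketched in Python. This is a hypothetical model, not code from any user agent or from the spec; the function names and the `spec_behavior` flag are invented for illustration. It contrasts the spec'ed fallback (no valid ranges → descriptor dropped, initial value "all of Unicode" wins) with the proposed null-set behavior.]

```python
# Hypothetical sketch of unicode-range handling; names are invented.
FULL_RANGE = (0x0, 0x10FFFF)  # the whole Unicode code point domain

def parse_urange(token):
    """Parse 'U+xxxx' or 'U+xxxx-yyyy'; return (lo, hi), or None for a
    descending range or one entirely outside Unicode (omitted, per spec)."""
    body = token[2:]  # strip the "U+" prefix
    if "-" in body:
        lo_s, hi_s = body.split("-")
        lo, hi = int(lo_s, 16), int(hi_s, 16)
    else:
        lo = hi = int(body, 16)
    if lo > hi:          # descending, e.g. U+400-32f: omitted, not a parse error
        return None
    if lo > 0x10FFFF:    # entirely outside the domain: omitted
        return None
    return (lo, min(hi, 0x10FFFF))  # clip to the Unicode domain

def effective_ranges(tokens, spec_behavior=True):
    """Compute the ranges a unicode-range declaration ends up covering."""
    ranges = [r for r in (parse_urange(t) for t in tokens) if r is not None]
    if not ranges:
        # spec'ed: descriptor ignored, so the initial value (everything) applies;
        # fantasai's proposal: the null set instead.
        return [FULL_RANGE] if spec_behavior else []
    return ranges

# The problem: deleting a *valid* <urange> expands the covered range.
print(effective_ranges(["U+0065", "U+400-32f"]))  # [(0x65, 0x65)]
print(effective_ranges(["U+400-32f"]))            # [(0x0, 0x10FFFF)] — all of Unicode
```

Under the proposed null-set behavior, `effective_ranges(["U+400-32f"], spec_behavior=False)` would instead yield an empty list, so dropping a valid range could never enlarge coverage.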
Received on Tuesday, 21 May 2013 06:07:20 UTC