- From: fantasai <fantasai.lists@inkedblade.net>
- Date: Thu, 15 Nov 2007 16:43:14 -0500
- To: Addison Phillips <addison@yahoo-inc.com>
- CC: www-style@w3.org, 'WWW International' <www-international@w3.org>
Addison Phillips wrote:
>
> Hi Fantasai,
>
> Interestingly, this question came up in my review of XmlHttpRequest just
> yesterday. I believe that what you want is:
>
> - You want to define it in terms of the Unicode definition.
>
> - You also probably want to define it in deterministic terms, rather
>   than allowing it to be language sensitive. This means *not* using
>   SpecialCasing.txt or language-specific tailorings (e.g. the
>   Turkish/Azerbaijani dotted/dotless i mappings).

I'd be happy with that if [a-z] and [A-Z] matched each other and didn't
match anything else. But it seems that's not the case in Unicode.

> I would tend to say that otherwise you want case-insensitivity to apply
> regardless of script (for all scripts that have a case distinction).
> Or, to address your questions:
>
> fantasai wrote:
>>
>> Henri Sivonen brings up the point that ASCII case-insensitivity and
>> Unicode case-insensitivity are not the same and that we should define
>> what we want for CSS. For example, should WIDTH and WİDTH match?
>
> No, they shouldn't.
>
>> WİDTH and width?
>
> Hmm... probably these should.

If WİDTH and width match, and width and WIDTH match, then WİDTH and
WIDTH need to match.

I personally don't think WİDTH or the dotless i should match the 'i'
in CSS syntax.

~fantasai
Received on Thursday, 15 November 2007 21:43:52 UTC
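
[Editor's note: a minimal sketch, not part of the original thread, contrasting ASCII-only case-insensitive matching with Unicode default (language-independent) full case folding, using Python's built-in str.casefold(); the ascii_casefold helper is a hypothetical illustration, not anything from CSS or the thread.]

    # Sketch: ASCII case-insensitivity vs. Unicode default full case folding.
    # All behavior shown is standard Python 3 / Unicode; the helper is illustrative only.

    def ascii_casefold(s: str) -> str:
        """Lowercase only the ASCII letters A-Z, leaving everything else untouched."""
        return s.translate({c: c + 0x20 for c in range(ord('A'), ord('Z') + 1)})

    WIDTH_ASCII = "WIDTH"           # plain ASCII capital I (U+0049)
    WIDTH_DOTTED = "W\u0130DTH"     # U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE

    # ASCII case-insensitivity: [A-Z] matches [a-z] and nothing else.
    assert ascii_casefold(WIDTH_ASCII) == "width"
    assert ascii_casefold(WIDTH_DOTTED) != "width"   # U+0130 is left as-is

    # Unicode default full case folding (no language tailoring):
    # U+0130 folds to "i" followed by U+0307 COMBINING DOT ABOVE, so even
    # here WİDTH does not compare equal to plain "width" code point for
    # code point.
    assert WIDTH_DOTTED.casefold() == "wi\u0307dth"
    assert WIDTH_DOTTED.casefold() != "width"

    # Mapping U+0130 straight to "i" requires the Turkish/Azerbaijani
    # tailoring, i.e. exactly the language-sensitive behavior the thread
    # argues against for CSS.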