- From: Tab Atkins Jr. <jackalmage@gmail.com>
- Date: Thu, 24 Jan 2013 10:39:15 -0800
- To: Simon Sapin <simon.sapin@kozea.fr>
- Cc: www-style list <www-style@w3.org>
On Thu, Jan 24, 2013 at 7:49 AM, Simon Sapin <simon.sapin@kozea.fr> wrote:
> Apparently at-keywords, function names and dimension units are normalized to
> lower-case in CSSOM serialization. Should that normalization happen as early
> as in the tokenizer?
>
> Until tokens or "primitives" are exposed in some API this might only be an
> implementation concern and irrelevant to the spec. I’m not sure.

Yeah, I'm not sure either. I've added an issue to the spec to keep it in mind, though.

My thought is that we shouldn't worry about it in the parser, though. While some tokens are *definitely* always language-defined, others are only language-defined in certain contexts (idents), and I think it would be weird to expose a token stream that was only partially lowercased.

And even my assertion that some tokens are definitely always language-defined is incorrect, as the value of a custom property is completely author-defined, and may never be used in an actual CSS value. An author could, for example, put SVG path data into a custom property, for consumption by a script that uses it to do something to the element, and casing is important there.

So really, *every* token is potentially context-sensitively case-sensitive.

~TJ
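
[Editor's note: a hypothetical sketch of the SVG-path-in-a-custom-property
case described above, using illustrative names; SVG path commands are
case-sensitive, so "M"/"L" (absolute moveto/lineto) and "m"/"l" (relative)
mean different things.]

    .shape {
      /* Author-defined value, never consumed as a CSS value: a script
         reads it back and applies it to the element as SVG path data.
         Lowercasing these tokens in the tokenizer would silently turn
         absolute commands into relative ones and change the geometry. */
      --outline: M 10 10 L 90 10 L 90 90 Z;
    }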
Received on Thursday, 24 January 2013 18:40:02 UTC