- From: Simon Sapin <simon.sapin@kozea.fr>
- Date: Thu, 24 Jan 2013 20:36:57 +0100
- To: "Tab Atkins Jr." <jackalmage@gmail.com>
- CC: www-style list <www-style@w3.org>
On 24/01/2013 19:39, Tab Atkins Jr. wrote:
> On Thu, Jan 24, 2013 at 7:49 AM, Simon Sapin <simon.sapin@kozea.fr> wrote:
>> Apparently at-keywords, function names and dimension units are
>> normalized to lower-case in CSSOM serialization. Should that
>> normalization happen as early as in the tokenizer?
>>
>> Until tokens or "primitives" are exposed in some API this might only
>> be an implementation concern and irrelevant to the spec. I’m not sure.
>
> Yeah, I'm not sure either. I've added an issue to the spec to keep it
> in mind, though.
>
> My thought is that we shouldn't worry about it in the parser, though.
> While some tokens are *definitely* always language-defined, others are
> only language-defined in certain contexts (idents), and I think it
> would be weird to expose a token stream that was only partially
> lowercased.
>
> And even my assertion that some tokens are definitely always
> language-defined is incorrect, as the value of a custom property is
> completely author-defined, and may never be used in an actual CSS
> value. An author could, for example, put SVG path data into a custom
> property, for consumption by a script that uses it to do something to
> the element, and casing is important there. So really, *every* token
> is potentially context-sensitively case-sensitive.

OK. Both reasons sound good enough not to do this.

--
Simon Sapin
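
To make the custom-property example above concrete, here is a minimal sketch. The property name --shape-path, the element ids, and the use of the current --* custom property syntax are illustrative assumptions, not anything taken from the thread; the point is only that lowercasing idents at tokenization time would turn the absolute SVG path commands "M" and "L" into the relative commands "m" and "l".

    /* Author-defined value: SVG path data kept in a custom property.
       (--shape-path and the --* syntax are used purely for illustration.)
       If the tokenizer lowercased idents, the absolute commands "M" and
       "L" would become the relative commands "m" and "l", silently
       changing the path. */
    #shape { --shape-path: M 10 10 L 90 90 Z; }

    // A script reads the raw value back and hands it to an SVG element;
    // no CSS value grammar is ever involved, so only the author's casing
    // is meaningful here.
    const host = document.getElementById('shape');
    const d = getComputedStyle(host).getPropertyValue('--shape-path').trim();
    document.querySelector('path').setAttribute('d', d);
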
Received on Thursday, 24 January 2013 19:37:22 UTC