- From: Henri Sivonen <hsivonen@iki.fi>
- Date: Mon, 13 Oct 2003 12:30:34 +0300
- To: www-style@w3.org
On Thursday, Oct 9, 2003, at 20:03 Europe/Helsinki, Ian Hickson wrote:

> On Wed, 8 Oct 2003, Paul Grosso wrote:
>> [...] I object to saying that it is non-compliant to the CSS spec for
>> any process--especially one such as an authoring tool or other special
>> processor--to base selector action on DTD defaults. Certainly, there
>> are many XML processes that read the DTD when creating the document
>> tree, and the current wording makes these tools non-compliant with
>> CSS2.1.

Also, I think that requiring different treatment of an attribute
depending on whether it was defaulted is a bad idea. In general, I
think treating XML differently depending on syntactic details that
aren't supposed to have semantic significance is a bad idea. That is,
in practical terms, if an application needs to know more than is
exposed via the SAX ContentHandler interface (excluding the qName of
elements), the design of the application is very likely flawed in some
way.

> Interoperability is _the_ primary goal of CSS2.1. As we cannot
> require all XML processors to be validating parsers, we cannot rely
> on information within the DTD for selector matching, and in order to
> have interoperability, we must therefore require that all UAs
> _ignore_ such information.

You could state that CSS UAs which are Web browsers should not process
external entities when parsing XML. The interoperability problems that
stem from external DTDs aren't limited to attribute defaulting. OTOH,
things declared in the internal DTD subset aren't interoperability
problems to begin with.

> The requirement that the processor be able to tell if the attribute
> was defaulted or not is already made by DOM, and was therefore not
> considered to be a new requirement. [1]

The DOM can expose things that really shouldn't matter to most
applications.
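The SAX point above can be made concrete. A minimal sketch (my own
illustration, using Python's expat-backed xml.sax — nothing from the
thread itself): an attribute supplied only by an ATTLIST default in
the internal DTD subset reaches the ContentHandler exactly like an
explicitly written one, so a selector-matching layer built on this
interface has no way to treat the two differently.

```python
import io
import xml.sax

# The role attribute on <root> exists only as a DTD default; it is
# never written in the document instance.
DOC = b"""<!DOCTYPE root [
  <!ATTLIST root role CDATA "main">
]>
<root/>"""

class Capture(xml.sax.ContentHandler):
    """Record the attributes reported for the root element."""
    def __init__(self):
        super().__init__()
        self.attrs = None

    def startElement(self, name, attrs):
        if self.attrs is None:
            self.attrs = dict(attrs)

handler = Capture()
xml.sax.parse(io.BytesIO(DOC), handler)
# The defaulted attribute arrives with no "was defaulted" flag:
print(handler.attrs)
```

A parser that skipped the DTD entirely would report no attributes at
all for the same document, which is exactly the interoperability gap
being argued about.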
The DOM can distinguish between CDATA sections and normal text nodes,
can expose entity references, can expose comments, and can expose the
literal text of the internal subset. Still, having selectors match on
CDATA-ness, comment adjacency or doctype would be bad ideas. Besides,
isn't static CSS rendering supposed to be implementable without the
W3C DOM?

> The allowance that processors not have to read DTDs was made by XML,
> and is therefore also not new. [2]

Is there anything that prevents the CSS WG from recommending that Web
browsers opt not to read the external DTD subset?

> Finally, XHTML already requires that content be written in such a
> way that the resulting DOM be equivalent whether or not the document
> is read via a validating or non-validating parser (namely, the
> #FIXED attribute on the root element is required to be in the
> document despite being #FIXED), and thus we felt there was precedent
> for this decision. [3]

If the DOM trees are required to be equivalent in any case, then
surely the CSS spec doesn't need to say anything about it. :-)

For XHTML family documents whose DTD is based on the Modularization of
XHTML, the DOM trees are equivalent only if the xmlns attributes are
omitted in the DOM or explicitly specified for every element. The
XHTML DTD modules default the xmlns attribute on every element. This
means that it is impractical to make an XHTML 1.1 document standalone
(as specified in the XML spec). Infoset augmentation is problematic. I
think using Relax NG over DTDs is the right way to go for XHTML 2.

> All three of the above-referenced specifications are already in REC
> stage.

Making requirements *in a rendering spec* about what an XML processor
should expose seems to me to be on track to end up in a similar class
of requirements to those you found "pathetic" and "inappropriate" in
one of those RECs.
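The CDATA point is easy to demonstrate. A minimal sketch (again my own
illustration, using Python's xml.dom.minidom): the DOM keeps a CDATA
section and an ordinary text node apart as distinct node types, even
though the character data they carry is semantically identical —
exactly the kind of syntactic detail a selector should never be able
to match on.

```python
from xml.dom import minidom

# Two runs of character data, one plain and one wrapped in a CDATA
# section; the markup difference has no semantic significance.
doc = minidom.parseString("<p>plain<![CDATA[ & raw ]]></p>")
kids = doc.documentElement.childNodes

# The DOM nevertheless keeps the syntactic distinction alive:
for node in kids:
    print(node.nodeType, repr(node.data))
# nodeType 3 is TEXT_NODE and 4 is CDATA_SECTION_NODE, yet both carry
# ordinary character data.
```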
See http://lists.w3.org/Archives/Public/www-talk/2001MayJun/0141.html

--
Henri Sivonen
hsivonen@iki.fi
http://www.iki.fi/hsivonen/
Received on Tuesday, 14 October 2003 00:53:56 UTC