- From: Bless Terje <link@rito.no>
- Date: Wed, 2 Feb 2000 18:09:57 +0100
- To: "'Dan Connolly'" <connolly@w3.org>, "'W3C Validator'" <www-validator@w3.org>
>>You can't make XHTML the default for documents without a
>>DOCTYPE; it'll break just about anything out there.
>
>Er... you mean it'll start complaining that everything out
>there is broken, no?

Yes. And this constitutes "broken" behaviour.

>Was doctype-sniffing a documented feature of the validator?

No.

>If so, I think Gerald's idea makes sense:
>  "I'm assuming XHTML; if you don't want that, here's
>   info on adding an HTML doctype..."

Not when it's hidden inside about a gazillion other messages.

>If you're talking about backwards compatibility with HTML specs,
>none was promised for documents with no <!DOCTYPE...>:

Duh! Ya think? I'm talking about backwards compatibility with its own
behaviour and with what users will expect.

>>The only way to handle this that won't break badly is to assume that
>>text/xml is XML, text/xhtml is XHTML, text/html is HTML 4.01[0],
>>unless a DOCTYPE is given in which case the DOCTYPE is used.
>
>Er... would you please support that claim with some evidence or
>an argument? I find that XHTML served up as text/html works quite
>nicely; e.g.

You are living under the mistaken assumption that a particular browser's
rendering has anything at all to do with a document's validity. text/html
is the MIME type for the application of SGML known as "HTML", and XHTML
can never be valid SGML AFAIK. The only reason it is perceived to "[work]
quite nicely" is that most browsers out there aren't actually SGML
processors.

/me steps up onto the soapbox...

The XHTML 1.0 Recommendation is riddled with this kind of thing. Instead
of designing something with equivalent functionality, the XHTML Working
Group has shoehorned HTML into XML. Round Peg, meet Square Hole! They
also fail to take into account anything other than M$IE and Nutscrape in
the definition of User Agent.

After very successfully producing HTML 4.0 Strict, they took a step back
by saying it was OK to use Transitional, so nobody ever bothered to try
for Strict (not even the W3C itself!). And another two steps back with
this HTML-compatible-XHTML lunacy. The point was to move away from
physical markup and towards structural markup; this was supposed to be
XML's "Killer App". Instead they came up with the same ridiculous
"Transitional" DTD concept and are actually stating explicitly that what
matters is how a page looks in a particular set of browsers. Argh!

Add to that the arbitrary choice of all-lowercase element names, one of
the worst choices they could have made. But the absolute worst bit was
the ridiculous idea that XHTML should be served as text/html. Whoever
came up with that bogosity should be promptly defenestrated before s/he
can do any more damage!

/me takes a deep breath and steps down again.

>XHTML is the only HTML dialect where a <!DOCTYPE...> isn't required,
>so it makes perfect sense to check for XHTML when you don't see one.

It makes *no* sense, because XHTML is not a "dialect" of HTML by any
_meaningful_ definition of the word.

>> I was afraid this was due to bugs in my DOCTYPE guessing code,
>
>The whole idea of DOCTYPE guessing was pretty goofy, if you ask
>me. It just seems to encourage folks to put documents on
>the web that don't match the specs, and there's plenty of tools to
>help you do that without adding the validator to the list ;-)

This, OTOH, I agree with. That's why the sniffer code was scheduled to be
taken out back so we could put a bullet through its head!
The replacement would be a DOCTYPE-override feature, possibly with an option to try to guess the DOCTYPE.
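A minimal sketch of the handling argued for above, assuming hypothetical
names throughout; the validator itself was written in Perl and this is
not its code, just an illustration of the rule "an explicit DOCTYPE
wins, otherwise the Content-Type decides, and text/html without a
DOCTYPE defaults to HTML 4.01 rather than a guessed XHTML":

    import re

    # Matches an explicit DOCTYPE declaration and captures its public
    # identifier, e.g. "-//W3C//DTD HTML 4.01//EN".
    DOCTYPE_RE = re.compile(
        r'<!DOCTYPE\s+html\s+PUBLIC\s+"([^"]+)"', re.IGNORECASE)

    # Per-MIME-type fallbacks when no DOCTYPE is present. text/xhtml was
    # a type discussed on the list, not a registered one.
    MIME_DEFAULTS = {
        "text/xml": "XML",
        "text/xhtml": "XHTML 1.0",
        "text/html": "HTML 4.01",  # text/html is the SGML application
    }

    def pick_parse_mode(content_type: str, document: str) -> str:
        """Return the grammar to validate against (hypothetical helper)."""
        match = DOCTYPE_RE.search(document)
        if match:
            # A declared DOCTYPE is authoritative.
            return match.group(1)
        # No DOCTYPE: fall back on the MIME type instead of sniffing.
        mime = content_type.split(";")[0].strip()
        return MIME_DEFAULTS.get(mime, "HTML 4.01")

    # e.g. pick_parse_mode("text/html", "<html>...</html>") -> "HTML 4.01"

A DOCTYPE-override feature would simply bypass pick_parse_mode() and use
whatever grammar the user selected.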
Received on Wednesday, 2 February 2000 12:09:47 UTC