From: William F Hammond <hammond@csc.albany.edu>
Date: Thu, 17 Apr 2008 00:41:00 -0400
To: whatwg@whatwg.org, public-html@w3.org, www-math@w3.org, www-svg@w3.org
Previously:

  Yes, but the point is, once a user agent begins to sniff, there's
  no rational excuse for it not to recognize compliant
  xhtml+(mathml|svg).

>> What obstacles to this exist?
>
> The Web.

Really!?!

And then:

>>> The Web.
>>
>> Really!?!
>
> Yes, see for instance:
>
> http://lists.w3.org/Archives/Public/public-html/2007Aug/1248.html

Taylor's comment is mainly about what happens when a user agent
confuses tag soup with good xhtml.  It is a different question how a
user agent decides what it is looking at.

Whether there is one mimetype or two, erroneous content will need
handling.  The experiment begun around 2001 of "punishing" bad
documents in application/xhtml+xml seems to have led to that mimetype
not being much used.  So user agents need to learn how to recognize
the good and the bad in both mimetypes.  Otherwise you have Gresham's
Law: the bad documents will drive out the good.

The logical way to go might be this:

If the document has a preamble beginning with "^<?xml ", a sensible
xhtml DOCTYPE declaration, or a first element "<html xmlns=...>",
then handle it as xhtml unless and until it proves to be
non-compliant xhtml (e.g., not well-formed xml, unquoted attributes,
munged handling of xml namespaces, ...).  At the point where it
proves to be bad xhtml, reload it and treat it as "regular" html.

Most bogus xhtml will then be 1 or 2 seconds slower than good xhtml.
Astute content providers will notice that and do something about it.
It provides a feedback mechanism for making the web better.

                                    -- Bill
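A minimal sketch of the detect-then-fall-back idea described above,
in Python.  The helper names and the use of xml.etree as a stand-in
for a strict xhtml parser are illustrative assumptions, not part of
the original message or of any actual user agent:

    import re
    import xml.etree.ElementTree as ET

    def looks_like_xhtml(text):
        # Cheap sniff on the first bytes: an xml preamble, an xhtml
        # DOCTYPE declaration, or a root <html> element carrying xmlns.
        head = text[:1024]
        return (head.startswith("<?xml ")
                or re.search(r'<!DOCTYPE\s+html\s+PUBLIC\s+"-//W3C//DTD XHTML',
                             head) is not None
                or re.search(r'<html[^>]*\sxmlns=', head) is not None)

    def handle_document(text):
        # Try the strict path first; on any well-formedness error,
        # fall back and reparse the document as "regular" html.
        if looks_like_xhtml(text):
            try:
                ET.fromstring(text)        # stand-in for a strict xhtml parse
                return "rendered as xhtml"
            except ET.ParseError:
                pass                       # proved to be bad xhtml
        return "reparsed as regular html"  # the slower, tag-soup path

In this sketch a document that carries the xhtml markers gets the
strict treatment first, and only documents that fail well-formedness
pay the reload penalty described above.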
Received on Thursday, 17 April 2008 04:41:35 UTC