From: Ian Hickson <ian@hixie.ch>
Date: Wed, 27 Jun 2001 15:59:28 -0700 (Pacific Daylight Time)
To: "William F. Hammond" <hammond@csc.albany.edu>
cc: <www-talk@w3.org>
On Wed, 27 Jun 2001, William F. Hammond wrote:
>
> [...]
> My understanding is that the overlap is provided for the purpose of
> making it possible for content providers to bring up XHTML documents
> that are usable -- at least to some extent -- in old user agents.

As I pointed out in my old post, this is not a requirement. IMHO, before
authors write XHTML, UAs should support XHTML. Would any web author bother
to use PNGs if UAs didn't support PNGs? What's the advantage of writing
documents that claim to be XHTML if all the UAs simply treat them as tag
soup anyway?

Let's get our priorities right: first, UAs should support the new specs.
Then, once those UAs are widely distributed, authors can begin to use
XHTML. Did people use CSS before UAs supported CSS?

> If the web is to move beyond tag soup in a smooth way, I think it
> clear that text/html should be the primary content-type for all XML
> document types that are extensions of the historic HTML language and
> that have been prepared to degrade to tag soup.

This is where we disagree. I think that if the web is to move beyond tag
soup in a smooth way, we should wait until the majority of the web
population is ready to accept text/xml markup. (Preferably with client-side
schema validation, so that markup for known vocabularies like XHTML and
MathML is restricted even beyond well-formedness.) After all, what's the
rush?

> It would be outrageous for a new XHTML-capable user agent to deny
> content providers the reward for this effort

What reward? Any document that complies with Appendix C will be rendered
identically whether the UA uses a tag soup model or an XML model.

> The writers of XHTML capable user agents need to understand the not
> very complicated subtleties of document prolog construction that arise
> with XHTML in order to be able to smoothly service old and new.

Could you expand on this?

> This is not a run time performance hit.
In my experience working on Mozilla's performance, where every millisecond
is examined, everything is a runtime performance hit. Mozilla is too slow
already. It won't be slowed further.

> If Amaya can do it, then the big guys can do it, too.

Amaya can't do it. Try browsing to this valid HTML 4 document:

   http://www.damowmow.com/mozilla/html-not-xml.html

No luck? Try this well-formed XML document:

   http://www.damowmow.com/mozilla/xml-not-xhtml.xml

Mozilla renders both of those correctly. (Well, almost. There are a few CSS
errors in the second one if you resize the window.)

> Ian wrote in reply to Arjun:
>
>>> The idea that non-geeks should respect geeky niceties is Canutism at
>>> its worst. "Zero tolerance" is one thing if end-users can be made to
>>> expect it; it's another when precisely the opposite is the
>>> expectation being sold to the public.
>>
>> I would tend to agree with this. I don't think we (the W3C and its
>> community) should be bothering to promote "compatibility" of XHTML and
>> Tag Soup. Here is how I think it should work:
>
> XHTML and tag soup are very different. The point, however, is that
> there is an easy way for most XHTML, strictly conforming or not, to be
> prepared so that it qualifies both as XHTML in a new user agent and as
> tag soup in an old user agent.

Correct. And any document which is prepared in this way (conforming to
Appendix C of XHTML 1) will render the same whether treated as tag soup or
as XHTML; indeed, that's the whole point.

> Check out Amaya, which yells about XHTML but not about tag soup.

Or Netscape 6.1 PR1, which treats an Appendix C-conformant document
correctly, and treats all XHTML sent as text/xml correctly. (Modulo a dozen
or so known minor bugs and the fact that it ignores the silly parts of the
conformance requirements; see my recent post about this.)

> ... but Amaya will yell about problems in XHTML, regardless of mime
> type.

How does it know it's XHTML?
It's a heuristic, and one which is suboptimal, since it fails on the page I
mentioned above:

   http://www.damowmow.com/mozilla/html-not-xml.html

>> 4. Document authors use XHTML (text/xml).
>> Step 4 is in the future.
>
> Step 4 is realized by Amaya, which handles XHTML either as text/html
> or as text/xml, though there is no justification in any XHTML-related
> specification for the serving of XHTML as text/xml. Still, it would
> appear to be justified by RFC 3023.

XHTML is XML. XML may be sent as text/xml. What more justification does one
need?

> Why not bring Mozilla up to speed?

Mozilla supports XHTML per the specs (modulo minor bugs; see above).

>> I fail to understand the point of that.
>
> It's a service for content providers. It makes it possible for a
> huge web of documents to be moved slowly from the old world to the
> new world without having to worry about whether readers have old
> user agents or new user agents.

IMHO there is no reason to switch away from tag soup right now. UAs don't
support XHTML. Like I said in my previous post, changing the documents
before the UAs is like trying to run before you can walk. Let's do things
in the right order, and everything will work out. What's the rush?

> The Mozilla 0.9.1 behavior forces content-providers either to keep
> dual archives or else, in serving their sleek new XHTML as text/html,
> to give up the benefit of new handling in Mozilla.

What benefit?

> Worse than that, if they do not keep dual archives and if they are not
> validating, they won't really know if it "works" until they're in deep
> trouble.

That's the point I brought up (see quoted section below) as a reason NOT to
use text/html for XML-based content! This will happen regardless of whether
Mozilla or Amaya support XHTML-as-text/html, since they have negligible
market share. "It works with IE, it must be ok." Next thing you know,
Microsoft are claiming they can't support the XML well-formedness
constraints because "it would break legacy XHTML content".
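[As a minimal sketch of the prolog-sniffing heuristic discussed above: the
function below is a hypothetical Python illustration of my own, not Amaya's
actual algorithm, and the two sample documents echo the html-not-xml and
xml-not-xhtml test pages. It shows why guessing "XHTML-ness" from the
document prolog is fragile.]

```python
def looks_like_xhtml(document: str) -> bool:
    """Hypothetical sniffer: guess whether a text/html document is
    'really' XHTML by peeking at its prolog. Rules are illustrative."""
    head = document.lstrip()[:200].lower()
    if head.startswith("<?xml"):
        return True
    # e.g. an XHTML DOCTYPE or xmlns declaration in the first bytes
    return "xhtml" in head

# A valid HTML 4 document that is not well-formed XML; its DOCTYPE
# never mentions XHTML, so the sniffer says "tag soup" -- correctly.
html4 = ('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">'
         '<title>t</title><p>hi')

# A well-formed XML document that is NOT XHTML, yet smells like XML
# to the sniffer -- exactly why prolog sniffing is suboptimal.
xml_not_xhtml = '<?xml version="1.0"?><doc><p>hi</p></doc>'

print(looks_like_xhtml(html4))          # False
print(looks_like_xhtml(xml_not_xhtml))  # True, though it is not XHTML
```

[The mislabelling in the second case is the same failure mode as the
heuristic tripping over the html-not-xml page: the prolog alone cannot tell
you which parsing model the author intended.]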
>> All I see are many reasons not to do it, the primary one being that it
>> will cause XHTML UAs to have to be backwards compatible with a
>> premature Step 4's supposedly-XHTML content which works in today's
>> browsers... otherwise known as Tag Soup. Welcome back to Step 1.
>
> No, the new user agent needs, like Amaya, to make a quick early decision
> about which way to go.

No one has yet suggested such a mechanism that I have not shown to be
flawed, except for one: the embedded magical comment. I am currently
pursuing this idea, although it has met a little resistance.

> As I've said before, the W3C HTML WG could give user agent writers a
> bit more help in deciding how to proceed here.

The HTML WG are busy creating a significantly better ML which will be
incompatible with XHTML 1 and will therefore solve this problem rather
neatly, since there won't be any way of being backwards compatible. I
strongly approve of this. :-)

--
Ian Hickson                                            )\ _. - ._.) fL
Invited Expert, CSS Working Group                     /. `- '  ( `--'
The views expressed in this message are strictly      `- , ) -  > ) \
personal and not those of Netscape or Mozilla. ________ (.' \) (.' -' ______
Received on Wednesday, 27 June 2001 19:00:53 UTC