- From: Doug Schepers <doug@schepers.cc>
- Date: Fri, 1 Sep 2006 01:31:14 -0400
- To: "'L. David Baron'" <dbaron@dbaron.org>, <public-appformats@w3.org>, <www-forms@w3.org>
Hi, David-

Thanks for your reply.

L. David Baron wrote:
| I think this summary tries to condense three separate issues into one:
|
| 1. Should XHTML be used on the Web?
|
| 2. Should authors send XHTML content under the text/html MIME type?
|
| 3. When authors send XHTML content under the text/html MIME type,
|    should browsers treat it differently from other text/html?
|
| Trying to discuss these three issues as a single issue will just lead
| to confusion and misunderstanding. (They are related, however.)
|
| The document of Ian Hickson's that you cite [1] is a position on
| question #2.

I intended the summary to encapsulate the broad range of ideas that each
"camp" holds, though I'm certain there are people who do not fall neatly
into my summary. I'm happy to break the discussion down into whatever
level of granularity will help us reach a position from which we can
move forward. As you say, though, the issues are closely related, in
that each is predicated on the previous one.

My personal replies are:

1) Yes.

2) Only with content negotiation (or some other scheme) that allows
modern browsers like Moz, Opera, Safari, etc. to properly determine what
the format is so they can treat it like a first-class citizen, while
still allowing IE to display it as best it can. (*)

3) As a last resort, if the browser is capable of determining that the
document is falsely labeled (as much legacy content will be), and that
it is well-formed and valid XHTML (which Ian's study suggests may not be
very common), I think it is a very sane approach for the browser to be
allowed to treat it like what it is. By the same token (no pun
intended), if it claims to be XHTML but is not well-formed and/or valid,
I think the browser should be allowed to recover gracefully by treating
it as HTML, with a prominent warning (possibly only in "developer" mode)
that there was a problem treating it as XHTML.
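To make the graceful-recovery idea concrete, here is a toy sketch in Python of the behaviour I mean; it is an illustration of the approach, not any browser's actual implementation, and the names are mine:

```python
import warnings
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

class TagSoupParser(HTMLParser):
    """Forgiving fallback parser: records tags instead of dying on errors."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def parse_document(text):
    """Try a strict XHTML (XML) parse first; fall back to tag-soup HTML.

    Returns ("xhtml", tree) on a well-formed document, else ("html", parser).
    """
    try:
        # Branch early: attempt a strict, well-formedness-checking parse.
        tree = ET.fromstring(text)
        return ("xhtml", tree)
    except ET.ParseError as err:
        # Recover gracefully instead of rolling over and dying: emit a
        # warning (in a real UA, perhaps only in developer mode) and
        # reparse the document as HTML.
        warnings.warn(f"Not well-formed XHTML ({err}); treating as HTML")
        soup = TagSoupParser()
        soup.feed(text)
        return ("html", soup)
```

With this shape, `parse_document("<p>unclosed paragraph")` recovers in HTML mode, while `parse_document("<p>well-formed</p>")` stays in XHTML mode.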
This would cover those cases where developers or tools simply made
mistakes that should be corrected, while not penalizing users who
encounter uncorrected documents.

I have heard the claim that this is an untenable position, and that the
increased parsing time would be perceptible to the user. I'm not
convinced that this is the case. A parser could branch very early on if
it detected an incorrectly served XHTML document and attempt to parse it
as XHTML; if an error is encountered, it could quickly switch to
HTML/tag-soup mode, retaining the existing parsed output if desired to
speed up the remaining parsing. (I know that the UA is supposed to roll
over and die when it encounters an error, but surely this is a more
reasonable and flexible approach.)

I'm curious to hear your own (or Mozilla's) answers as well.

| The HTML working group answered question #3 in [2] (answer: no),
| although it was unanswered in the original XHTML1 recommendation. I
| think this was a mistake (although I didn't feel as strongly about it
| at the time).

I wholeheartedly agree with you. That was an overly rigid judgement made
in a transitional period that should have been geared toward
flexibility. Given the benefit of hindsight, I think that decision
should be reexamined and brought into line with the current needs of the
implementors, the developer public, and the market.

| [ I trimmed www-archive from the recipient list; www-archive exists to
| archive messages not sent to other lists, so there's no point cc:ing
| it. ]

Thanks, sorry if I missed a point of protocol. My intent was to make
this a broader discussion than may be covered on these 2 lists alone.

Regards-
Doug

* Hey, IE, please wake up and smell the XHTML!
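P.P.S. For the curious, the content-negotiation scheme I allude to in (2)
might look roughly like this on the server side; a simplified sketch (it
ignores q-values, and the function name is mine), not a complete
implementation:

```python
def choose_content_type(accept_header: str) -> str:
    """Pick a MIME type for an XHTML document from the HTTP Accept header.

    Browsers that explicitly list application/xhtml+xml (Moz, Opera,
    Safari) get the real type; IE, which does not list it, gets
    text/html so it can still display the page as best it can.
    Simplification: q-values are ignored here.
    """
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    if "application/xhtml+xml" in accepted:
        return "application/xhtml+xml"
    # IE sends things like "image/gif, image/jpeg, */*"; although */*
    # technically matches everything, text/html is the pragmatic choice.
    return "text/html"
```

A Mozilla-style header such as `"text/xml,application/xhtml+xml,text/html;q=0.9"` would thus get `application/xhtml+xml`, while IE's `"image/gif, image/jpeg, */*"` would get `text/html`.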
Received on Friday, 1 September 2006 05:31:29 UTC