W3C home > Mailing lists > Public > public-html@w3.org > April 2008

Re: text/html for html and xhtml

From: j.j. <moz@jeka.info>
Date: Thu, 17 Apr 2008 08:38:17 +0200
Message-ID: <20080417083817.4m817lywr48gkk0w@www.hosting-agency.de>
To: William F Hammond <hammond@csc.albany.edu>
Cc: whatwg@whatwg.org, public-html@w3.org, www-math@w3.org, www-svg@w3.org

William F Hammond <hammond@csc.albany.edu> wrote:

> The logical way to go might be this:
>
> If it has a preamble beginning with "^<?xml " or a sensible
> xhtml DOCTYPE declaration or a first element "<html xmlns=...>",
> then handle it as xhtml unless and until it proves to be non-compliant
> xhtml (e.g, not well-formed xml, unquoted attributes, munged handling
> of xml namespaces, ...).

So your assumption is that such markup indicates a significant likelihood
of well-coded and well-maintained (X)HTML files? That's wrong. Such
markup is exported from office software, generated by PHP scripts,
created by authoring tools, and copied & pasted around the web.
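For reference, the sniffing rule Hammond proposes could be sketched roughly as follows (a minimal sketch, not from any spec; the function name, the exact patterns, and the 1024-byte sniff window are my own assumptions):

```python
import re

# Namespace that identifies XHTML content.
XHTML_NS = "http://www.w3.org/1999/xhtml"

def looks_like_xhtml(source: str) -> bool:
    """Sketch of the proposed heuristic: treat a document as XHTML if it
    begins with an XML declaration, an XHTML DOCTYPE, or a root <html>
    element carrying the XHTML namespace."""
    head = source.lstrip()[:1024]  # only sniff the beginning of the file
    if head.startswith("<?xml "):
        return True
    if re.match(r'<!DOCTYPE\s+html\s+PUBLIC\s+"-//W3C//DTD XHTML', head, re.I):
        return True
    if re.search(r'<html\s[^>]*xmlns\s*=\s*["\']' + re.escape(XHTML_NS), head):
        return True
    return False
```

Note how fragile this is in practice: a dropped quote or a stray byte before the preamble flips the result, which is exactly the problem with tool-generated and copy-pasted markup.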

Any tiny code change or typo could change parsing completely in some
new UAs, perhaps without the author noticing. And if the author does
notice, s/he perhaps has no clue what's going on, beyond "this new
browser is rubbish".

> At the point it proves to be bad xhtml reload
> it and treat it as "regular" html.

This opens a can of worms (scripts are executed before the reload...)

> So most bogus xhtml will then be 1 or 2 seconds slower than good xhtml.

Causes global warming  :-)

> Astute content providers will notice that and then do something about it.
> It provides a feedback mechanism for making the web become better.

Less astute content providers will suggest that their users switch to
other UAs or disable hidden user prefs.

j.j.
Received on Thursday, 17 April 2008 06:39:13 UTC
