
Re: The non-polyglot elephant in the room

From: Martin J. Dürst <duerst@it.aoyama.ac.jp>
Date: Mon, 21 Jan 2013 20:14:52 +0900
Message-ID: <50FD232C.40200@it.aoyama.ac.jp>
To: Henri Sivonen <hsivonen@iki.fi>
CC: public-html WG <public-html@w3.org>, "www-tag@w3.org List" <www-tag@w3.org>
On 2013/01/21 18:46, Henri Sivonen wrote:

> I am opposed to this working group encouraging polyglot markup or
> appearing to encourage polyglot markup, because I don't want to spend
> time implementing something as useless as polyglot validation and I
> don't want to be explaining to a horde of designers why I don't, if
> this polyglot stuff finds its way into an A List Apart article or
> similar.

Very clear explanation. But just a question: What would be the effort of 
checking for polyglot markup? I don't know the internal structure of 
your validator, but at least in some ideal implementation, "validates as 
polyglot" could just be defined as "validates as HTML" AND "validates as 
application/xhtml+xml". So even for implementing polyglot validation, we 
might not need a document describing polyglot markup :-).
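
To make this concrete, here is a rough sketch (in Python, using html5lib 
plus an XML well-formedness check -- my choice of tools for illustration, 
not anything your validator actually does) of the "AND of the two 
validations" idea. It only looks at parse errors, not full conformance:

import xml.etree.ElementTree as ET
import html5lib

def looks_polyglot(source):
    # HTML side: run the HTML parser and collect its parse errors.
    # (Parse errors only; a real checker would do full conformance checking.)
    html_parser = html5lib.HTMLParser()
    html_parser.parse(source)
    html_ok = not html_parser.errors

    # XML side: application/xhtml+xml requires at least well-formed XML.
    try:
        ET.fromstring(source)
        xml_ok = True
    except ET.ParseError:
        xml_ok = False

    return html_ok and xml_ok

A real validator would of course report the two error lists separately 
rather than reduce everything to a single boolean.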

The problems with this simple plan that I managed to come up with in 
the five minutes it took to write this mail are: a) although a document 
might be valid both ways, the DOMs wouldn't match; b) merging the error 
reports may be quite tricky (but maybe not necessary); and c) there may 
be additional user interface overhead (but it could be as simple as 
changing the HTML/XHTML choice from radio buttons to checkboxes).
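
On a), a crude way to check whether the two DOMs match would be to 
parse the document both ways and compare the trees structurally. Again 
only a sketch of mine, comparing nothing but tags, attributes, trimmed 
text and child structure (details such as how xmlns ends up represented 
may already surface as differences, which is rather the point):

import xml.etree.ElementTree as ET
import html5lib

XHTML = "{http://www.w3.org/1999/xhtml}"

def same_tree(a, b):
    # Compare tag, attributes, (trimmed) text and children, recursively.
    if a.tag != b.tag or a.attrib != b.attrib:
        return False
    if (a.text or "").strip() != (b.text or "").strip():
        return False
    if len(a) != len(b):
        return False
    return all(same_tree(x, y) for x, y in zip(a, b))

def doms_match(source):
    html_doc = html5lib.parse(source)       # HTML parsing, etree output
    html_root = html_doc.find(XHTML + "html")
    if html_root is None:                   # parse() may already return <html>
        html_root = html_doc
    xml_root = ET.fromstring(source)        # XML parsing
    return same_tree(html_root, xml_root)

A classic example of a mismatch is a table without an explicit tbody: 
the HTML parser inserts one, the XML parser doesn't.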


> Also, I'd much rather see the development time of authoring
> tools such as BlueGriffon go into providing a better UI for authoring
> HTML instead of chasing polyglot markup.

Any specific ideas, or any specific pain points?

Regards,   Martin.