Re: Verbosity of XHTML 2 (was Re: XHTML 2.0 and Xlinks (again))

Hi all,

On Sunday, 11 August 2002 at 16:18, Masayasu Ishikawa wrote:
[cut]

I understand.

> This is the cost of being architecturally consistent and robust.
> I wish XHTML 2 to remain simple, and I fear this level of verbosity
> is enough to blow away all the benefits of XHTML 2 for ordinary authors.
> I'm willing to be told that my worries are needless ...

Short:
I suggest two solutions.
Either: Require XHTML 2.0 documents to be validated,
Or: simply don't care

Long:

Why not simply require XHTML 2.0 user agents to be validating?
The conformance declaration might say:
"Validation (Schema or DTD) is required (must) if the user agent detects that the document is XHTML 2.0 or later.
Invalid documents must not be rendered by conforming user agents; they must be rejected in the first place.
A user agent may still render a rejected document on explicit user request as a feature, but implementing such a feature is not required and is definitely discouraged."
The above quote is free from patents ;-). Use it, modify it, pipe it to /dev/null, to /dev/hda (might destroy your data), or to /docs/xhtml2.0.spec if you want to.

This would solve:
1. The namespace problem
And, what's even more important, at least to me:
2. Problems with ugly invalid HTML

The reason the ordinary author doesn't write valid HTML is that the user agents ordinary authors use do not validate.
They try their best to eliminate every single one of the author's mistakes. They are fault tolerant.
But an error that displays "correctly" on one user agent might be rendered incorrectly, or not at all, on another.
(I know you know that, I just want to state it from my point of view.)


Validation should not be a traffic problem: fetching the DTD over the network is not required if the user agent has a catalog that maps the public identifiers of the files making up the external subset of the XHTML 2.0 document type definition to its own local copies of those entities.
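As a sketch (the local path, and the public identifier for XHTML 2.0 itself, are assumptions on my part), such a mapping in OASIS XML Catalog format might look like this:

```xml
<?xml version="1.0"?>
<!-- Hypothetical catalog entry: resolve the XHTML 2.0 public identifier
     to a local copy of the DTD, so no network fetch is needed. -->
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <public publicId="-//W3C//DTD XHTML 2.0//EN"
          uri="file:///usr/share/xml/xhtml2/xhtml2.dtd"/>
</catalog>
```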

As far as I understand, XHTML 2.0 isn't intended for small devices anyway; that's why XHTML Basic was simply named XHTML Basic, not "XHTML Basic 1.0" (forgive me, it's in quotes, it's in quotes!! ;-)
At least that's what one of the editors explained to me.

So it won't be a "why are you wasting my precious memory" category of problem.
And validation is quite a fast process on the user agents that (I think) XHTML 2.0 is intended for.
Validation can be done in parallel with loading and displaying the resource.
Of course, that way parts of the resource will already have been rendered by the time a validity error is detected, but that should be allowed.
After detecting a validity error, the user agent should withdraw the rendering and display an error message instead.
Some user agents might have a "please show me the document despite its validity errors" button.
This button's function must definitely not be the default.
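The reject-then-override behaviour could be sketched roughly like this (a minimal illustration in Python; note that the standard library's ElementTree checks well-formedness only, not DTD validity, so it merely stands in for a real validating parser here):

```python
# Sketch of the user-agent behaviour proposed above: reject bad
# documents, and render them only on explicit user request.
# ElementTree is a stand-in for a real validating parser.
import xml.etree.ElementTree as ET

def render(document: str, force: bool = False) -> str:
    """Return 'rendered', or an error message if the document is rejected."""
    try:
        ET.fromstring(document)
    except ET.ParseError as error:
        if not force:
            # Withdraw rendering and display an error message instead.
            return f"Error: document rejected ({error})"
        # Otherwise: the discouraged "show it anyway" button was pressed.
    return "rendered"

print(render("<html><p>hello</p></html>"))         # rendered
print(render("<html><p>oops</html>"))              # Error: document rejected (...)
print(render("<html><p>oops</html>", force=True))  # rendered
```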

As another side effect, this would force Microsoft to finally make their XML parser conformant ;-)


So XHTML 2.0 could require user agents to ship with the external subset or schema, for traffic reduction.
Of course someone might ask what must happen with updates to the external subset or schema.

Internet Explorer users already regularly get either updates or viruses ;-)
And Opera and Mozilla surely will allow users to modify the catalog themselves.


And hey, we don't need character entities anymore, since even vi improved is Unicode / UTF-8 capable and has no problems with Chinese characters or even RTL writing ;-)
Last but not least, I do not care about the document structure requiring some namespace declarations.
Is the id declaration problem really new to XHTML 2.0? I'm tempted to believe it's inherited from XHTML 1.1, and that it's also a problem if an XLink in some XML document wants to refer to an XHTML 1.1 document; in other words, it's a generic XLink / XPointer / ID problem, not an XHTML-specific one.
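To illustrate the generic nature of the problem (a minimal, made-up example): a bare-name pointer such as somedoc.xml#intro only works if the processor can tell that the id attribute is of type ID, which, with a DTD, takes an ATTLIST declaration per element type:

```xml
<!-- Without the ATTLIST below, a DTD-based processor has no way to know
     that the fragment identifier #intro refers to this section element. -->
<!DOCTYPE html [
  <!ATTLIST section id ID #IMPLIED>
]>
<html>
  <section id="intro">...</section>
</html>
```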

>   <?xml version="1.0" encoding="UTF-8"?>
>   <?xml-stylesheet type="text/css" href="xhtml2.css"?>
>   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 2.0//EN" "xhtml2.dtd"[
>   <!ATTLIST html
>       xmlns:xsi CDATA #FIXED "http://www.w3.org/2001/XMLSchema-instance"
>       xsi:schemaLocation CDATA #IMPLIED
>   >
>
>   <!-- loooong list of redundant ATTLIST declarations for ID
>        ...
>   -->
>   ]>
>   <html xmlns="http://www.w3.org/2002/06/xhtml2" xml:lang="en"
>         xmlns:ev="http://www.w3.org/2001/xml-events"
>         xmlns:xfm="http://www.w3.org/2002/01/xforms"
>         xmlns:xlink="http://www.w3.org/1999/xlink"
>         xmlns:svg="http://www.w3.org/2000/svg"
>         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>         xsi:schemaLocation="http://www.w3.org/2002/06/xhtml2 xhtml2.xsd
> ..."> ...
>   </html>

I wouldn't mind. I'm not deep enough into Schema, so I can't tell whether Schema would reduce the namespace declarations.
But anyway, there's copy/paste, and there are good editors.
With vi improved I'd get this document skeleton for all time by defining an abbreviation once in my .vimrc file.
And since mouse pushers / mouse shovers (sorry, I don't know the exact English translation for the German term Mausschubser) are usually at least as keen on their editors as I am on vim, their editors must be good and provide similar mechanisms (I know e.g. HomeSite does).
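For example, a .vimrc abbreviation along these lines (hypothetical, with the skeleton shortened to a few lines for readability) would expand a short trigger into the whole preamble:

```vim
" Hypothetical abbreviation: in insert mode, typing 'xh2' followed by a
" space expands into a (shortened) XHTML 2.0 document skeleton.
iabbrev xh2 <?xml version="1.0" encoding="UTF-8"?><CR>
      \<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 2.0//EN" "xhtml2.dtd"><CR>
      \<html xmlns="http://www.w3.org/2002/06/xhtml2" xml:lang="en"><CR>
      \</html>
```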

What I want to say is: if it reduces the average traffic (and with the use of e.g. SVG it does), why not blow up the document structure, even if validation is not required by the user agent?
I think that, not in general but in this case, a bigger document structure is an editor tool problem.


"Ordinary author.
But Hagrid, there must be a mistake.
It's ordinary author.
There's no such thing, is there?"
(Freely adapted from Harry Potter and the Philosopher's Stone)


Greetings and have fun
-- 
Christian Wolfgang Hujer
Geschäftsführender Gesellschafter
ITCQIS GmbH
Telefon: +49 (089) 27 37 04 37
Telefax: +49 (089) 27 37 04 39
E-Mail: mailto:Christian.Hujer@itcqis.com
WWW: http://www.itcqis.com/

Received on Sunday, 11 August 2002 15:32:47 UTC