Re: XHTML Applications and XML Processors [was Re: xhtml 2.0 noscript]

Let me try some analysis:

When developing standards, we always consider the customer (our 
constituents).  In the case of (X)HTML, there are at least three 
classes of customer (in descending order of importance): consumers of 
content, 
authors of content, and user agent developers.  When you are developing 
a standard (or anything, I suppose) you need to consider the relative 
importance of all your constituents.  If, for example, a minor 
inconvenience to the smallest class results in substantial benefit for 
the largest class, you do that.  You apologize, but you do it. 

If I understand the points raised by Bjoern, Jim, et al. correctly, 
they are arguing that accepting some inconvenience to the content 
author class will greatly improve the experience of the consumer 
class.  And, since the author class is dwarfed by the consumer class, 
they claim this is a reasonable trade-off.  However, when analyzing 
that trade-off you need to look not just at the economics of it, but 
also at the practical implications.  In this case, Mark has argued 
that NOT defining a notional processing model means we are damaging 
interoperability, thereby harming some percentage of our largest 
class of customers.  Moreover, he argues that it is impossible for a 
content author today to know what will and will not work portably, 
and that it is the job of standards to define that.


I strongly agree that it is the job of software standards to define 
rules that permit the development of portable software (in this case, 
web pages).  In general, the easiest way to achieve this is through a 
least common denominator approach - this method is guaranteed to work 
everywhere.  "If you want your software to work everywhere, do it this 
way (see ANSI C, for example)."  In the case of XHTML documents, this 
least common denominator approach would be defining the notional 
processing model and telling content authors they may rely upon it.  
Another way is to clearly delineate the things that will work all the 
time from the things that might work, and put large red boxes around 
the latter.  In standards, you selectively say things like "The 
behavior of 
this feature when you use it in this bizarre way is unspecified" (or 
undefined, or implementation defined, or whatever).
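
To make the two approaches concrete, here is a minimal sketch, in 
ordinary ECMAScript, of what the least-common-denominator rule means 
for handler registration.  The element id and function name are 
invented for illustration; nothing here comes from either spec:

    // Least-common-denominator approach: defer all handler
    // registration until the entire document has been loaded.
    window.onload = function () {
        // Every element in the source document is now guaranteed
        // to be in the DOM (assuming the document actually
        // contains an element with this id).
        var el = document.getElementById("statusBar");
        el.addEventListener("click", handleClick, false);
    };

    function handleClick(evt) {
        alert("status bar clicked");  // placeholder action
    }

This is the "do it this way and it works everywhere" path; its cost 
is that nothing is interactive until the whole document has loaded.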


It is possible that we could take this selective approach with 
handlers.  There are some DOM methods that just won't work right 
before the whole document is loaded, or that will give differing, 
pseudo-random results before the whole document is loaded.  We 
*could* identify those and put large red boxes around them.  Not in 
XHTML 2, mind you.  XHTML 2 neither defines the DOM nor cares about 
it.  But in XML Events (a component of XHTML 2) we could say that 
attempting to register a handler on an element before that element is 
available in the DOM has unspecified behavior (i.e., Don't Do This).
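As a hedged sketch of the kind of code such a red box would cover 
(again, the element id and function name are invented for 
illustration):

    // Inline script executed while the document is still being
    // parsed.  Whether "statusBar" exists in the DOM at this point
    // depends on where the script appears and on how the processor
    // builds the tree; that is the pseudo-random behavior described
    // above.
    var el = document.getElementById("statusBar");
    if (el) {
        el.addEventListener("click", handleClick, false);
    }
    // If the parser has not reached that element yet, el is null
    // and the handler is silently never registered.

    function handleClick(evt) {
        alert("status bar clicked");  // placeholder action
    }

But I don't know that this approach would address the main problem: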


The problem today is that, as a content developer, I have to guess.  
That's not acceptable.  Software portability is about determinism.  I 
have to *know* my software is going to work, at least in the 
environments where I say it will work.  As a consumer, you have to be 
confident that the software will work.  As a user agent developer, I 
have to know what behavior I am required to support, and what 
behavior I have some wiggle room on.  The job of the standards 
community is to provide that level of determinism.  Let's try to keep 
that goal in mind.

-- 
Shane P. McCarron                          Phone: +1 763 786-8160 x120
Managing Director                            Fax: +1 763 786-8180
ApTest Minnesota                            Inet: shane@aptest.com

Received on Thursday, 3 August 2006 14:44:21 UTC