Re: XHTML Applications and XML Processors [was Re: xhtml 2.0 noscript]

Hi Bjoern,

On 29/07/06, Bjoern Hoehrmann <derhoermi@gmx.net> wrote:
> * Mark Birbeck wrote:
> >I was using the term 'load' in a general sense--I don't think I talked
> >about the onload event as such. To be more precise I'd see the process
> >*conceptually* as follows:
> >
> >* the XML processor passes a complete DOM to the XHTML processor;
>
> XML processors do not know about "DOM", they just report information
> that a higher level application can use to provide objects and their
> interfaces to some yet higher level application.

That's true, and I didn't really mean to use the term DOM.

But for some reason you are choosing to ignore my main point, which is
that for interoperability purposes, the processing model should be the
same regardless of whether a SAX-style approach is used, a DOM is
used, or some other model is used.

That's why I said that *conceptually* the processing model must act as
if the entire XML source has been parsed. The fact that you could use
SAX to optimise certain tasks--such as retrieving images before the
entire document had been processed--doesn't change this key aspect of
the processing model, since the result of retrieving the images and
caching them during parsing or at the end would be exactly the same.

But the effect of the two loading approaches (DOM and SAX) is
*not* the same if you allow the content of elements like <script> to
be processed before the entire document is *conceptually* loaded. For
this reason we should pin down when these things happen.
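To make the difference concrete, here is a rough, browser-free sketch (the simulation is entirely mine, not from any spec) of how the same inline script can see different results under the two models:

```javascript
// Illustrative simulation only: a plain array stands in for the
// parsed document so this runs outside a browser. The document
// contains, in source order, a <script> and then an element with
// id "later".
var sourceOrder = ["script", "later"];

// The inline script just asks: "does the element exist yet?"
// (in a browser this would be document.getElementById("later")).
function inlineScript(parsedSoFar) {
  return parsedSoFar.indexOf("later") !== -1;
}

// SAX-style model: the script runs the moment its element is parsed,
// before the rest of the document is available.
var saxResult = inlineScript(sourceOrder.slice(0, 1)); // false

// "Conceptually complete" model: the script runs only once the whole
// document has (conceptually) been parsed.
var fullDocResult = inlineScript(sourceOrder); // true
```

The two models give opposite answers for the very same document, which is exactly the interoperability gap that pinning down the processing point would close.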


> >* the XHTML processor starts using this DOM:
> >   * attaching event handlers from XML Events;
> >   * running any inline scripts;
> >   * loading any images and other resources;
> >   * and so on;
> >* the 'onload' event is then dispatched, right at the end.
>
> You've picked a processing model and want the world to adapt it. That
> won't work, you have to negotiate a processing model with the world.

You're missing the key point again: the processing model I am
describing is not some personal preference that I am egotistically
trying to impose on the world (what a pointless assertion, Bjoern);
it's a model based on a desire to achieve interoperability.

If someone can show that there are other ways to achieve
interoperability which actually support allowing <script> elements to
be arbitrarily processed at any point in the load sequence, then I
would of course be happy to agree with it--I have no axe to grind one
way or the other.

But I've looked at this issue a lot, and so far I've been unable to
see how interoperability can be *guaranteed* without defining at what
point the <script> elements can be processed, and that's probably in
the load event. The only alternative is to impose the use of a
SAX-style processing model, and define that a script MUST be processed
the moment its element is fully parsed; but that runs counter to the
basic principle that no particular processing model should be imposed
on the XML processor.


> >Of course it's easy enough to implement such a thing, but the point is
> >that by doing so we've made the behaviour of a SAX-based and a
> >non-SAX-based processor effectively the same. In short, I think we'd
> >need to define things in this 'conceptual' way so that we get
> >interoperability, even if implementers add optimisations.
>
> Users want web applications to respond to their actions before they
> have been fully loaded. So you need to run scripts before the document
> has been fully loaded. And they will change the document. And it will
> depend on how much of the document has been loaded how these scripts
> behave. Which implies that you cannot make these two models behave in
> exactly the same way. And therefore your model is not applicable in the
> general case. Responsive web applications are more important than the
> "interoperability" in edge cases that you desire.

I'm afraid it might now be you who is trying to impose your world view
on the rest of us; interoperability still has one or two supporters
around the W3C, and as it happens, it is possible to define
interoperable, responsive web applications.


> >Let's put it the other way round; if I were to implement an XHTML
> >viewer by first loading the XML into a DOM and validating it, before
> >passing it through to a renderer, what is wrong with that? The problem
> >is that if we allow a processor to *choose* to run some of the script
> >before the document is completely available (conceptually), then we
> >rule out this 'naive' approach.
>
> There is no ruling out. They will not behave in the same way, just
> like if you implement progressive processing, you might do it with
> a 4k buffer, or a 16k buffer, or a 2MB buffer, in each case you will
> be able to construct cases where the behavior is different. This is
> how it works in most HTML user agents today, few people have any
> problem with that.

Are you kidding? The amount of work that is going into creating
cross-browser 'wrappers' is unprecedented! There are hundreds of
so-called 'Ajax libraries' that stand and fall on how much of the
browser mess they can hide.

And this is not just hiding methods and properties, or tidying up CSS
inconsistencies. Many libraries support things like registering for
events on elements that haven't yet been loaded. This means that I can
write an addEventListener() call at the beginning of my document that
refers to an element at the end of the document, and the Ajax library
stores the reference and then keeps retrying, using a timer. (I know
that YUI and Dojo do this, and I'm sure others do too.) In my mind
that's actually a hack, but whatever you think of it, the important
point is that it produces *interoperability*, since by using this
technique you can now be sure that your event registration will
happen, wherever in your document you place the call, and regardless
of the processing model used by the browser.
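The retry technique can be sketched in a few lines. Note that the names here (onAvailable, the fake document) are illustrative inventions of mine, not the actual YUI or Dojo API, and a stand-in object replaces the real DOM so the idea runs anywhere:

```javascript
// Stand-in for the browser's document: elements appear here as the
// parser (conceptually) reaches them.
var fakeDocument = { elements: {} };

function getById(id) {
  // In a browser this would be document.getElementById(id).
  return fakeDocument.elements[id] || null;
}

// The core of the technique: if the element isn't there yet, store
// the callback and keep retrying on a timer until it appears.
function onAvailable(id, callback, interval) {
  var el = getById(id);
  if (el) {
    callback(el);
  } else {
    setTimeout(function () {
      onAvailable(id, callback, interval);
    }, interval || 50);
  }
}

// Registration happens "early in the document"...
var attached = [];
onAvailable("late-element", function (el) {
  attached.push(el.id);
});

// ...and the element only becomes available later, as the parser
// (conceptually) reaches the end of the document.
setTimeout(function () {
  fakeDocument.elements["late-element"] = { id: "late-element" };
}, 120);
```

However the browser schedules its parsing, the registration eventually succeeds, which is exactly the interoperability the libraries are buying with this hack.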

But even more importantly, this technique is only necessary in an HTML
document; in an XHTML document, as I have tried to argue, the XML
processing model means that we need to assume that *conceptually* the
full XML document is available to us. And happily, that also gives us
the interoperability we need.

Regards,

Mark

-- 
Mark Birbeck
CEO
x-port.net Ltd.

e: Mark.Birbeck@x-port.net
t: +44 (0) 20 7689 9232
w: http://www.formsPlayer.com/
b: http://internet-apps.blogspot.com/

Download our XForms processor from
http://www.formsPlayer.com/

Received on Wednesday, 2 August 2006 16:51:03 UTC