RE: XHTML Applications and XML Processors [was Re: xhtml 2.0 noscript]

Bjoern Hoehrmann wrote:
> 
> You have a XHTML document like
> 
>   ...
>   <script>...</script>
>   ...
>   <body>
>     <p>...<a onclick="return example();" ...
>     <p>... 50 KB additional content ...
> 
> The user just wants to use whatever functionality associated to the
> link and the browser will encounter the first paragraph, render the
> link, and then continue loading the rest of the document. The author
> ensured that the example() function does not depend on anything that
> comes after the first link. As implemented in all mainstream
> browsers, the user can just click the link as soon as he can see it. 

OK, let's take a run at this from some different angles.

First, a "page" weighing in at 50 kb will take roughly 20 seconds to
fully load at 56 k baud.  This is certainly slow (I agree), but users at
this connection speed are accustomed to this speed of page delivery,
dynamic content or not.  So it may be frustrating, but I do not see
"harm".

I won't get into the whole "...event trigger needs to be input
independent" rant; suffice it to say that it frustrates the hell out of
me every time I see "onclick" all by its lonesome... Another discussion
for another day.

My understanding in this discussion, however, is that the point of the
"return example();" is to somehow modify the way the page renders
*while* it is loading, as opposed to the final rendering being
re-loaded in a modified state - what is often referred to as AJAX
today.  This presents some very serious usability/accessibility issues
for those users who, through no fault of their own, *must* await the
final load of the document so that the technology they use to interact
with *that* mainstream browser will function.  Here I do see *harm*: if
not technical harm, then certainly harm in the fact that you have now
given preferential treatment to the sighted over the non-sighted - all
in the name of saving nanoseconds.

As I have pointed out earlier, this in and of itself may not be a
technical reason not to pursue this "in-stream" triggering, but it is
certainly a social one, and one which should be addressed in all best
practices guidelines IMHO.  The argument has been made that there is
nothing that "requires" the content author to fire their script
mid-download, but the simple fact that there are developers out there
who see this as benign is, in itself, scary.  It is not benign!  It has
a real impact on some users, unlike the alleged harm of frustrating
power users/speed readers who are somehow going to know to click on a
link before a page finishes rendering.

And yes, I know that this transcends the technical and is probably
straying off topic for this list... But maybe sometimes we need to
stray off topic to look at the bigger picture.

> 
> With Mark's processing model, the user will have to wait until after
> the browser downloaded the whole document.
> If the browser renders the
> link, but does not allow to click it, users will be confused. If it
> allows it to be clicked, the link will not work. If it does not
> render the link until the document is fully downloaded, users will be
> annoyed. This is a serious regression for users regardless of how the
> model is actually implemented.      

Well, I understand what you are saying, but consider as well this
simple fact - before a person "clicks" any link, there is a cognitive
requirement for that user to mentally process the instruction that is
in, or precedes, the anchor tag - "...to wow the socks off your
girlfriend, "click here"..." (he writes sarcastically... some will get
it, others won't).  Now, the simple act of reading these instructions
will consume some of those precious nanoseconds we are apparently
trying to save in the user's life; perhaps by the time they have
finished mentally scanning the whole page and reading these
instructions the page still hasn't completely downloaded, but I really
think you are attaching way more "frustration quotient" here than is
really warranted.

> 
> Mark's only argument in favour of his processing model is that, if the
> example() function does depend on how much of what comes after the
> link is already loaded, there may be differences in how the script
> behaves: e.g., the script might add some element at the end of the
> <body> which has not been encountered by the browser, so the end
> might be anywhere between the link and the actual end of the <body>,
> so it is inserted at some "random" position, which is not desirable. 

Right, and yet we see just this kind of thing all over the web today on
"kool" sites pushing AJAX for AJAX's sake.

> 
> Authors can easily work around problems of this kind, which means
> such a processing model brings very little benefit at serious cost. 
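
For reference, the kind of author-side workaround being alluded to is,
I would assume, no more involved than the following.  This is a rough
sketch of my own (not anything Bjoern has posted), and the
doTheRealWork() helper is purely hypothetical:

  <script type="text/javascript">
    var pageLoaded = false;
    var clicksPending = 0;

    // Hypothetical helper: anything that needs the fully parsed DOM.
    function doTheRealWork() {
      var note = document.createElement("p");
      note.appendChild(document.createTextNode("Thanks for clicking!"));
      document.body.appendChild(note);
    }

    window.onload = function () {
      pageLoaded = true;
      // Replay any clicks that arrived before loading finished.
      while (clicksPending > 0) {
        clicksPending--;
        doTheRealWork();
      }
    };

    function example() {
      if (pageLoaded) {
        doTheRealWork();
      } else {
        clicksPending++;  // remember the click; act on it once loaded
      }
      return false;
    }
  </script>

A couple of dozen lines of defensive scripting, in other words.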

And again, I am having a really hard time understanding what this
*cost* really is... User frustration for a nanosecond?  How serious is
this really?
<rant>
I find poorly written content (complete with spelling mistakes), images
that have not been optimized for web delivery, and bloated Flash
animations that lend little to no real value to web pages far more
frustrating and antagonizing than having to wait for a page to load.  I
also find poorly written JavaScript calls that launch popup windows
(which I have deliberately disabled in my Firefox browser) and that
cannot be launched any other way (such as into a new tab - thus
rendering the functionality completely broken in my set-up) way more
frustrating than the nanosecond pause of waiting for a 50 KB page to
completely download before I can click a link on an AJAX-style
"application".  The fact that this type of crummy development practice
exists today (often on high-traffic, mainstream web pages) causes me to
fear that perpetuating this type of "in-stream" pre-processing will be
equally mishandled and abused.
</rant>

JF

Received on Thursday, 3 August 2006 04:47:41 UTC