Re: Should validity be P1 or P2? (was RE: summary of resolutions from last 2 days)

John M Slatin wrote:

>Inneke described what Opera did with the code snippet I sent earlier.
>Since Opera is inaccessible to JAWS users, I would be grateful for a
>description of how Opera treats the full, validated example that I
>attached to the message after Inneke pointed out that the <th> element
>wasn't closed properly in the original snippet.

I think Ineke is describing the normal behavior that Roberto Scano 
already mentioned for XML content. When a page is served with the MIME type 
application/xhtml+xml, the user agent is not supposed to render it if it 
isn't valid. That is the prescribed behavior, so there is no surprise here: 
it is how user agents are meant to handle XML. It happens all the time 
with RSS feeds, which are parsed by new-generation engines that are built 
for XML compliance rather than tag soup. Sometimes a feed is unreadable 
because of some minor error.
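
To make this concrete, here is a small sketch of that draconian behavior 
(mine, not from the thread), using Python's standard XML parser; the markup 
fragment with the unclosed <th> is only an illustration standing in for the 
snippet John sent earlier:

import xml.etree.ElementTree as ET

# <th> is opened but never closed, so the document is not well-formed.
broken = "<table><tr><th>Header<td>data</td></tr></table>"

try:
    ET.fromstring(broken)
except ET.ParseError as err:
    # The parser stops at the first error and returns nothing at all,
    # which is why a strict XML browser shows no table whatsoever.
    print("document rejected:", err)

That is the whole point: one small mistake and the user gets nothing.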

Some recent browsers (like Opera 8 and Firefox) also behave this way 
with XHTML 1.1 pages. XHTML 1.1 pages must be served as 
application/xhtml+xml, whereas XHTML 1.0 can, under some conditions, be 
served as text/html. In that second case both old and new browsers treat 
the page as if it were HTML, so they try to parse it even if it is invalid.
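
As an illustration of that second case, here is a rough sketch (again 
mine, and the function name is just an example) of the content negotiation 
many sites use for XHTML 1.0: send application/xhtml+xml only to browsers 
whose Accept header says they understand it, and text/html to everything else.

def choose_content_type(accept_header):
    """Pick the MIME type for an XHTML 1.0 page from the Accept header."""
    if "application/xhtml+xml" in accept_header:
        # Browsers such as Opera 8 and Firefox advertise this type and
        # will then use their strict XML parser on the page.
        return "application/xhtml+xml"
    # Everything else gets the forgiving tag-soup HTML parser.
    return "text/html"

print(choose_content_type("text/html,application/xhtml+xml,*/*"))  # application/xhtml+xml
print(choose_content_type("text/html,*/*"))                        # text/html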

I personally think that the XML rule about not parsing invalid pages is 
a way to force validation, but it is also a way to prevent content from 
being read when it isn't valid, so its effect is ambiguous. It sounds to 
me like a bad thing from the user's point of view, and a pleasure for 
browser builders: they don't have to worry about invalid documents.

I think this kind of inaccessibility is a paradoxical effect of the W3C 
specs. With HTML, browsers aren't told what to do when they encounter an 
invalid page, so they manage to render the page anyway: that is usable 
behavior for users, but many people think it is bad because the cost falls 
on the developer side. I think technology should help people, not code 
orthodoxy. It's like the guessing Google does with natural language: if 
you mistype, Google tries to guess what you meant. With CPUs running 
faster and RAM costing less, I can't see a reason for browsers not to try 
to correct small errors in the code. But this is a sort of religious 
argument, I know.
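
For comparison with the strict parser above, here is a sketch of what a 
forgiving, tag-soup style parser does with the same broken fragment (only 
an illustration, using the HTMLParser class from Python's standard library):

from html.parser import HTMLParser

class TagLogger(HTMLParser):
    # Just report what the parser sees; it never stops on bad markup.
    def handle_starttag(self, tag, attrs):
        print("open ", tag)
    def handle_endtag(self, tag):
        print("close", tag)
    def handle_data(self, data):
        if data.strip():
            print("text ", data.strip())

TagLogger().feed("<table><tr><th>Header<td>data</td></tr></table>")

The unclosed <th> doesn't stop anything: the parser keeps going and the 
whole table is still there for the user, which is roughly what an HTML 
browser does when it "guesses" its way through broken markup.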

The only reason I can see right now to force validation is to move toward 
the semantic web. It must be much harder to make inferences from invalid 
code than it is to merely render a page. And obviously no XSL 
transformation would work on invalid XML: but that is a problem for XML in 
general. XML has to be valid to take full advantage of it.

But then we should decide whether the semantic web is part of WCAG's 
mission. If it is, there would be no reason to discuss validation any further.

Maurizio

Received on Monday, 20 June 2005 20:34:38 UTC