Re: XHTML 2.0 User Agent Conformance

Hi,
  Firstly, thanks for pointing out some issues with the idea.

Jim Ley wrote:

 >"Lachlan Hunt" <lhunt07@postoffice.csu.edu.au> wrote in message
 >news:3FA1E10C.1060408@postoffice.csu.edu.au...
 >
 >>  Due to the huge problem of authors marking up pages with invalid
 >>(X)HTML in the past, because of the attitude: 'if it displays well, I've
 >>written it well',
 >
 >I do not believe this is the reason why people have problems authoring valid
 >markup, it's certainly a reason why authors can not spend the extra time
 >working at it, but it is not the reason for the invalid mark-up in the first
 >place.

  Yes.  This is essentially what I meant, sorry for not being clearer.


 >>  ie. displaying an error message if the markup is invalid,
 >
 >This is a very bad idea, users do not need to be shown the mechanics,
 >they're interested in getting the content.

  This is a good point; I agree that many users would not be interested 
in the error, but:
  - What if UAs provided the option to turn the error message on/off?
  - What if the message was not intrusive?
    i.e. it could be just like the way both IE and Netscape
    display JavaScript errors in the status bar.
  - What if browsers continued to be fault tolerant?
    I'm only suggesting that UAs make it known that there may be an error.

 >User Agents have bugs, if a UA
 >has a bug which leads it to think my valid document is invalid and displays
 >error or fails to render the document - I've done nothing wrong, and there
 >is nothing I can do to fix it, but my clients and customers get a bad
 >impression of me.

  In my experience, UAs are able to check that a document is at least 
well-formed -- I don't think there would be many, if any, UA bugs in 
this area.
  Validating UAs may be more prone to bugs, since they are required to 
check that a document conforms to a DTD or Schema.
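
  As a rough sketch of what I mean (Python here purely for illustration; 
obviously a UA would use its own parser), a well-formedness check needs 
nothing more than a generic XML parser:

    import xml.etree.ElementTree as ET

    def is_well_formed(markup):
        """Return True if the markup parses as XML at all."""
        try:
            ET.fromstring(markup)
            return True
        except ET.ParseError:
            return False

    print(is_well_formed("<p>ok</p>"))      # True
    print(is_well_formed("<p>unclosed"))    # False

  Full validation, by contrast, needs a validating parser plus the 
DTD/Schema itself, which is where I'd expect most of the extra bugs to 
live.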


 >Also XHTML 1.0 has shown the importance of the flexible rendering to allow
 >for user agents and Mark Up Language recommendation writers to develop new
 >standards which are compatible with what has gone before - XHTML 1.0 can
 >only be rendered by todays UA's because they were fault tolerant.  If we
 >remove fault tolerance we lose that ability to correct the errors in our ML.

  Rendering bugs in a UA do not affect whether a document is well-formed 
and valid.  How would this be any different from updating any other XML 
document specification?  Processing of other XML documents is halted if 
there is an error, though I'm no longer suggesting such a drastic 
change, thanks to your comments.
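
  For comparison, this is how other XML processing already behaves: the 
parser stops at the first well-formedness error and says where it is, 
while a tag-soup HTML parser just carries on.  (Again only a Python 
sketch of the general behaviour, not a claim about any particular UA.)

    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    broken = "<p><b>bold text</p></b>"      # mis-nested tags

    # XML processing: halted at the first error, with a location.
    try:
        ET.fromstring(broken)
    except ET.ParseError as err:
        print("XML parser stopped:", err)   # e.g. "mismatched tag: line 1, ..."

    # Tag-soup HTML processing: keeps going and reports nothing.
    class Collector(HTMLParser):
        def handle_data(self, data):
            print("HTML parser still got:", data)

    Collector().feed(broken)                # prints "bold text" anyway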


 >What happens with an errata which fixes an error in the original XHTML 2.0
 >specification, UA's pre-Errata would handle a document different from UA's
 >post-Errata, this would almost certainly not be sustainable.

There would still be fault tolerance; however, the UA could get the 
updated DTD/Schema when available.  (I don't believe any errata that 
didn't cause modification to the DTD/Schema would be any problem when 
processing a document.)


 >Have you considered the cost of shipping a conformant browser?  The number
 >of hours in Mozilla are huge, yet they've still not delivered a conformant
 >browser, and all software has bugs, which are likely to impact negatively.

I hadn't actually considered this; thank you for pointing it out.


 >A conformance requirement on servers which requires them to serve valid
 >documents to use the relevant mime-type for them, I'd support, that imposes
 >no burden on clients, and authors will know instantly if there are any bugs
 >in their server which require fixing. (They'll only be using a single
 >server, rather than the situation where they're serving it to a potentially
 >infinite number of clients)

I don't understand this idea.  How would an author, who was writing and 
testing a page on a local machine, find out about bugs this way, until 
the page was deployed and the server gave an error?  (Sorry if I've 
misunderstood you on this point.)
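
  If the idea is that the server itself should refuse to label a broken 
document as XHTML, I could imagine something like the sketch below (the 
check and the fallback media type are only my guess at what you meant) 
-- but that still only catches the problem at serve time, not while 
authoring locally:

    import xml.etree.ElementTree as ET

    def content_type_for(path):
        try:
            ET.parse(path)                   # well-formedness check only
            return "application/xhtml+xml"
        except ET.ParseError:
            return "text/html"               # or refuse to serve it at all

    # e.g. content_type_for("index.xhtml")   # hypothetical file name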

 >Jim.

CYA
...Lachy
