Re: validator - how true is it?

OK, I see where my mistake is. I, and I believe many others too, expected the Validator report to show inconsistencies across browsers: if a site displays correctly in IE, Firefox, Opera, etc., then it should be considered a well-coded HTML site. 
I believe that if I declare <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> at the very beginning and remove the /> from empty elements like IMG, META, etc., it should solve all the problems. 
However, none of this will improve the site's performance in the browser. 
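A minimal sketch of what such a fix might look like (the title, charset, and image here are illustrative, not taken from the actual site):

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>Example page</title>
<!-- Under HTML rules, empty elements take no trailing slash -->
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body>
<img src="logo.gif" alt="Logo">
</body>
</html>
```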

Mark Sonrello  

Alierra Design Company 

http://www.alierra.com  

email: ceo@alierra.com 

 




----- Original Message ----- 
From: "David Dorward" <david@dorward.me.uk>
To: <ceo@alierra.com>
Cc: <www-validator-css@w3.org>; <www-validator@w3.org>
Sent: Friday, January 28, 2005 4:05 PM
Subject: Re: validator - how true is it?


> (NB: Since this thread is about the Markup Validation Service and not
> the CSS Validation Service, I've CCed this to the www-validator@w3.org
> list and set the Reply-To header to the same.)
> 
> On Fri, Jan 28, 2005 at 03:40:39PM +0200, ceo@alierra.com wrote:
> 
> >     1. On the very front page of your respected http://validator.w3.org you
> >        should mention that the program is not bug-free. Otherwise, the
> >        absence of this message makes people believe that validator is a
> >        reliable program.
> 
> No software is bug-free, but the Markup Validation Service is a
> reliable program. It rarely gets things wrong - certainly not (IMO)
> often enough to warrant a "Don't trust me!" message.
> 
> >     2. Have http://www.msn.com, http://www.google.com, or http://www.ebay.com
> >        typed into the address area. How will you comment their error report?
> >        I will doubt that MSN, Google, or Ebay corporations hire the worst
> >        html-coders.
> 
> Browsers have hefty error-correction routines in their markup
> parsers, which leads to website authors writing sloppy code. Large
> companies are no exception.
> 
> >     3. I had my own site http://www.alierra.com "validatored". I particularly
> >        liked the following mistake:
> > 
> >    Line 11, column 6: end tag for element "HEAD" which is not open
> > 
> >    </HEAD>
> > 
> >    However, line 2 has the following tag <HEAD>.   
> > 
> >    How will you comment this?
> 
> Since you have no Doctype (hint: Fix the first error first, it can
> have consequences on later errors), the validator assumes you are
> using HTML 4.01 Transitional.
> 
> In HTML 4.01 Transitional the end tag for <head> is optional, as is
> the start tag for <body>.
> 
> In your <head> section you have <meta http-equiv="Content-Type"
> content="text/html; charset=iso-8859-1" />. Using XHTML-style
> self-closing tags in HTML is a mistake: under HTML rules, <foo />
> means the same as <foo>>, i.e. <foo> followed by a literal > character.
> 
> Since character data (such as a greater-than sign) is not allowed in
> the <head> section, but is allowed in the <body> section, this
> implies that you close the <head> section and open the <body> section
> immediately before the > sign.
> 
> So your code reads the same as:
> 
> <head>
> <title>Alierra Custom Website Design Company - Professional Design and Consulting Services</title>
> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" >
> </head>
> <body>
> &gt;
> <link href="img/styles.css" type="text/css" rel=stylesheet>
> 
> So this is your mistake, not the validator's. If you had enabled the
> "Show Parse Tree" option of the Validator, it would have shown you
> this.
> 
> >     4. I believe every site, which is a more or less complicated, will have
> >        at least 30 mistakes within the Validator.
> 
> Well then, let's pick some sites. A few W3C sites, a few by people I
> know, and a big bunch picked out of the list of blogs I fetch the RSS
> feeds from:
> 
> http://validator.w3.org/check?uri=http%3A%2F%2Fdorward.me.uk%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fw3.org%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fgreytower.net%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Foffog.org%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fvalidator.w3.org%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fwebstandards.org%2Fbuzz%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fwww.theregister.co.uk%2F&charset=%28detect+automatically%29&doctype=%28detect+automatically%29
> http://validator.w3.org/check?uri=http%3A%2F%2Fsimon.incutio.com%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fphotomatt.net%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fphotostack.org%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fweblog.delacour.net%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fln.hixie.ch%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fwww.benhammersley.com%2F
> http://validator.w3.org/check?uri=http%3A%2F%2Fwww.benmeadowcroft.com%2F
> and so on.
> 
> >    I was merely saying that inexperienced users firmly believe in the
> >    Validator and require their sites to conform to its rules, whereas
> >    they do not understand that the rules are not perfect.
> 
> It is true that the rules are not perfect; however, there is very
> rarely a good reason to break them[1], and the Markup Validation
> Service rarely makes a mistake when checking whether a document
> follows them.
> 
> [1] I'm being liberal when I say that; I've never found a good reason
> to break them.
> 
> -- 
> David Dorward                                      http://dorward.me.uk
> 
> 
> 

Received on Friday, 28 January 2005 15:07:34 UTC