Validate Strict AND no robots

I wish to exclude robots from some of my HTML pages.
A search of the W3C site for 'robots' linked me to:

http://www.robotstxt.org/wc/robots.html

Following a link 'Robots Exclusion' led me to:

http://www.robotstxt.org/wc/exclusion.html

and then: 

http://www.robotstxt.org/wc/exclusion.html#meta

which explains the use of the META element:

    In this simple example: 

    <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">

    a robot should neither index this document, 
    nor analyse it for links. 
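
Since my pages are XHTML, I assume the same element
should be written in lowercase and self-closed,
something like:

    <meta name="robots" content="noindex, nofollow" />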

When I used this element in an HTML file and validated
the file against:

http://validator.w3.org/file-upload.html

errors occurred.  The validator page says:
"It checks HTML documents for conformance to W3C 
HTML and XHTML Recommendations and other HTML
standards."

I use the following lines of code at the
start of my HTML documents:

    <?xml version="1.0"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
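
For context, a complete minimal document built on that
prolog would look roughly like this (the title and body
text are just placeholders):

    <?xml version="1.0"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title>Example page</title>
      </head>
      <body>
        <p>Example content.</p>
      </body>
    </html>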

The validator took exception as explained below:
( please note that indentation may render 
  some symbols at an improper column. )

    Below are the results of checking this document 
    for XML well-formedness and validity. 

    Line 7, column 29: 
      <meta name="robots" contents="noindex, nofollow" />
                               ^
    Error: there is no attribute "contents" for this 
    element (in this HTML version) (explanation...)

    Line 7, column 50: 
      <meta name="robots" contents="noindex, nofollow" />
                                                    ^
    Error: required attribute "content" not specified
    (explanation...)

Is it possible to validate to 'strict' AND exclude robots?
Any comments or suggestions would be welcome.



=====
Regards,
Jim


Received on Sunday, 13 May 2001 23:08:28 UTC