
Re: Who is the Intended Audience of the Markup Spec Proposal?

From: Simon Pieters <simonp@opera.com>
Date: Fri, 30 Jan 2009 13:22:03 +0100
To: "David Singer" <singer@apple.com>, "Michael(tm) Smith" <mike@w3.org>
Cc: "Ian Hickson" <ian@hixie.ch>, public-html <public-html@w3.org>
Message-ID: <op.uokkm1s1idj3kv@hp-a0a83fcd39d2>

On Fri, 30 Jan 2009 11:38:12 +0100, David Singer <singer@apple.com> wrote:

>> It's certainly the case that the HTML5 draft doesn't confine its
>> definition of what a conformant document is to only what's
>> machine-checkable. I think that's a good thing, and I think
>> anything else that sets out to describe what a conformant document
>> is should also not confine itself to only what's machine
>> checkable. On the face of it at least, it does seem to me that
>> "document doesn't contain relative URLs when the base URL can't be
>> used to resolve URLs" seems like a constraint that ought to be
>> described in my draft.
> I appreciate it's more than a syntax question; that's what makes it
> perhaps not so naive. I was pondering the whole question "what if the
> URL is not capable of being de-referenced?" (e.g.
> http://deliberately.unknown.host.xw/). Then there is the question of
> URLs with unknown or inappropriate methods... clearly <something
> src="mailto:someone@w3.org" /> is pretty odd, and one would be tempted
> to say that the URL here must be a form that delivers content. But then
> is <something src="daveprotocol:random.content" /> conforming or not, if
> you don't know what daveprotocol does...?
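A checker can mechanically classify a URL's scheme, but it has nothing to say about a scheme it has never heard of. A minimal sketch in Python of that limitation (the scheme lists and function name here are my own illustration, not anything from the spec or an existing validator):

```python
from urllib.parse import urlparse

# Schemes this hypothetical checker assumes can deliver embeddable content.
CONTENT_SCHEMES = {"http", "https", "ftp", "data"}
# Schemes the checker knows about but which are odd as a src="" value.
NON_CONTENT_SCHEMES = {"mailto", "tel"}

def classify_src_scheme(url: str) -> str:
    """Return 'ok', 'suspicious', or 'unknown' for a src-style URL."""
    scheme = urlparse(url).scheme.lower()
    if scheme in CONTENT_SCHEMES:
        return "ok"
    if scheme in NON_CONTENT_SCHEMES:
        return "suspicious"
    # daveprotocol:... lands here: the checker simply cannot tell.
    return "unknown"

print(classify_src_scheme("http://example.org/img.png"))
print(classify_src_scheme("mailto:someone@w3.org"))
print(classify_src_scheme("daveprotocol:random.content"))
```

The best such a tool can do with daveprotocol is report "unknown scheme" as a warning rather than an error, which is exactly why conformance can't be pinned down by machine checking alone.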

I think it comes down to the question of what the concept of document conformance is for: it exists for a specific QA tool, namely a validator. The idea is to set out rules so that authors can catch their mistakes and get out of bad habits.

In the case of URLs, the spec currently has a concept of "valid URL", which catches violations of the IRI spec and avoids weird behavior with legacy encodings. But for a QA tool to be useful for catching typos and mistakes in URLs, what one further needs is a link checker that actually tries to follow the links and reports 404s, and so on.
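A sketch of what such a link checker does, beyond syntax validation (Python, assuming a simple HEAD-request strategy; the function name is mine, not from any spec or tool):

```python
import urllib.request
import urllib.error

def check_link(url: str, timeout: float = 5.0) -> tuple:
    """Try to dereference a URL; return ('ok', status) or ('error', reason).

    A validator checks only that the URL is *syntactically* valid; this
    kind of follow-the-link check is what catches typos, dead hosts, 404s.
    """
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return ("ok", resp.status)
    except urllib.error.HTTPError as e:
        return ("error", f"HTTP {e.code}")       # e.g. the 404 case
    except (urllib.error.URLError, ValueError) as e:
        return ("error", str(e))                 # unknown scheme, bad host, ...

# An unknown scheme fails without any network access at all:
print(check_link("daveprotocol:random.content"))
```

Note that this kind of check is inherently time- and network-dependent, which is part of why a 404 can't sensibly be made a document-conformance error.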

I don't think we should say that, for instance, linking to a 404 resource is non-conforming. So even if we expand the document conformance rules around URLs, a validator alone still wouldn't be very useful for catching URL mistakes.

Simon Pieters
Opera Software
Received on Friday, 30 January 2009 12:22:56 UTC
