
Re: Working Group Decision on ISSUE-120 rdfa-prefixes

From: Kurt Cagle <kurt.cagle@gmail.com>
Date: Mon, 11 Apr 2011 09:53:00 -0400
Message-ID: <BANLkTino1-76PKcPv=qSid2zL8s30zTmGQ@mail.gmail.com>
To: David Carlisle <davidc@nag.co.uk>
Cc: "Tab Atkins Jr." <jackalmage@gmail.com>, HTML WG <public-html@w3.org>
I had not quite intended to restart the thread here, and my apologies to the
chairs.

However, I also did wish to clarify a comment:

From one coming from the XML side of the equation, it has been my impression
that the XML community does not, in toto, claim that the current mechanism
for disambiguation that namespaces was intended to provide is perfect - it
was a hack compromise intended to complete an action item and move on, and
no one at the time understood the degree to which namespaces would become
such an integral part of the specification. Just as the HTML community has
its legacy baggage to live with - ideas that seemed good at the time but
weren't - so too does XML, and namespaces represent a fairly significant sore
point. There is no master taxonomy list, only terms that model a given
topical domain to some greater or lesser degree, and since there may be
different provenances for the authority of those terms (try working with
government ontologies some time), there is a need to differentiate and
disambiguate common terms with distinct meanings.

The prefix mechanism started out in exactly the same way that David Carlisle
mentioned, but I believe it can be convincingly argued that in practice
prefixes are important because they are aids to notation. XSLT is a good
case in point. Assume for the nonce that you DO retain fully resolved
namespace URIs; in practice, you then get documents that look something like:

<http://www.w3.org/1999/XSL/Transform#transform version="2.0">
      <http://www.w3.org/1999/XSL/Transform#output method="html"/>
      <http://www.w3.org/1999/XSL/Transform#template match="/">
             <http://www.w3.org/1999/XSL/Transform#apply-templates select="*"/>
      </http://www.w3.org/1999/XSL/Transform#template>
      <http://www.w3.org/1999/XSL/Transform#template match="foo">
             <http://www.w3.org/1999/xhtml#html>
                    <http://www.w3.org/1999/XSL/Transform#apply-templates select="*"/>
             </http://www.w3.org/1999/xhtml#html>
      </http://www.w3.org/1999/XSL/Transform#template>
      <!-- much, much more like this -->
</http://www.w3.org/1999/XSL/Transform#transform>

Not only does this make a verbose language downright unreadable, but it
contains a great deal of redundancy of content that really serves neither
machines (except VERY simplistic processors) nor humans.

The prefix mechanism that was established isn't perfect, but its intent,
even if not reflected in the initial charter, was certainly to reduce that
redundancy significantly.
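For comparison, here is roughly what the same skeleton looks like once the
prefix is bound a single time at the root (a sketch for illustration; the
prefix names are the conventional ones, not anything mandated):

```xml
<xsl:transform version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml">
  <xsl:output method="html"/>
  <xsl:template match="/">
    <xsl:apply-templates select="*"/>
  </xsl:template>
  <xsl:template match="foo">
    <html>
      <xsl:apply-templates select="*"/>
    </html>
  </xsl:template>
  <!-- much, much more like this -->
</xsl:transform>
```

Each URI appears exactly once, and every subsequent element name carries
only the short proxy - which is the redundancy reduction in question.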

My initial comments concerning prefixes were not meant to state that,
preferentially, prefixes should in fact be used for this disambiguation of
context independently of URLs. Rather, my point was that if the issue is
that naive users do in fact use prefixes without appropriately bound URIs,
there should be a mechanism in place such that when commonly used prefixes
ARE in fact used - such as dc (Dublin Core), vcard, foaf and so forth - and
no namespace URIs are specifically associated with those prefixes, the
browser assumes (presumably via a lookup table somewhere) that the prefix is
indicative of the well-known namespace, IN HTML ONLY. It's not foolproof, of
course - each browser may decide to include or not include specific
namespaces - but if lax human editing conventions are considered a necessary
prerequisite to the success of HTML5 (which seems to be a base assumption),
then this would seem to be a reasonable fallback position.
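A minimal, purely illustrative sketch of the fallback described above - a
browser-side table supplying namespace URIs for well-known prefixes when the
author has bound none. The table entries, the `resolve()` helper, and its
behavior are my own assumptions for illustration, not anything specified in
the decision:

```python
# Hypothetical default-prefix table a browser might consult. The URIs
# shown are the commonly cited ones for these vocabularies, but which
# prefixes (if any) a browser would preload is exactly the open question.
DEFAULT_PREFIXES = {
    "dc": "http://purl.org/dc/terms/",
    "foaf": "http://xmlns.com/foaf/0.1/",
    "vcard": "http://www.w3.org/2006/vcard/ns#",
}

def resolve(curie, bound_prefixes):
    """Expand a prefixed name like 'foaf:name' to a full IRI.

    Author-supplied bindings win; the built-in table is only a fallback
    for unbound but commonly used prefixes; unknown prefixes stay
    unresolved (None).
    """
    prefix, _, reference = curie.partition(":")
    base = bound_prefixes.get(prefix) or DEFAULT_PREFIXES.get(prefix)
    if base is None:
        return None  # unknown prefix: leave unresolved
    return base + reference
```

Used this way, an explicit binding always overrides the table, so the
fallback only ever fires for the lax-authoring case the decision worries
about.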

Please note, I'm not trying to stimulate any further debate here, I just
wanted to correct some assumptions on what I wrote earlier to clarify the
issue here. I personally think that the decision as it stands is a
reasonable one, if I read that decision correctly.

Kurt Cagle
Invited Expert, XForms Working Group, W3C
Managing Editor, XMLToday.org
kurt.cagle@gmail.com
443-837-8725




On Fri, Apr 8, 2011 at 12:43 PM, David Carlisle <davidc@nag.co.uk> wrote:

> On 08/04/2011 17:14, Tab Atkins Jr. wrote:
>
>> ... If someone believe that
>> *prefixes* are the mechanism by which you disambiguate mixed languages
>> (rather than one possible solution to the problem of "using URIs to
>> disambiguate mixed languages makes hand-authoring hard"), you'll draw
>> incorrect conclusions.
>>
>> .... Machines don't have the
>> problem that prefixes attempt to solve, so we shouldn't worry about
>> them as a class of producers - prefixes, if they are kept, must solely
>> be optimized for human hand-authoring, as that was their original (and
>> currently unchanged) purpose.
>
> The claim that prefixes (in XML Names) were introduced to simplify hand
> authoring isn't really compatible with the "1. Motivation and Summary"
> in the original namespace spec
>
> http://www.w3.org/TR/1999/REC-xml-names-19990114/#sec-intro
>
> which gives as the only reason:
>
> URI references can contain characters not allowed in names, so cannot be
> used directly as namespace prefixes. Therefore, the namespace prefix serves
> as a proxy for a URI reference.
>
>
> In other words prefixes were introduced as a way to use URI-qualified names
> without changing the (assumed fixed, given from XML/SGML) XML name syntax.
> As such it applies equally to machine or hand produced content.
>
> The syntax rules for HTML are (or might be) a bit more flexible, so perhaps
> there would be more flexibility in an html context (but perhaps not; real life
> constrains html design perhaps more than SGML heritage constrains XML
> design, as you know). So while it may be true that the original motivation
> for prefixes doesn't apply to html, that wouldn't be related to machine
> generated content.
>
>
> David
>
Received on Monday, 11 April 2011 13:53:59 UTC
