
Re: Review Comments for draft-nottingham-http-link-header-05

From: Julian Reschke <julian.reschke@gmx.de>
Date: Fri, 17 Apr 2009 22:26:20 +0200
Message-ID: <49E8E5EC.1040000@gmx.de>
To: "Sean B. Palmer" <sean@miscoranda.com>
CC: HTTP Working Group <ietf-http-wg@w3.org>, www-archive <www-archive@w3.org>
Sean B. Palmer wrote:
> On Fri, Apr 17, 2009 at 7:43 PM, Julian Reschke wrote:
> 
>> A TAG finding is not a standard.
> 
> Well, who decides how much standing a TAG finding, or a W3C
> recommendation, or an IETF RFC, or an ISO standard has?

Dunno.

What I was trying to say is that if the W3C wants to make a normative 
statement, it should do so by issuing a Recommendation, because that's 
the closest thing in the W3C publication process to a standard.

> But even socially speaking, there is some obvious substance here: the
> TAG finding was announced by Roy Fielding, an author of the HTTP RFC.
> And the TAG chair is another author of the HTTP RFC.

I happen to be aware of that, as I've been spending some time editing 
the latter lately :-).

Anyway, before you cite Roy as an authority sanctioning the httpRange-14 
finding, you may want to search the TAG mailing list archive for more of 
his emails around this topic 
(<http://lists.w3.org/Archives/Public/www-tag/2008Feb/0086.html> is an 
example).

> This isn't just somebody's opinion on a mailing list, it's a W3C
> resolution of something that had been causing intense argument for
> years and years.

And continues to do so.

>> And even if it were, not stating anything about retrieval
>> wouldn't conflict with it.
> 
> Well it makes the Web Linking specification inconsistent. If you can
> use extension relations as RDF properties, then the only way you can
> tell whether they're valid is to dereference them. And Web Linking
> says you SHOULD NOT dereference them... so why force people to do it?

Could you elaborate on why you need to dereference the URI of an RDF 
predicate in order to validate it?
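To make the point of disagreement concrete, here is a sketch (illustrative only, not part of either spec; the function name and return strings are mine) of what "dereferencing" a predicate URI amounts to, and how the TAG's httpRange-14 resolution reads the resulting status code:

```python
# Illustrative sketch: interpreting an HTTP status code per the TAG's
# httpRange-14 resolution. The function name and strings are made up.

def httprange14_reading(status: int) -> str:
    """Interpret a status code per the httpRange-14 resolution."""
    if 200 <= status < 300:
        # A 2xx response means the URI identifies an information resource.
        return "information resource"
    if status == 303:
        # A 303 (See Other) means the URI could identify any resource;
        # the redirect target describes it.
        return "could be any resource"
    if 400 <= status < 500:
        # A 4xx response leaves the nature of the resource unknown.
        return "nature unknown"
    return "no constraint stated by the resolution"

# The point in dispute: a predicate URI answering 200 would, on this
# reading, name an information resource rather than an abstract property.
```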

> You can be compatible with RDF, or you can be incompatible. But at the
> moment it's being quasi-compatible, which isn't good.
> 
>> (I'm honestly trying to understand the implications!).
> 
> The most concise way that I can put it is this:
> 
> If Link is supposed to be compatible with RDF, then you have to
> explain why you're not mandating 303 responses for extension
> relations. If Link is not supposed to be compatible with RDF, then you
> have to let RDF people know so that they won't be misled.

Again, please cite a normative document that states that the URI used to 
identify an RDF predicate must identify a non-information resource.
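For reference, the publishing pattern a mandated 303 would imply can be sketched as a minimal WSGI app (all paths and names here are hypothetical, following the "Cool URIs" slash-redirect recipe rather than any normative text):

```python
# Sketch of the 303 ("Cool URIs") publishing pattern under discussion.
# Paths and handler name are made up. URIs under /id/ name
# non-information resources (e.g. RDF properties) and redirect with 303
# to a describing document under /doc/, which is served with 200.

def vocab_app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path.startswith("/id/"):
        term = path[len("/id/"):]
        # 303 See Other: the URI itself names the abstract term;
        # the Location header points at a document *about* it.
        start_response("303 See Other", [("Location", "/doc/" + term)])
        return [b""]
    if path.startswith("/doc/"):
        # The describing document is an information resource, so 200 is fine.
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<html><body>description of the term</body></html>"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```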

>> Extensibility is already there for Atom relations
> 
> It'd be interesting to survey how often Atom IRI relations are used in
> proportion to Atom registered relations.

Dunno.

> ...
>> Reversed domain names would be a new approach to distributed
>> extensibility. If it were used, people would build bridges between
>> the new system and URI-based extensibility anyway.
> 
> Possibly, but possibly not? The HTML WG link types registry makes it
> look like people care about extensibility. But, to be cynical, perhaps
> they really care about politics and transparency and so on?

What registry do you mean? The one proposed by the WHATWG? The one 
proposed by the XHTML2 WG?

As far as I can tell, HTML4 didn't have a registry, which is part of the 
problem we need to solve.

> Those are good things to care about, but it's not Web Linking's place
> to try to fix that. The point should be to judge how often people are
> really going to want to implement extensions. How can we gauge how
> popular it will be?

We have evidence that people make up new relation names. Whether a 
single central registry is sufficient to deal with that is another 
question. History shows that people try to avoid registration 
procedures, so having another way to avoid naming conflicts seems 
worthwhile.

>> I believe that distributed extensibility based on URIs is good.
> 
> Where do you use it, incidentally?

Myself? In WebDAV (properties, protocol extensions such as condition 
names or report names). In JCR (Java Content Repository), identifying 
property and node types. When extending other people's XML vocabularies. 
All the time, really.
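The XML case can be sketched in a few lines (the vendor namespaces are made up for illustration): two parties extend the same document with the same local name, and the names still cannot clash, because each is qualified by a namespace URI its owner controls.

```python
# Sketch of URI-based distributed extensibility in XML.
# The vendor namespace URIs below are invented for the example.
import xml.etree.ElementTree as ET

prop = ET.Element("{DAV:}prop")
# Both vendors independently pick the local name "rating"; no clash occurs,
# because each element name is qualified by its own namespace URI:
ET.SubElement(prop, "{http://vendor-a.example/ns}rating").text = "5"
ET.SubElement(prop, "{http://vendor-b.example/ns}rating").text = "PG"

a = prop.find("{http://vendor-a.example/ns}rating")
b = prop.find("{http://vendor-b.example/ns}rating")
```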

>> If the semantic web community is so convinced about it, why isn't
>> there a W3C Recommendation which clearly states how to deal with
>> RDF predicates that happen to be identified by an HTTP URI?
> 
> I'd guess because the RDF Core WG was disbanded before the TAG
> resolution was made, and because it tends to be covered in notes such
> as the following:
> 
> http://www.w3.org/TR/swbp-vocab-pub/
> Best Practice Recipes for Publishing RDF Vocabularies
> 
> http://www.w3.org/TR/cooluris/
> Cool URIs for the Semantic Web
> 
> Again I'm not defending this design, and my proposal is to eschew the
> thing entirely. But if you're going to encourage compatibility with
> RDF, it's got to be done right if Web Linking is to be a decent
> specification.

I think it would be helpful if you could point out how having link 
relation URIs resolve to 200 actually conflicts with an RDF-related 
spec, and also how it would have an effect in practice (given that DC 
URIs today do not resolve to 303s either).

BR, Julian
Received on Friday, 17 April 2009 20:27:12 GMT
