Re: Review Comments for draft-nottingham-http-link-header-05

From: Julian Reschke <julian.reschke@gmx.de>
Date: Fri, 17 Apr 2009 20:43:20 +0200
Message-ID: <49E8CDC8.7060905@gmx.de>
To: "Sean B. Palmer" <sean@miscoranda.com>
CC: HTTP Working Group <ietf-http-wg@w3.org>, www-archive <www-archive@w3.org>
Sean B. Palmer wrote:
> On Fri, Apr 17, 2009 at 6:29 PM, Julian Reschke wrote:
> 
>> But DC is not using PURL, so this is just a similar problem.
> 
> But DC *is* using PURL. You even said so in your email:
> 
>   ‘Retrieving "http://purl.org/dc/elements/1.1/date" yields a 302 redirect.
>   So is Dublin Core violating WebArch, and breaking RDF?’
> 
> The purl.org domain is the PURL server.

Oops. Sorry.
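For reference, the retrieval behaviour being discussed looks roughly like this; the Location target shown is purely illustrative (the PURL server decides where each PURL actually redirects):

```
GET /dc/elements/1.1/date HTTP/1.1
Host: purl.org

HTTP/1.1 302 Found
Location: http://example.org/dc-terms/date
```

The WebArch concern is about the status code: a 303 would say "the URI names something other than the retrieved document", while a 302 carries no such signal.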

>> That being said: if something as widely used as DC (it is, isn't it)
>> violates the principle, what *effect* does it have? In practice?
> 
> I don't know, it's not my responsibility to defend TAG resolutions and
> RDF specifications. I don't even agree with their design decisions.
> But just ignoring them is going to create new problems.

Understood.

But the fact that the distinction (or the lack of implementations of it) 
seems to have no effect in practice makes me very skeptical 
about future adoption.

> And anyway, we're reviewing something which is supposed to become an
> RFC. It would be a little strange for us to pick and choose which
> standards we follow and which we don't!

A TAG finding is not a standard. And even if it were, saying nothing 
about retrieval wouldn't conflict with it.

>> What seems to be problematic [i]s the "Information Resource"
> 
> Yes, but as soon as you start to use a URI for something in this way,
> you will get this problem. And Link is so obviously close to the RDF
> model that people are bound to say, “we can harvest RDF from this.”

Indeed.
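To make the harvesting concern concrete (all URIs here are hypothetical): given a response for a context resource that carries a URI extension relation, an RDF-aware consumer is tempted to read the relation as a predicate.

```
# Response for http://example.org/ch1 included the header:
#   Link: <http://example.org/ch2>; rel="http://example.org/rel/next-chapter"
# A harvester might read that as the RDF triple:
<http://example.org/ch1> <http://example.org/rel/next-chapter> <http://example.org/ch2> .
```

At that point the relation URI is being used as an identifier, and the httpRange-14 question of what it identifies becomes unavoidable.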

> What I'm saying is, you cannot avoid this problem by ignoring it.

But people are, and it doesn't seem to cause problems (I'm honestly 
trying to understand the implications!).

> As soon as I saw this URI extension relations mechanism in the draft,
> I realised that it would cause trouble, so I suggested an alternative
> which wouldn't.
> 
> Another alternative would be to remove the extension relations
> entirely. This would be fine by me, and I think that extensibility in
> this area tends to be overrated. Consider, for example, the URN nid

Extensibility is already there for Atom relations, and as far as I 
recall, the TAG likes URI-based extensibility. I wouldn't want to 
sacrifice it because of the 303 debate.

> mechanism. Or even the TAG's current issue about URIs for media types:
> 
> http://www.w3.org/2001/tag/issues.html#uriMediaType-9
> 
> uriMediaType-9 is a strange issue because media types are so seldom
> registered, and so seldom is there a good reason to create a new one,
> that it's worth having a registry for them. You don't need URIs to
> solve this.
> 
> On the other hand, the situation with @rel is a hot topic because of
> the way that the HTML WG have tried to solve the extensibility
> situation, by having a link types registry wiki. Some people object to

...including me....

> this, and Sam Ruby commented recently on the matter:
> 
> “If transparency and approachability are the solutions, then we need
> something radically more transparent and approachable than a wiki
> page.  Now that’s a sobering thought.”
> — http://intertwingly.net/blog/2009/04/14/
> 
> Reversed domain names would solve the social problem without getting
> into architectural permathreads about information resources. But

Reversed domain names would be a new approach to distributed 
extensibility. If it were used, people would build bridges between 
the new system and URI-based extensibility anyway. So, in essence, it 
would just introduce a new level of indirection, and the original issue 
wouldn't go away.
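The bridging point above can be sketched in a few lines. This is a hypothetical mapping convention, not anything proposed in the draft: a reversed-domain relation name (e.g. "com.example.next") is mechanically turned back into an HTTP URI, which is exactly the indirection being described.

```python
def bridge(rel: str) -> str:
    """Map a reversed-domain relation name to an HTTP URI.

    Hypothetical convention for illustration only: the first two
    labels are taken as the reversed registered domain, the rest
    as the local relation name. URIs pass through unchanged.
    """
    if "://" in rel:
        return rel  # already URI-based; nothing to bridge
    labels = rel.split(".")
    domain = ".".join(reversed(labels[:2]))   # "com.example" -> "example.com"
    local = ".".join(labels[2:])              # remaining labels form the name
    return "http://%s/rel/%s" % (domain, local)
```

Once such a bridge exists, consumers end up handling the URI form anyway, so the information-resource question resurfaces one level down.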

> whatever the solution, as long as the issues are dealt with properly
> then the specification will have integrity that it ought to have as an
> RFC.

I like the idea of the semantic web. I want things to be compatible with 
RDF. I believe that distributed extensibility based on URIs is good. I 
use it myself.

However: I'm unconvinced that the 303 solution is a good one, or that 
the semantic web will stop working if it is ignored. If the semantic 
web community is so convinced of it, why isn't there a W3C 
Recommendation that clearly states how to deal with RDF predicates that 
happen to be identified by an HTTP URI? (Or is there?)

BR, Julian
Received on Friday, 17 April 2009 18:44:15 GMT
