
Re: Important Question re. WebID Verifiers & Linked Data

From: Mo McRoberts <mo.mcroberts@bbc.co.uk>
Date: Thu, 22 Dec 2011 17:41:16 +0000
Cc: public-xg-webid@w3.org
Message-Id: <0C0FB526-D1C4-4B65-AFDB-FD8AE203BCF9@bbc.co.uk>
To: Kingsley Idehen <kidehen@openlinksw.com>

On 22 Dec 2011, at 17:15, Kingsley Idehen wrote:

> On 12/22/11 11:33 AM, Mo McRoberts wrote:
>> On 22 Dec 2011, at 16:23, Kingsley Idehen wrote:
>>> Turtle is missing from the list. Contemporary Linked Data tools prefer Turtle over RDF/XML. By this I mean: there are far more tools today that process Turtle than there are those that process RDF/XML.
>> citation?
>> There are lots of tools which support Turtle, and some people find it easier to read and write by hand than RDF/XML.
>> No software which supports Turtle is meant to be supporting that and *not* RDF/XML, given RDF/XML's position in the specs…
> You can pop over to the Linked Open Data mailing list and ask that question.  I guess my close proximity to publishing and consuming Linked Data across the Linked Data Cloud hasn't been factored into your request for citations.

On the contrary, I assumed that your proximity to such meant that you’d have a link to hand to a list of RDF libraries showing which ones don’t support RDF/XML.

> Nobody publishes solely RDF/XML circa 2011. Many publish Turtle and N-Triples without RDF/XML.

I didn’t claim otherwise.

> No serious Linked Data player codes for a specific representation. They de-reference URIs, leverage Linked Data discovery patterns, and in most cases perform a modicum of content negotiation.

And this negates my statement that RDF *consumers* can all deal with RDF/XML how, exactly?
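For concreteness, the "modicum of content negotiation" described above amounts to a consumer stating its format preferences when de-referencing a profile URI. A minimal sketch, using only the standard library; the URI, media types, and q-values here are illustrative examples, not requirements from any spec:

```python
# Illustrative only: how an RDF consumer might express format preferences
# when de-referencing a (hypothetical) WebID profile URI over HTTP.
from urllib.request import Request

profile_uri = "https://example.org/people/alice#me"  # hypothetical WebID

# Preference order is the consumer's own choice; q-values rank alternatives.
accept = ", ".join([
    "text/turtle;q=0.9",
    "application/rdf+xml;q=0.8",
    "application/xhtml+xml;q=0.5",  # e.g. an RDFa-bearing profile page
])

req = Request(profile_uri, headers={"Accept": accept})
print(req.get_header("Accept"))
# text/turtle;q=0.9, application/rdf+xml;q=0.8, application/xhtml+xml;q=0.5
```

The request is only constructed here, not sent; the point is that a negotiating consumer never hard-codes a single representation, which is the behaviour being debated.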

>>>> Similarly, there needs to be wording somewhere which makes HTTP and HTTPS a MUST with other schemes a MAY, but reading the spec I couldn't figure out entirely where to insert it -- I think the first couple of paras of §2.1 may need rewording to make clear the relationship between the SAN URI and the document.
>>> URI abstraction is scheme agnostic. HTTP should be a suggestion. In reality, nobody is going to make an HTTP alternative as part of their WebID implementation. At the same time, implementers will support other schemes and bridge to HTTP. A mailto: or acct: scheme URI will always be a more intuitive WebID than an http: scheme URI.
>> No. In reality, do this, and somebody will come along with their “WebID verification agent” which only supports their pet scheme, and claim it's perfectly in line with the specs, and the only people who will suffer will be end-users.
> WebID cannot claim to be AWWW and Linked Data compliant if what you claim is true. As I said yesterday, WebID either conforms or it doesn't. There are no gray areas here.

The words “linked data” appear once in the specification:

“WebID authentication can also be used for automatic authentication by robots, such as web crawlers of linked data repositories”

The Linked Data design note states:

* Use HTTP URIs so that people can look up those names.

Your assertions about what is or isn’t negotiable in all of this seem to be solely your own.

> A WebID client (e.g. a Verifier) should de-reference URIs. Use content negotiation and resource discovery patterns to locate resource types it can handle.

This is an implementor’s specification. It must state precisely what URI schemes conforming implementations are required to support (and ideally not limit what *other* schemes they *can* support). If the answer is “none”, WebID is practically useless. If the answer is “pick any you like”, WebID is practically useless. If the answer is “all of them”, WebID is unimplementable. 
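The kind of baseline being argued for here is easy to express. A sketch, assuming http/https as the MUST-support schemes and treating anything else as per-implementation MAY; the scheme lists are illustrative, not taken from the WebID spec:

```python
# Sketch of a verifier's scheme baseline: http/https as MUST, anything
# else only if the implementation opts in (MAY). Illustrative only.
from urllib.parse import urlparse

REQUIRED_SCHEMES = {"http", "https"}   # every conforming verifier handles these

def can_dereference(uri, extra_schemes=frozenset()):
    """True if this verifier can handle the URI's scheme.

    extra_schemes holds whatever optional schemes (acct:, mailto:, ...)
    a particular implementation chooses to support.
    """
    scheme = urlparse(uri).scheme.lower()
    return scheme in REQUIRED_SCHEMES or scheme in extra_schemes

print(can_dereference("https://example.org/profile#me"))        # True
print(can_dereference("mailto:alice@example.org"))              # False
print(can_dereference("mailto:alice@example.org", {"mailto"}))  # True
```

With a stated baseline, a certificate holder knows that an http: profile URI will work everywhere, while exotic schemes degrade predictably rather than silently.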

It’s perfectly reasonable for a specification to state baselines. This is why RFC2119, and the military equivalents which preceded it, exist.

> Please also note, the first port of call for WebID would be the Linked Open Data community. This community understands these matters. The Web 2.0 community understands resource discovery patterns via <head/> and <link/> relations. They typically know how to handle HTML and JSON. XML is seen as a relic most are veering away from.
> If WebID makes the tweaks I am suggesting, it will serve Linked Data aficionados and Web 2.0 developers effectively, without undue comprehension overhead, uptake friction, or political inertia.
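The <head>/<link> discovery pattern mentioned above can be sketched with the standard library's HTML parser: scan a profile page's markup for <link rel="alternate"> elements pointing at RDF serializations. The sample document and its rel/type/href values are hypothetical:

```python
# Sketch of "Web 2.0" resource discovery: find alternate RDF
# representations advertised in an HTML profile document's <head>.
from html.parser import HTMLParser

class AlternateFinder(HTMLParser):
    """Collects (media type, href) pairs from <link rel="alternate"> tags."""
    def __init__(self):
        super().__init__()
        self.alternates = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate":
            self.alternates.append((a.get("type"), a.get("href")))

html_doc = """<html><head>
<link rel="alternate" type="text/turtle" href="/profile.ttl"/>
<link rel="alternate" type="application/rdf+xml" href="/profile.rdf"/>
</head><body>...</body></html>"""

finder = AlternateFinder()
finder.feed(html_doc)
print(finder.alternates)
# [('text/turtle', '/profile.ttl'), ('application/rdf+xml', '/profile.rdf')]
```

A consumer could then pick whichever advertised representation it can parse, which is the discovery-over-mandate approach being advocated in the quoted text.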

Great. Now what about _all of the other people_?

>>> Turtle has to be there right now. Keeping it out is also kinda contradictory. Remember, SPARQL query patterns used in WebID examples are based on Turtle. Without Turtle we wouldn't have SPARQL. Without SPARQL (albeit an implementation detail) you wouldn't have the exponentially growing Linked Open Data Cloud of today.
>> It’s not 'keeping it out', but it’s not mandated.
> Let's not play with words here.

Actually, let’s do play with words, because this whole discussion is about precisely that:

> Turtle should have equal standing with RDF/XML.
> Microdata should have equal standing with RDFa.
> If the WebID spec cannot do this, then I will tell you now, it's going nowhere fast!

It’s nothing to do with “cannot do this”, because with the wording I proposed it’s very open-ended. Indeed it’s very simple. You are proposing that:

- Turtle should be added to the list of data formats which consumers MUST support

- Microdata should be added to the list of data formats which consumers MUST support

There, we have two straightforward issues, one of which already exists, having been teased out of an equally arduous thread. I honestly don't understand why raising them requires such epic essays; it's a waste of everybody's time and bandwidth.

>> With each that you mandate, though, you’re increasing the burden on implementors, and DECREASING the likelihood that people who build ordinary websites will actually bother with any of it.
> No I am not. I am requesting that WebID sticks to the AWWW. There is no specificity in the AWWW. It allows implementers to choose their components. This implicit flexibility leads to self-standardization, as you can see re. HTML, HTTP, etc.

Why bother specifying anything at all?

In fact, why bother writing a _specification_ at all? Why not just write a position paper and be done with it?

> Do you think URIs, HTTP, and HTML succeeded due to specification mandates? Of course not. They succeeded on the back of their implicit merits showcased via applications.

> Engineers make choices when implementing specs. Specs are not supposed to teach engineering.

They’re not supposed to teach engineering _principles_. They ARE absolutely supposed to tell you what you can and cannot do, and if you’re authoring a specification you have to think about what the burdens being imposed are, and what the consequences might be.

As an engineer, I should not need to ask myself the question “what happens when I encounter a WebID certificate containing a set of profile URIs, none of which I can handle?” because the specification should answer that.

As a user, I’m not even going to get as *far* as asking myself what happens when I encounter a WebID consumer which can’t handle my URIs; when it occurs I’ll just ditch WebID as being useless for practical purposes.


Mo McRoberts - Technical Lead - The Space,
0141 422 6036 (Internal: 01-26036) - PGP key CEBCF03E,
Project Office: Room 7083, BBC Television Centre, London W12 7RJ

Received on Thursday, 22 December 2011 17:41:48 UTC
