
Re: Dons flame resistant (3 hours) interface about Linked Data URIs

From: Hugh Glaser <hg@ecs.soton.ac.uk>
Date: Fri, 10 Jul 2009 12:08:13 +0100
To: Steve Harris <steve.harris@garlik.com>, Richard Light <richard@light.demon.co.uk>
CC: semantic-web at W3C <semantic-web@w3c.org>, "public-lod@w3.org" <public-lod@w3.org>
Message-ID: <EMEW3|d964929549feea4d438bed8cd99fceebl69C8Y02hg|ecs.soton.ac.uk|C2C0%hg@ecs.soton.ac.uk>
Thank you all for not (yet) incinerating me.
Some responses:

I'm not really fussed about html documents - to me they aren't really "in" the semantic web, except that the URL is a string which can be resolved, using the same access mechanisms as my URIs, to find some more text. I do publish html documents of my RDF, but that is only to permit aficionados to browse the raw data.
If I actually have html documents, then something like RDFa is probably a great way of doing things.
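For instance, a hypothetical fragment in RDFa 1.0 syntax (nothing here is from the original mail beyond the FOAF vocabulary and the URI used in the triples example further down):

```html
<!-- Hypothetical RDFa fragment: embeds a foaf:name triple in
     ordinary html, using the 2008 RDFa-in-XHTML syntax -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="http://wrpmo.com/hg">
  My name is <span property="foaf:name">Hugh Glaser</span>.
</div>
```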

Many people worry about the modelling, which is great and why RDF is so good.
But I start more from the consumer's end rather than the modeller's, and work back through the publisher.
Does anyone actually have a real application (and I am afraid I don't really count semantic web browsers as applications) that has a problem getting the RDF if I have a file at
which contains
<http://wrpmo.com/hg> <http://xmlns.com/foaf/0.1/name> "Hugh Glaser" .
<http://wrpmo.com/hg> <http://www.aktors.org/ontology/portal#has-web-address> "http://www.ecs.soton.ac.uk/people/hg" .
and I then use http://wrpmo.com/hg as one of "my" URIs?
Certainly doesn't bother my applications.
And your average enthusiastic sysprog or geek can understand and do this, I think - that's why RDFa is getting popular.
I know that things like dc:creator can be a little problematic, but we are paying a high price, I think.
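The consumer's side of this can be sketched in a few lines. This is a hypothetical consumer, not anything from the thread; the mini-parser below handles only the simple <uri> <uri> <uri-or-literal> pattern of the example file above, and (in the lenient spirit argued for later in the mail) silently skips lines it cannot parse:

```python
import re

# Regex for the simple triple pattern used in the example file:
# <uri> <uri> <uri>  or  <uri> <uri> "literal", terminated by " ."
TRIPLE = re.compile(r'<([^>]*)>\s+<([^>]*)>\s+(?:<([^>]*)>|"([^"]*)")\s*\.\s*$')

def parse_triples(text):
    """Yield (subject, predicate, object) tuples; lines that do not
    match are skipped rather than treated as fatal errors."""
    for line in text.splitlines():
        m = TRIPLE.match(line.strip())
        if not m:
            continue  # broken or unsupported line: get what we can
        s, p, o_uri, o_lit = m.groups()
        yield (s, p, o_lit if o_uri is None else o_uri)

# The two triples from the example above:
data = '''<http://wrpmo.com/hg> <http://xmlns.com/foaf/0.1/name> "Hugh Glaser" .
<http://wrpmo.com/hg> <http://www.aktors.org/ontology/portal#has-web-address> "http://www.ecs.soton.ac.uk/people/hg" .'''

triples = list(parse_triples(data))
```

Whether the object is a URI in angle brackets or a quoted literal, the consumer ends up with the same (subject, predicate, object) tuple, which is all an application needs.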

Steve's comments on using vi are interesting.
Yes, we used vi and hackery.
In fact I still generate those old web pages by running a Makefile which finds a .c source and calls the C preprocessor to generate the .html pages, and I certainly started this in the early 90s.
At the moment I use all sorts of hackery to generate the millions of triples, but the deployment is complex.

Is it really such a Bad Thing if I do http://wrpmo.com/hg, if the alternative is that I won't publish anything?
Surely something is better than nothing?
In any case, just like html browsers, linked data consumers should deal with broken RDF and get the best they can out of it; going back and telling the server that the document was malformed, or reporting to a "user", is no more an option in the linked data world than it is in the current web.

Of course, as a good citizen (subject? - footsoldier?) of linked data and the semantic web, I hope I do all the stuff expected of me, but it doesn't mean I think it is the right way.

Thank you very much for the considered responses to such an old issue.

On 10/07/2009 11:13, "Steve Harris" <steve.harris@garlik.com> wrote:

On 10 Jul 2009, at 10:56, Richard Light wrote:
> In message <7544285B-E1B1-48A4-96E0-BDED62175EA8@garlik.com>, Steve
> Harris <steve.harris@garlik.com> writes
>> On 10 Jul 2009, at 01:22, Hugh Glaser wrote:
>>> If I can't simply publish some RDF about something like my dog, by
>>> publishing a file of triples that say what I want at my standard
>>> web site,
>>> we have broken the system.
>> I couldn't agree more.
>> <rant subject="off-topic syntax rant of the decade">
>> Personally I think that RDF/XML doesn't help, it's too hard to
>> write by hand. None of the other syntaxes for RDF triples really
>> have the stamp of legitimacy. I think that's something that could
>> really help adoption, the same way that strict XHTML, in the
>> early 1990's wouldn't have been so popular with people (like me)
>> who just wanted to bash out some text in vi.
>> </>
> Well, in my view, when we get to "bashing out" triples it isn't the
> holding syntax which will be the main challenge, it's the Linked
> Data URLs. Obviously, in a Linked Data resource about your dog, you
> can invent the URL for the subject of your triples, but if your Data
> is to be Linked in any meaningful way, you also need URLs for their
> predicates and objects.
> This implies that, without a sort of Semantic FrontPage (TM) with
> powerful and user-friendly lookup facilities, no-one is going to
> bash out usable Linked Data.  Certainly not with vi.  And if you
> have such authoring software, the easiest part of its job will be
> rendering your statements into as many syntaxes as you want.

I think that's a fallacy. The web wasn't bootstrapped by people
wielding Frontpage*. It was people like Hugh and me, churning out HTML
by hand (or often by shell script), mostly by "cargo cult" copying of
existing HTML we found on the Web. That neatly sidesteps the schema
question, as people will just use whatever other people use, warts,
typos, and all.

The tools for non-geeks phase comes along much later, IMHO. First we
have to make an environment interesting enough for non-geeks to want
to play in.

Happy to be demonstrated wrong of course.

- Steve

* Frontpage wasn't released until late '95, and wasn't widely known
until late '96 when it was bought by MS. By which time the Web was a
done deal.

Steve Harris
Garlik Limited, 2 Sheen Road, Richmond, TW9 1AE, UK
+44(0)20 8973 2465  http://www.garlik.com/
Registered in England and Wales 535 7233 VAT # 849 0517 11
Registered office: Thames House, Portsmouth Road, Esher, Surrey, KT10
Received on Friday, 10 July 2009 11:10:04 UTC
