RE: Dons flame resistant (3 hours) interface about Linked Data URIs

HTML worked because there *was* general consensus about the implied
semantics of the relatively "harmless" set of tags; and browsers were
extremely fault tolerant, as you rightly point out.

However, being fault tolerant about whether some string of data should
be "understood" as a title, a sub-title, a headline, a paragraph marker,
etc. matters orders of magnitude less than being fault tolerant about
the meaning of your actual content. The history of technology over the
past decades is peppered with costly failures caused by systems that
didn't "understand" each other's semantics. Simplicity, as you wish for
it, is simply not a substitute for precision and trust.

If you don't get that, you shouldn't be let near the semantic web!
[ BTW, your three hours are up! ;-) ]

As regards your theoretical dialogue, starting with
Q: "How do I do x?"
maybe another approach would be first to ask: "Why do you want to do x
in the first place? Who wants it, and what does it bring?"
All the discussion about using 303 and hash is indeed tiring and
misguided, but it highlights perhaps a more fundamental issue: should
so-called web architecture be built according to good design principles
or, in the absence of good design, by much-needed conventions? The
latter seems to dominate the debate at present, but if semantic
technologies are to develop, there needs to be more attention to
fundamental design. IMO, the foundations are broken, and they will not
support the weight of what the semweb expects of them.
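
For anyone who has not followed that debate: the two conventions both
try to keep the URI of a thing distinct from the URI of the document
describing it. A minimal sketch in Turtle, with all URIs hypothetical:

@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.org/peter>           # the document you can actually retrieve
    a foaf:Document ;
    foaf:primaryTopic <http://example.org/peter#me> .

<http://example.org/peter#me>        # the person, distinguished by the fragment
    a foaf:Person ;
    foaf:name "Peter" .

# The 303 convention makes the same split over HTTP instead:
#   GET http://example.org/id/peter   ->  303 See Other
#   Location: http://example.org/doc/peter

Both achieve the distinction; the argument is over which mechanism a
publisher should be obliged to use.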

Regards,
Peter

-----Original Message-----
From: semantic-web-request@w3.org [mailto:semantic-web-request@w3.org]
On Behalf Of Hugh Glaser
Sent: Friday, 10 July 2009 2:22
To: semantic-web@w3c.org; public-lod@w3.org
Subject: Dons flame resistant (3 hours) interface about Linked Data URIs

I am finding the current discussion really difficult.
Those who do not learn from history are condemned to repeat it.

As an example:
In the 1980s there were a load of hypertext systems that required the
users to do a bunch of stuff to buy into them. They had great
theoretical bases, and their proponents had unassailable arguments as
to why their way of doing things was right. And they really were
unassailable - they were right.

They essentially died.

The web came along - I could publish a bunch of HTML pages about
whatever I wanted, simply by putting them in some directory somewhere
that I had access to (name told to me by my sysprog guru), and suddenly
I was "on the web". If the HTML syntax was wrong it was the browser's
problem - don't come back and tell me I did wrong, make what sense of
it you can, it's your problem.

Such simplicity, which was understandable by a huge swathe of people
who were using computers, and acceptable to their support staff, simply
swept all before it (including WAIS, FTP, Gopher).
Arguments about how "broken" the model was because of things like links
breaking and security problems were just ignored, and now seem almost
archaic to most of us.

I want the same for the Semantic Web/Linked Data.

Discussions of 303 and hash just don't cut the mustard in comparison,
so I find it hard to engage in an extended discussion about them.
Discussion:
Q: "How do I do x?"
Me: "Try this."
Q: "This doesn't work, what now?"
That immediately says to me that "this" must be wrong - we should go
away and think of something better.

So would it really be so bad if people just started putting documents
with RDF in them on the web, where the URI for the document and the URI
for the thing it was about (the non-information resource, or NIR) got
confused?
All I actually want is a URI that resolves to some RDF.
And perhaps then people would not run off to RDFa so quickly?

If I can't simply publish some RDF about something like my dog, by
publishing a file of triples that say what I want at my standard web
site, we have broken the system.
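
Concretely, something like the following file of triples, dropped into
a directory on an ordinary web server, ought to be enough. This is only
a sketch: the URI, the filename, and the dog's name are all made up.

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Served as, say, http://example.org/~hugh/dog.ttl (hypothetical).
# The dog and the file share a URI up to the fragment, and nobody
# has to settle the document-vs-thing question before publishing.
<http://example.org/~hugh/dog.ttl#fido>
    foaf:name "Fido" ;
    rdfs:comment "Hugh's dog, described in a plain file of triples." .

Whether #fido names the dog or a section of the file is exactly what
the 303-and-hash debate argues about; for getting data onto the web, it
need not block anyone.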

<3 hours flame resistance starts />

Best
Hugh
