
Re: connections

From: Alexander Johannesen <alexander.johannesen@gmail.com>
Date: Sat, 17 Apr 2010 20:30:39 +1000
Message-ID: <j2qf950954e1004170330mfeb79adfrd4c15e0e86c7c90b@mail.gmail.com>
To: Danny Ayers <danny.ayers@gmail.com>
Cc: Lee Feigenbaum <lee@thefigtrees.net>, Pat Hayes <phayes@ihmc.us>, Semantic Web <semantic-web@w3.org>

Hola,

Danny Ayers <danny.ayers@gmail.com> wrote:
>>> If we had compelling enough applications of the *data*, wouldn't we build
>>> the tools we need?
>>
>> Why?
>
> Because I want to know where the nearest kennels are, and when will be
> best to plant tomatoes.

No, no; why is there this automatic notion that if our data is
compelling enough, the tools we need will be created? I'm not
questioning the need or the desire for compelling applications, only
the assumption that once we get to stage 1, stage 2 will automatically
follow. We're all looking for that killer application, but perhaps
we're mistaking the killer app for techies for the killer app for the
real world?

> Google do seem to have noticed that the hocus pocus (whether or not
> they call it RDF) has its place.

I was more pointing to RDF being the culprit. When Google wants to buy
a few million bibliographic records, do they embrace MARC, MARC XML,
MODS, MADS, or RDF versions of the same? Nope, they create some simple
XML to represent the very basics they feel they need, and use that.
Same with most RDF data: given the silo mentality, the value of
datasets is incredibly hard to evaluate in the Linked Data world; you
have to take it on good faith that the quality within is good enough
for whatever killer app you're writing. And quite often you only
discover gaps and poor data quality once you've gone down the
development path for a while, which is never a pleasant journey. Are
you expecting killer apps based on data with faith-based quality
control, and big hurdles to evaluating its value?

>> The Semantic Web was crafted on the potential of fixing problems a tad
>> bit better than what we already had that already fixed the problems,
>
> I disagree somewhat - would take me a while to find the exact quote,
> but Tim has stated words to the effect that the semweb can make
> problems previously considered impossible become a bit obvious. (A
> point with which I agree strongly).

You are of course right, but all of that is theory. In practice we are
rehashing old problems in new ways. I guess what you're longing for is
the tipping point where we go from solving those problems to solving
new ones.

>> so basically fixing a non-existent problem. It was also built on the
>> promise of reusable ontologies on top of data, and even though the
>> promise wasn't held the potential is still there, for sure. But we
>> haven't got the tools to deal with that part of it all that took us
>> (speaking in generic fuzzy terms here) by surprise;
>
> But we (in the affluent West at least) each have the hardware,
> software and connectivity to put us in the zone of making real use of
> this stuff. I still don't understand why we are so slow at making it
> so.

Because we suck at coming up with good ideas, and we're even worse at
throwing something together to prove a point. If this stuff were easy,
we would probably see tons of it. But we don't, and I suspect the
tooling sucks in the sense that it is hard for people in the real
world to wrap their heads around it. SGML was brilliant, but hard to
fully grasp. And we know who's your generic markup daddy.

> "informolasses" goes straight into my vocab, thanks.

You heard it here first. :)

> I suspect you're right about domain-specific tools, that reflects the
> human issues, the need to solve specific problems.
> While the Web of docs can be very generalist, I'm not so sure the Web
> of (linked) data will be useful in the same way, at least in the near
> term.
> For example, when I'm in gardening mode, I want a gardening
> application - that uses global data but within a locale filter.

I have tons of similar problems. Even online tools I know how to use,
hack, and exploit can sometimes draw a blank. Like finding a guinea
pig breeder on the south coast of Sydney when you need one: 1) there
might not actually be any, or 2) there is no information about them on
the web to be crawled. The problem is not that they haven't published
their details in glorious Turtle.

But is this really the same problem as Linked Data and its lack of
killer apps, though?

>> All this data and their weak relationships are great to play with,
>> though, and it might shape things to come, but to get the masses to do
>> something interesting with it you need to convince them that
>> "ontology" is even a word that deserves a place in our daily
>> languages. (And don't tell me linked data doesn't need ontologies; a
>> kick in the shin if you do) Tough call, I'd say. If you say to them
>> "model", they immediately reach for Toad or some RDBMS thingy. If you
>> say "triplet" or, even worse, "tuple", they might think you're talking
>> about raising kids.
>
> Kick me in the shin - ontologies are no more and no less than shared
> vocabularies through which we can communicate.

I can't kick you in the shin over faulty reasoning about, or a
misunderstanding of, what I admittedly wrote poorly. :) The point was
that Linked Data uses ontologies because, as you say, they're shared
vocabularies. Not the most complex vocabularies, of course, but
vocabularies, or ontologies, nevertheless. I doubt swapping
"vocabulary" for "ontology" has the slightest effect on people's
understanding of how these things fit together, and *especially* not
of the potential therein.

What I don't understand is that people have no problem understanding
the names of elements in an XML schema, and linking them and their
data content to records or fields in a database (which is a fuzzy
undertaking when you get right down to it), but have huge problems
taking a triple or two and doing the same. There seems to be some
cognitive mismatch happening when you introduce the tiniest third
directional signifier. It's puzzling. Is the human brain so capable of
one-to-one mapping that it fails our attempts at many-to-any?
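To make that mismatch concrete, here's a minimal sketch using a
hypothetical book record (plain Python tuples, no real RDF vocabulary
or library involved): the same information people happily read as
nested XML, restated as subject-predicate-object triples, where the
predicate is the "third directional signifier" made explicit.

```python
# A hypothetical record, first the way people usually meet it: as XML.
xml_record = """
<book id="b1">
  <title>Weaving the Web</title>
  <creator>Tim Berners-Lee</creator>
</book>
"""

# The same information as plain (subject, predicate, object) tuples.
# The predicate is exactly the element name people already understood;
# it has just been pulled out and made an explicit third term.
triples = [
    ("b1", "type", "book"),
    ("b1", "title", "Weaving the Web"),
    ("b1", "creator", "Tim Berners-Lee"),
]

def objects(triples, subject, predicate):
    """Pattern-match over the tuples: all objects for a subject/predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects(triples, "b1", "title"))  # ['Weaving the Web']
```

Nothing here is harder than the XML-to-database mapping people do
every day; the triple form just refuses to hide the relationship name
inside nesting.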

>> In other words, the technology, its promises and potential means
>> *nothing* when a small paradigm shift is needed.
>
> Despite my negative comments recently, I do think that paradigm shift
> is happening.

Where and how?


Regards,

Alex
-- 
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
Received on Saturday, 17 April 2010 10:31:13 UTC
