Re: connections

On 4/17/2010 6:30 AM, Alexander Johannesen wrote:
> Hola,
>
> Danny Ayers <danny.ayers@gmail.com> wrote:
>>>> If we had compelling enough applications of the *data*, wouldn't we build
>>>> the tools we need?
>>>
>>> Why?
>>
>> Because I want to know where the nearest kennels are, and when it
>> will be best to plant tomatoes.
>
> No, no; why is there some automatic notion that if our data is
> compelling enough, the tools we need will be created? I'm not
> questioning the need or desire for compelling applications, only the
> assumption that once we get to stage 1, stage 2 will automatically
> follow. We're all looking for that killer application, but perhaps
> we're mistaking the killer app for techies for the killer app for the
> real world?

*shrug* I suppose it's not an ironclad conclusion, but I'm a pretty big 
believer that if there is a compelling use of the data (not compelling 
data, per se) that we know about, someone will do the legwork to build 
that application -- particularly if it's a monetizable one.

>> Google do seem to have noticed that the hocus pocus (whether or not
>> they call it RDF) has its place.
>
> I was more pointing to RDF being the culprit. When Google wants to buy
> a few million bibliographic records, do they embrace MARC, MARC XML,
> MODS, MADS, or RDF versions of the same? Nope, they create some simple
> XML to represent the very basics they feel they need, and use that.
> Same with most of the RDF data; with the silo mentality, the value of
> datasets is incredibly hard to evaluate in the Linked Data world; you
> have to take it on good faith that the quality within is good enough for
> whatever killer app you're writing. And quite often you only discover
> gaps and poor data quality once you've gone down the path of
> development for a while, never a pleasant journey. Are you expecting
> killer apps based on data with faith-based quality control, and big
> hurdles for evaluation of value?
>
>>> The Semantic Web was crafted on the potential of fixing problems a tad
>>> bit better than the tools we already had, which already fixed them,
>>
>> I disagree somewhat - it would take me a while to find the exact quote,
>> but Tim has stated words to the effect that the semweb can make
>> problems previously considered impossible become a bit obvious. (A
>> point with which I agree strongly).
>
> You are of course right, but all of that is theory. In practice we are
> rehashing old problems in new ways. I guess what you're longing for is
> the tipping point of going from solving those problems to solving new
> ones.

I think that's a good summary. Personally, I embrace the family of W3C 
Semantic Web technologies particularly because I find many of them to be 
a standardized form of what I'd otherwise consider best practices for 
solving problems anyway -- so I'm more in line with your original 
observation... I just happen to be happy enough to be solving problems a 
bit better than I would otherwise. (And for some classes of problems, 
I've seen "a bit" be big enough to be the tipping point between feasible 
and infeasible -- I'm not sure I've seen that with Linked Open Data though.)

>>> so basically fixing a non-existent problem. It was also built on the
>>> promise of reusable ontologies on top of data, and even though the
>>> promise wasn't held the potential is still there, for sure. But we
>>> haven't got the tools to deal with that part of it all, which took us
>>> (speaking in generic fuzzy terms here) by surprise;
>>
>> But we (in the affluent West at least) each have the hardware,
>> software and connectivity to put us in the zone of making real use of
>> this stuff. I still don't understand why we are so slow at making it
>> so.
>
> Because we suck at coming up with good ideas, and even worse at
> throwing something together to prove a point. If this stuff was easy,
> we probably would see tons of it. But we don't, and I suspect that the
> tooling sucks in the sense that it is hard for people in the real world
> to wrap their heads around it. SGML was brilliant, but hard to fully
> grasp. And we know who's your generic markup daddy.
>
>> "informolasses" goes straight into my vocab, thanks.
>
> You heard it here first. :)
>
>> I suspect you're right about domain-specific tools; that reflects the
>> human issues, the need to solve specific problems.
>> While the Web of docs can be very generalist, I'm not so sure the Web
>> of (linked) data will be useful in the same way, at least in the near
>> term.
>> For example, when I'm in gardening mode, I want a gardening
>> application - one that uses global data but within a locale filter.
>
> I have tons of similar problems. Even online tools I know how to use
> and hack and exploit can sometimes draw a blank. Like finding a
> Guinea Pig breeder on the south coast of Sydney when you need one; 1)
> there might not actually be any, or 2) there is no information about
> them on the web to be crawled. The problem is not that they haven't
> published their details in glorious Turtle.
>
> But is this stuff really the same problem as Linked Data and its lack
> of killer apps, though?

Good question. As Danny observed, the examples of compelling 
applications (I shy away from "killer app" simply because it implies to 
me that there is only one--unless you're a cat, I suppose) that have 
been mentioned so far all have to do with applying local geo data to a 
broader information base. Are there compelling uses of Linked Data that 
don't fall into that category? (Similar to the fact that all good Web 
2.0 mashups involve a map, I suppose.) Anyway, just asking questions 
here, I don't know the answer(s).
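
Just to make that category concrete, here's roughly the shape of thing I 
have in mind -- a small Python sketch that takes a *local* point and asks 
a *broader* information base what it knows nearby. Treat every specific 
in it as my assumption: that the public DBpedia SPARQL endpoint is up, 
that the resources you care about actually carry geo:lat/geo:long, and 
that the coordinates and bounding box are simply invented.

  # Hedged sketch: apply local geo data to a broader information base.
  from SPARQLWrapper import SPARQLWrapper, JSON

  MY_LAT, MY_LONG = 43.84, 10.50   # hypothetical garden/kennel location
  BOX = 0.25                       # crude bounding box, in degrees

  sparql = SPARQLWrapper("http://dbpedia.org/sparql")
  sparql.setQuery("""
      PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?thing ?name WHERE {
        ?thing geo:lat ?lat ; geo:long ?long ; rdfs:label ?name .
        FILTER (?lat  > %f && ?lat  < %f &&
                ?long > %f && ?long < %f && lang(?name) = "en")
      } LIMIT 50
  """ % (MY_LAT - BOX, MY_LAT + BOX, MY_LONG - BOX, MY_LONG + BOX))
  sparql.setReturnFormat(JSON)

  for row in sparql.query().convert()["results"]["bindings"]:
      print(row["name"]["value"])

Nothing deep there, but that's the pattern: my locale is the filter, the 
Web's data is the base. The question stands whether there are compelling 
uses that don't reduce to some variation on it.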

Lee


>>> All this data and its weak relationships are great to play with,
>>> though, and it might shape things to come, but to get the masses to do
>>> something interesting with it you need to convince them that
>>> "ontology" is even a word that deserves a place in our daily
>>> language. (And don't tell me linked data doesn't need ontologies; a
>>> kick in the shin if you do) Tough call, I'd say. If you say to them
>>> "model", they immediately reach for Toad or some RDBMS thingy. If you
>>> say "triplet" or, even worse, "tuple", they might think you're talking
>>> about raising kids.
>>
>> Kick me in the shin - ontologies are no more and no less than shared
>> vocabularies through which we can communicate.
>
> I can't kick you in the shin based on faulty reasoning or
> understanding of what I admittedly poorly wrote. :) The point was that
> Linked Data uses ontologies because, like you say, they're shared
> vocabularies. Not the most complex vocabularies, of course, but
> vocabularies or ontologies nevertheless. I doubt interchanging
> "vocabulary" with "ontology" has the slightest effect on people's
> understanding of how these things fit together, and *especially* not
> of the potential therein.
>
> What I don't understand is that people have no problem understanding
> names of elements in an XML schema, and linking that and its data
> content to records or fields in a database (which is a fuzzy
> undertaking when you get right down to it), but have huge problems
> taking a triplet or two and doing the same. There seems to be some
> cognitive mismatch happening when you introduce the tiniest third
> directional signifier. It's puzzling. Is the human brain so capable of
> one-to-one mapping that it fails at our attempts at many-to-any?
>
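
For what it's worth, the only luck I've had getting that across is with 
something tiny and concrete: two made-up sources that know nothing of 
each other, merged by nothing more than parsing them into the same 
graph. A rough Python/rdflib sketch -- the example.org URIs and the data 
are invented, and FOAF is just standing in for "a shared vocabulary":

  # The element-name -> field habit sees two unrelated documents here;
  # the triple view sees one node with more said about it.
  from rdflib import Graph

  doc_a = """
  @prefix foaf: <http://xmlns.com/foaf/0.1/> .
  <http://example.org/people/alice> foaf:name "Alice" ;
      foaf:based_near <http://example.org/places/sydney> .
  """

  doc_b = """
  @prefix foaf: <http://xmlns.com/foaf/0.1/> .
  <http://example.org/people/alice> foaf:interest
      <http://example.org/topics/guinea-pigs> .
  """

  g = Graph()
  g.parse(data=doc_a, format="turtle")   # "merging" is just parsing
  g.parse(data=doc_b, format="turtle")   # into the same graph

  for s, p, o in g:                      # one subject, three statements
      print(s, p, o)

Whether that survives contact with someone already reaching for Toad, I 
don't know.
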
>>> In other words, the technology, its promises and potential mean
>>> *nothing* when a small paradigm shift is needed.
>>
>> Despite my negative comments recently, I do think that paradigm shift
>> is happening.
>
> Where and how?
>
>
> Regards,
>
> Alex

Received on Saturday, 17 April 2010 13:12:48 UTC