Re: Interpretation of RDF reification

I find this very interesting, but also a bit worrying.
i. I find it incredibly interesting because, many years ago (twenty,
perhaps), I shared my flat with someone who was studying Grice. I was
interested enough to make sure I had a copy of every paper he had
written, some of which are a bit obscure. My friend pointed out at the
time that Grice was an off-the-beaten-track figure, and so it seems he
remained, at least until recently. I don't mean to imply that this is
where I think his work should be, far from it. I know how ambiguous
email communication can be, but I have always been intrigued by his
work. I had thought there wasn't a formalism capable of capturing his
ideas sufficiently for machine exchanges.
ii. I have read, and I think treasured, Knowledge Representation. And,
indeed, there is a bibliographical reference to Grice in it.
iii. But not finding Grice mentioned in the SemWeb efforts, I had
assumed his work was either irrelevant or subsumed in this effort; if
not impossible to incorporate, then more likely the former.
iv. I understand that things can go in circles in the AI world, as was
implicitly mentioned in John's post, and people in the university
department I used to work in (Greenwich University) said much the
same. I suppose that, basically, a good idea may not have been fully
fleshed out, and different implementations have implications for its
viability.
v. This is the worrying bit. John has said that

> RDF and OWL are too limited, clumsy, and inefficient to support
> any serious work in knowledge representation and reasoning.
>
I don't know if this is true, and I don't know what constitutes
"serious work", because use cases are so thin on the ground. John
hints that large models make Protege choke, and I assume it is the
reasoning chains in the model, rather than the number of elements per
se, that are the choke points.
Now, to expand on my concerns: there is one well-articulated use case
(in the sense of how to use it, not whether it will be used; more on
that in a bit), that of workflow modelling, with an example in a
well-funded EU project working in this area, WSMO. But that project
uses another specialised language into which OWL can be translated,
not OWL itself.
This may be important in that I had thought the reason for RDF and OWL
was not so much to achieve what couldn't be achieved by other means as
to achieve it in an open language, where the key would be the extent
of that language's adoption.
Just before I go further, perhaps there are some flawed assumptions
here. Once again, I had thought that the analogy might be with the
rise of Java: although Smalltalk is much like Java, and one might say
"better" than Java, it failed to gain acceptance due to its marketing,
and the desirable outcome for the SemWeb would be to follow a
trajectory similar to Java's. Although obviously not Open Source, Java
was made completely (or sufficiently) available to gain wide adoption.
So, to gain acceptance, open-source transparency and availability is a
good thing. But this does presuppose that the language in question
will otherwise cut the mustard. (I know there are other precedents, in
particular and notably the Apache Web Server, but you get the idea.)
So, perhaps the flawed assumption is that there should be just one language
or language set that does for the SemWeb?
vi. However, judging from the level of activity on the mailing lists
for WSMO-related issues, this, at least, has a long way to go before
any sort of wide acceptance. I am not sure that this is because of the
limitations of OWL (or of WSMO's own variant), nor even because of the
obvious fact that a further language fragments the potential user
group; that may not apply here anyway. What I think is happening is
that the technology has not yet shown itself to be sufficiently
compelling.
vii. So how does a technology prove itself in this way? There is a
tension between what can be demonstrated and what potential users are
prepared to contemplate by way of adoption. This is complicated by a
number of things. What are we trying to do here? It can't be just to
promote a single-language solution, but rather to enable various sorts
of reasoning across disparate data pools, to determine the preferred
design for those pools, to cope with non-conformant pools, and to
offer an open means of achieving these ends (this list isn't intended
to be comprehensive).
But the thought remains that there should be a single language for
this, since that reduces duplication of effort.
Again, a compelling application might be persuasive, but then that
application would have to be used to be compelling, that is, to have a
real user base. And there we are looking at another area of
complication.
For instance, the database that Dan wishes for has been implemented in
several forms for RDF. These are already compelling applications,
although not enough on their own to make a semantic application, as
the schema, data, and queries are also required.
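To make that concrete, here is a minimal sketch, in Python with the
rdflib library, of the point that a triple store only does useful work
once schema, data, and a query are all supplied. The ex: namespace and
the workflow fact are hypothetical, invented purely for illustration:

    # Sketch of the "schema + data + query" point, using rdflib.
    # The ex: namespace and the data are invented for illustration.
    from rdflib import Graph

    g = Graph()

    # Schema and data together, in Turtle: a tiny class hierarchy
    # plus one instance-level fact.
    g.parse(data="""
    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/> .

    ex:Workflow rdfs:subClassOf ex:Process .
    ex:orderHandling rdf:type ex:Workflow .
    """, format="turtle")

    # The query: without it the store merely holds triples.
    results = g.query("""
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX ex:  <http://example.org/>
    SELECT ?w WHERE { ?w rdf:type ex:Workflow . }
    """)

    for row in results:
        print(row.w)  # prints http://example.org/orderHandling

The store itself is the easy part; the value lies in the schema, the
data, and the queries written against them.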
So, in sum, have I narrowed it down? Is the issue that, were a more
expressive language used, there would be more people in the user
community at work on schema, data, and queries utilising that
language?
Is there a particular application that would show the difference
between the two languages and prove a compelling case for would-be
adopters?
Is sufficient regard paid to the distinction between different types
of possible SemWeb application? I have mentioned one, and several
others are mentioned on this list as well as in this thread. In
particular, there seems to be a sharp distinction between, say, a
desktop application that relies on markup to decorate underlying
content and on P2P to discover info nuggets, and the building of a
large and comprehensive ontology in the field of medical discovery.
Or again, as I mentioned, an ontology of process that comprehensively
handles workflow.
Why should it be the same language in all cases?
Is it just to do with a paucity of alternative tools and the desire not to
duplicate effort?
But if the tools are for the "wrong" language, this simply isn't good
enough, is it?
This in turn will have implications for funding efforts and hopes of success
for different projects, so I think the issues should be considered very
seriously.
Sincerely,
Adam Saltiel
On 26/03/06, Dan Brickley <danbri@danbri.org> wrote:
>
>
> * John F. Sowa <sowa@bestweb.net> [2006-03-26 09:03-0800]
> > Dan,
> >
> > Common Logic is still in the FCD stage at ISO.  Some
> > people are developing tools based on it, but I don't
> > know of any that are commercially available right now.
> >
> > DB> Sounds good to me. Where can I download a Common Logic
> > > database to play with? I've a few legacy files I'd like
> > > to import...
> >
> > However, the Common Logic Interchange Format (CLIF) has
> > a large overlap with the Knowledge Interchange Format (KIF),
> > which has been widely used since the mid 1990s.  Almost
> > anything done in KIF can be moved over to CLIF with minor
> > modifications (and some with no modification at all).
> >
> > I know a couple of groups who have used Prolog to translate
> > RDF and OWL into other formats.  At VivoMind, we have used
> > Prolog to handle very large RDF and OWL files that cause
> > Protege to choke.  Prolog is lightning fast for reading
> > such files and converting them to other formats (such as
> > CLIF and CGIF).  Our tools for using the results, however,
> > are still in alpha and beta stages.
> >
> > One of the primary tools we have been developing at VivoMind
> > is the Flexible Modular Framework (FMF).  See the following
> > paper for the basic idea:
> >
> >    http://www.jfsowa.com/pubs/arch.htm
> >
> > Our plans are to make an open source version of the FMF
> > freely available and to make a business of developing and
> > licensing modules to plug into the FMF.  However, we are
> > still working on the commercial version.
>
> Thanks for the pointers, I'll take a look around.
>
> It sounds like we're at an awkward stage; RDF and OWL are
> 'legacy' (from your perspective) but the tools to properly exploit
> CLIF aren't quite ready to hit the mainstream yet (opensource or
> not). I appreciate that KIF got some traction, but it didn't win
> enough hearts, minds and budgets to stop XML taking over the world. Is
> CLIF expected to occupy a bigger niche?
>
> Having FMF opensourced sounds like a good step towards bridging
> that gap...
>
> >
> > The short answer to your question is that I don't know
> > anybody who develops stuff to play with, and the people
> > I do know are up to their ears in work from paying
> > customers.
>
> Ah, I think Topic Maps suffered from that too. All work and no play...
>
> If Common Logic is to go mainstream, and to really make RDF and OWL
> (and XML?) obsolete, getting multiple opensource toolkits out there
> will be a big part of that. If a would-be paying customer says to me
> "hey, I heard RDF is kinda 70s, shouldn't we be using Common Logic
> instead of RDF, OWL and SPARQL?", I might agree in the abstract, but
> the tool and market situation isn't quite there yet. Well, it depends
> what the problem is. If the problem isn't heavily tied up with
> wide-area data sharing, CL tools might still solve the problem better
> than RDF/OWL ones, even if they're relatively obscure.
>
> I'm entirely comfortable with the idea that CL may turn out to be the
> next great leap forward, but there's got to be work from someone on
> getting tools into the playful and creative hands of ordinary tech
> developers if that's going to happen. And you're right; paying for
> that effort isn't going to happen by magic...
>
> Dan
>
>
> > John
>
>
