
Re: What is an RDF Query?

From: pat hayes <phayes@ai.uwf.edu>
Date: Tue, 11 Sep 2001 18:40:38 -0500
Message-Id: <v04210100b7c448500bf7@[]>
To: Sandro Hawke <sandro@w3.org>
Cc: "Peter F. Patel-Schneider" <pfps@research.bell-labs.com>, www-rdf-rules@w3.org
> > > Rather than go into a lengthy reply, can I just say "layering" and
> > > point out that I said "LISP syntax" (atoms and dotted pairs) not "the
> > > LISP programming language" (with lambda and everything).  Is that
> > > enough?  RDF clearly can't talk about the color of my dog without us
> > > defining some terms (that is, extending the language), and the same
> > > goes for talking about rules, queries, ontologies, schemas, and (if
> > > we're as pure as I think we should be in defining the bottom layer)
> > > bags, sequences, statements, and types.
> > >
> > >     -- sandro
> >
> > Certainly, you can say ``layering'' and ``LISP syntax'', but that doesn't
> > explain how it would work.
> >
> > The beauty of LISP is two-fold: 1/ a simple syntax, and 2/ an elegant
> > programming language.  If you take away the programming language, then the
> > syntax is not nearly as useful.
>I would love to have at least Horn logic in "standard" RDF, but I'm
>guessing it's not going to happen.  So we have layered standards.  One
>standard specifies the simple syntax, and some others, layered on top of
>it, specify how to convey information about particular domains in that
>base syntax.

That doesn't seem to make sense. The syntax itself conveys no 
information at all; any language conveys information by virtue of its 
semantics. What is the sense of 'layering' that allows the base 
syntax to convey more than can be said in its own 'base' semantic 
theory? If you are writing RDF, what you mean is what is specified by 
the RDF semantics.  If you want to say more than that, you have to 
use some other language (which may be an extension of RDF, in the 
sense that FOL is an extension of propositional logic; but it is 
still a different language.). No amount of 'layering' is going to 
make the RDF syntax mean anything more than it means already.  Even 
if you manage, by some notational miracle, to 'use' the RDF syntax or 
data model to encode your new language, the result still isn't RDF; 
it just *looks* like RDF, but it *means* something different. That 
introduces ambiguity deliberately as a design strategy, which seems 
like a bad idea to me.

>What's interesting is how the upper layers can be developed
>independently and still be (mostly) interoperable.
>Pantone might define an ontology for colors, and the AKC might develop
>one for breeds of dogs, and I can publish RDF information saying my
>dog is an Akita (a term from the AKC vocabulary) who is Black and
>White (terms from the Pantone vocabulary).

You are now talking about a different matter, which is combining 
concepts from several different *ontologies*, ie different 'knowledge 
bases'. But the very fact that you can combine them presumes that 
they are written in the same *language*, or at least in compatible 
languages. If one is written in, say, XML and the other in, say, KIF, 
then what you say is wrong: you *couldn't* publish information about 
a black/white Akita, since there is no language to say that in (until 
someone invents a blend of XML and KIF). No parser could make sense 
of it.
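The point can be sketched concretely. Assuming hypothetical namespace URIs (neither Pantone nor the AKC actually publishes these), Sandro's mixed-vocabulary statements are just triples in one shared syntax; the combination works only because both vocabularies use the same triple *language*:

```python
# A minimal sketch (hypothetical URIs) of mixing two vocabularies in one
# set of RDF-style triples, as in the Akita example. The statements combine
# only because both vocabularies share the same triple syntax.
AKC = "http://example.org/akc#"          # hypothetical AKC namespace
PANTONE = "http://example.org/pantone#"  # hypothetical Pantone namespace

triples = {
    ("#myDog", "rdf:type", AKC + "Akita"),
    ("#myDog", PANTONE + "color", PANTONE + "Black"),
    ("#myDog", PANTONE + "color", PANTONE + "White"),
}

# Any RDF parser can store these triples; what 'Akita' or 'Black' *means*
# is fixed by the vocabularies' semantics, not by the syntax itself.
colors = {o for s, p, o in triples if p == PANTONE + "color"}
print(sorted(colors))
```

Had one vocabulary been published in, say, KIF syntax instead, no single parser could even build the combined set.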

>Of course, in LISP you can load libraries from different providers and
>generally use the functions on the same data structures, etc.  This
>kind of modularity is not new or terribly special.   RDF is more
>draconian than most programming language systems in keeping identifier
>spaces from colliding and in saying the "core" has very, very little
>functionality.   (But then, what can you write in C without using any
>libraries?   Can you do any I/O?)

Sorry to shout, but the point seems to need emphasizing.

>It seems very natural to put the vocabularies for things like colors
>and breeds of dogs "above" the language, and I think it's a good goal
>for world-wide standards to put as much as possible above the language.

I have no idea what you are talking about. Can you expand on this 
idea of "above" ?

> I would not have chosen to put logic (and numbers) above
>the language, and I'm still wondering what kind of a system we can
>really build if we do so.
>There are so many usage scenarios.....
>I want to be able to say "a" and "a implies b" and know that the
>receiving agent will infer "b".

Interesting point. I don't think that you can possibly know that they 
WILL make any inference, no matter how obvious it is. What you should 
be able to say is that they COULD make that inference.
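The COULD/WILL distinction can be made concrete with a toy forward chainer; this is a sketch of the logical situation, not of any particular agent's behavior:

```python
# A toy forward-chaining step illustrating the point above: the rule set
# *licenses* the inference of "b" (the agent COULD derive it), but nothing
# about the language forces a receiving agent to actually run the closure.
facts = {"a"}
rules = [("a", "b")]  # read: a implies b

def closure(facts, rules):
    """Apply modus ponens repeatedly until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# An agent that chooses to infer COULD derive "b"...
print("b" in closure(facts, rules))  # True
# ...but an agent that merely stores the assertions never WILL:
print("b" in facts)  # False
```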

> If we can't know that with some kind
>of likelihood, maybe there's not much point to it, and we should just
>say "a" and "b" in the first place.
>Anyway, we can pick a few ontologies, including ones for numbers and
>first-order-logic (and maybe description logics and DAML research
>groups :-) and call that "The DAML Standard Set of RDF Ontologies
>v0.1" (or just "DAML") and hope that's what ends up being implemented

That doesn't make sense. An ontology is a set of assertions *in a 
language*. What language are you going to write the 'ontology for 
first-order logic' in? And what would it *say* about first-order 
logic, in any case? (Would it define the model theory?)

> > The situation is much different with less-expressive representational
> > systems, like RDF.  In such systems, there is no possibility of
> > implementing proof theories within the system itself.
>I don't know what "implementing proof theories" means, though I've
>heard the term a few times.  I'd be grateful for a brief explanation
>and/or pointers.   (I suspect I have the notion without quite knowing
>the term being used for it.)

Read it as meaning 'implementing inference systems'. CWM would be an example.

> >                                                  (If, however, you are
> > proposing an extension to RDF, that would be different.  Of course, an
> > extension needs a lot more than just a syntax.)
>To be clear to a fault: I don't believe anyone can do anything in RDF
>without defining new terms out-of-band.  (Dan Connolly suggests on irc
>two useless exceptions: "deduce A from (and A B), and deduce (exists
>(?x) (p ?x o)) from (p s o)".)
>Defining new terms in RDF is identical to extending the language.

I don't think it is. First, it is impossible to define new terms in 
RDF; but it is possible to define new terms in RDFS (in a rather weak 
sense of 'define'), and that is not an extension to RDFS (at least in 
the sense we are talking about here.)
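The weak sense of 'define' at work here can be sketched. Asserting rdfs:subClassOf links constrains a term's extension without giving necessary and sufficient conditions; one of the resulting RDFS entailments is the transitive closure of subClassOf (class names below are purely illustrative):

```python
# A sketch of the weak sense in which RDFS 'defines' terms: subClassOf
# assertions constrain a class without fully defining it. Computing the
# transitive closure upward is one of the entailments RDFS licenses.
subclass_of = {
    ("Akita", "Dog"),
    ("Dog", "Mammal"),
}

def superclasses(cls, pairs):
    """All classes entailed to be above cls via rdfs:subClassOf chains."""
    result = set()
    frontier = {sup for sub, sup in pairs if sub == cls}
    while frontier:
        result |= frontier
        frontier = {sup for sub, sup in pairs if sub in frontier} - result
    return result

print(sorted(superclasses("Akita", subclass_of)))  # ['Dog', 'Mammal']
```

Nothing here says what an Akita *is*; the assertions merely narrow down what the term can denote, which is all the 'definition' RDFS provides.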

>Alas, I'm not qualified to do more than wonder about the theory mess
>this creates.
>If we had a more expressive language, we could define some terms
>inside the language, which would be nice, but we can't.

Surely the moral should be that we should use a more expressive 
language. There are plenty available. (Exactly how expressive is 
admittedly a matter for reasonable debate, but the sweet spot for 
machine utility and human usefulness is almost certainly nearer to 
DAML+OIL than to RDF, no matter what your criteria are. If you are 
chiefly interested in proof-checking rather than proof generation, as 
Tim B-L seems to be, then the sweet spot is probably the other side 
of FOL, somewhere in type theory.)

>The question
>is, would it really matter, since we could never define colors and
>breeds of dogs in the language, and that's what we really want to talk
>about anyway?

We can't *define* them in the sense of giving necessary and 
sufficient conditions, but we can *describe* them well enough to do 
some useful inferencing. Really, we can, nothing exotic or 
challenging about it, it's old technology, widely deployed. All that 
needs to happen is that the W3C needs to actually learn about this 
stuff instead of re-inventing wheels with their spokes missing.

>(Again, there are so many different things people want to do with the
>semantic web.....   Sorry for rambling.)

Fine. Sorry for being so direct in my replies.


(650)859 6569 w
(650)494 3973 h (until September)
Received on Tuesday, 11 September 2001 19:40:43 UTC
