Re: what is the meaning of the RDF model theory?

From: "Seth Russell" <seth@robustai.net>
Subject: Re: what is the meaning of the RDF model theory?
Date: Mon, 15 Oct 2001 04:06:12 -0700

> From: "Pat Hayes" <phayes@ai.uwf.edu>
> 
> > >Ok, I used the wrong word again.  The question I am trying to ask in the
> > >broadest terms is:  What difference will the MT make?   It seems to me
> > >that the MT is supposed to tell us what a graph ~means~
> >
> > Say 'could mean', then yes.
> >
> > >and even provides an
> > >algorithm to determine that ~meaning~.
> >
> > NO! Interpretations need not be computable. (Some of them are, but
> > that's not the point.)
> >
> > >  But this ~interpretation thingy~ can
> > >never be manifested inside a computer (can it?),
> >
> > Some can, some can't.
> 
> Which is where you lose me :(     If we are making a theory that the
> computer can use, then, methinks, being able to manifest the interpretation
> of it inside the computer is a *requirement*.   Allowing part of the model
> to be sustainable only by the ideals in a human's mind seems to me to be
> less useful.

Nope, sorry, wrong answer.  Next contestant please.  :-)

Consider the humble integers.  A theory of the integers would have several
operations, such as addition, whose interpretation is infinite and thus
cannot be manifested inside a computer.  Nevertheless, the theory of
integers is still quite useful.
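
To see the distinction concretely, here is a rough sketch (in Python, and
purely illustrative; the names are my own invention).  A finite program can
*compute* addition, but the interpretation of ``+'' is the infinite set of
all triples (m, n, m + n), of which a computer can only ever hold a finite
fragment:

    # A finite program that computes addition:
    def add(m, n):
        return m + n

    # The *interpretation* of '+' is the infinite relation
    # { (m, n, m + n) : m, n integers }.  A computer can enumerate a
    # finite fragment of it, but never manifest the whole thing:
    def fragment_of_plus(bound):
        return {(m, n, m + n)
                for m in range(-bound, bound + 1)
                for n in range(-bound, bound + 1)}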

Yes, this is not quite the point you were making but I think that it is
illustrative nonetheless.

> But I think there is one simple yet adequate ~model theory~.  Define an arc
> (or even a pencil of them, which is an s-expression) down to its fine detail
> such that it can be manifested in the computer.   Then something is in a
> model (entailed by it?) iff an arc exists in the model or can be inferred by
> the interpreter of the model.  The interpreter is just a program that
> operates only on arcs.

Sure, this is one way of proceeding, but how are you going to define
``inferred''?  You have several choices, one of which (and I happen to
think that it is the best one) is ..... a model theory.  
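
To make that concrete, here is a hypothetical sketch (Python; the predicate
and the single rule are my own inventions, not anything blessed by a spec)
of the arcs-plus-interpreter idea: arcs are triples, and ``inferred'' means
derivable by a fixed forward-chaining rule set.

    # Arcs are (subject, predicate, object) triples.  "Inferred" is
    # whatever a fixed forward-chaining rule set derives; the one rule
    # below (transitivity of a made-up "subClassOf" predicate) is only
    # an example.
    def infer(arcs):
        closed = set(arcs)
        changed = True
        while changed:
            changed = False
            for (s, p, o) in list(closed):
                for (s2, p2, o2) in list(closed):
                    if p == "subClassOf" and p2 == "subClassOf" and o == s2:
                        new = (s, "subClassOf", o2)
                        if new not in closed:
                            closed.add(new)
                            changed = True
        return closed

    # Entailment in the proposed sense: an arc is in the model, or the
    # interpreter can produce it.
    def entails(model_arcs, arc):
        return arc in infer(model_arcs)

And there is the rub: swap in a different rule set and you get a different
``inferred'', with nothing to say which rule set is the right one.  That is
precisely the question a model theory answers.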

Another choice (that is often used, but that, I think, is by far the worst
one) is to anoint a particular program as the definer of inference.
However, there are lots of problems with this approach.  First, a program
is a big thing, and for that reason, and others, is hard to analyze and
duplicate.  Second, the behaviour of programs is quite hard to define.  If
you want to define a program's behaviour via its ``behaviour in the field''
as it were, you need to define the field, and I sure don't want to have to
have a definition of Windows XP as part of the definition of RDF!  If you
want to use the formal definition of the programming language, if it has
one, then you are back to either a very complicated formal operational
semantics or to a slightly less complicated .... model theory.  

So maybe you say that we should use a simple programming language.  OK,
let's try a very simple one, the lambda calculus.  (Why not, it's Turing
complete after all.)   Drat, the theory of the lambda calculus is suitable
as the sole subject of an upper-level mathematics course---not exactly
simple after all.  
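
To see how quickly it gets deep, here is the start of the standard
Church-numeral encoding, written below as Python lambdas (a sketch of the
encoding only, not a lambda-calculus implementation).  Even ``two plus
three'' takes some head-scratching, and deciding equality of arbitrary
lambda terms is famously undecidable.

    # Church numerals: the number n is "apply f n times".
    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    PLUS = lambda m: lambda n: m(SUCC)(n)  # apply SUCC m times to n

    # Decode back to an ordinary integer, just to check:
    def to_int(n):
        return n(lambda k: k + 1)(0)

    two = SUCC(SUCC(ZERO))
    three = SUCC(two)
    assert to_int(PLUS(two)(three)) == 5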

> This is the first rough draft of this idea ... please don't laugh too loud
> if I've misused some words.

No problem with the wording, just with the foundations of your idea.  If
you want to define something you should try to do it using the simplest
tools that are adequate, not something much more complicated, and programs
are exceedingly complicated.  If they weren't, we wouldn't need
documentation for them (and I'm including meaningful names for variables as
documentation).

> Seth Russell

Peter F. Patel-Schneider
Bell Labs Research

Received on Monday, 15 October 2001 07:40:04 UTC