Re: A Single Foundational Logic for the Semantic Web

> Sandro Hawke writes:
> > > [Pat Hayes]
> > > None of this [stuff about programming] has anything to do with what
> > > the RDF/DAML/OWL development effort is about, seems to me.
> > 
> > That statement is both outrageous and totally understandable.  We're
> > arguing with some pretty ambiguous terms here.   I'll try to be more
> > precise; stop me when I go wrong.  (like I have to say that....)

Perhaps I should have said: Pat, you're right that the DAML/OWL effort
is not about programming; that's why I put them at Layer 3 [1].  RDF,
however, is going to be very useful for the programming side of
things.  (Some OWLers might think it's best left entirely for the
programming side of things, I suppose.)  My good-but-imperfect
understanding of the Semantic Web effort suggests that programming is
important, but let me spell that out more clearly below.

Peter F. Patel-Schneider writes:
> To me, Pat has hit the nail squarely on the head here (and, conversely,
> Sandro is making no sense to me).
> 
> If what you want is a universal computational mechanism, and, moreover, one
> that no-one will read directly, then any Turing-complete computational
> mechanism will work, be it Post production systems, the lambda calculus,
> the Java virtual machine (or whatever it is called), or even deduction in
> first-order logic.  Taken in another way, if this *is* what you want, then
> there is absolutely no reason whatsoever to use deduction in some logic
> that can encode the operations of Turing machines.  You may as well use a
> nice (for both humans and non-humans, if this is possible) programming
> language.  Computer scientists, and, especially, designers of programming
> languages, spend a lot of time on this issue.
>
> If, however, you want to represent information, and, perhaps, even transmit
> that information to other computational devices (including both human and
> non-human computational devices), then you are in a very different world.
> In this better world, computational adequacy is no longer the metric to
> use.  Instead some version of representational adequacy is much preferable,
> tangled up with computational issues.  Logicians, and, hopefully, designers
> of knowledge representation formalisms, spend a lot of time on these issues.

I think both worlds ought to exist on the same Semantic Web.  What we
might call the "programmable" Semantic Web (layer 2) will provide a
way for many flexible knowledge representation formalisms (in layer
3) to be widely deployed and usable.  Without it, such formalisms
will see much less use.  The basic information formalism of RDF
(layer 1) is good enough to support layer 2.

From your characterization of the programmable Semantic Web, and the
divide between the two worlds, I clearly haven't explained it well.
In particular, I haven't explained why it should be programmed using
any non-traditional techniques or have any KR slant.  I'm afraid I
have to
start at the beginning....

In my view, the Semantic Web is a global distributed database with no
guarantee of system-wide coherence.  It works by information providers
making available serializations of RDF statements, tagged as being
believed by some party at some point in time.  Information consumers
receive or harvest these tagged serializations and use them to make
decisions (taking into account the author and timestamp information).
Many parties will be both provider and consumer, of course.
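
To make that unit of exchange concrete, here is a rough sketch in
Python of what one of those tagged bundles might look like.  The
class and field names (and the Dublin Core example statement) are
purely illustrative, not a proposal for an interchange format.

    # A bundle of RDF statements tagged with who asserted them and when.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import FrozenSet, Tuple

    Triple = Tuple[str, str, str]        # (subject, predicate, object)

    @dataclass(frozen=True)
    class TaggedGraph:
        statements: FrozenSet[Triple]    # the serialized statements
        believed_by: str                 # URI of the asserting party
        asserted_at: datetime            # when that party asserted them

    bundle = TaggedGraph(
        statements=frozenset({
            ("http://example.org/report",
             "http://purl.org/dc/elements/1.1/creator",
             "Alice"),
        }),
        believed_by="http://example.org/alice",
        asserted_at=datetime(2002, 5, 1, tzinfo=timezone.utc),
    )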

One particularly interesting kind of provider-and-consumer lives in a
little box, interacting only with the Semantic Web, gathering
statements and issuing new ones.  This box is rather like the
state-machine part of a Turing machine, only instead of reading and
modifying an infinite tape, it reads and modifies an RDF Graph.
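
In code, the interface of such a box is tiny.  Roughly (Python just
for illustration, names hypothetical):

    # A box's only interface to the world is the graph of statements it
    # has gathered; its only output is a new batch of statements.
    from typing import Callable, Set, Tuple

    Triple = Tuple[str, str, str]
    Graph = Set[Triple]

    # A box maps the statements it can see to the statements it chooses
    # to issue, much as a Turing machine's control maps what it reads on
    # the tape to what it writes back.
    Box = Callable[[Graph], Graph]

    def step(box: Box, graph: Graph) -> Graph:
        """One interaction: show the box the graph, merge what it issues."""
        return graph | box(graph)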

One can imagine many kinds of boxes like this, perhaps performing RDFS
inference, computing the number for the final box in a mortgage
interest calculation, or deciding whether to grant read-access to some
web page.  Each of these is a computer program which looks at SemWeb
data and adds more.  They view the SemWeb as a database, and they
participate just like everyone else: they release serializations of
RDF statements, tagged as believed by them at some point in time.
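
As a toy instance of the first kind (in the box shape sketched above),
an RDFS-flavored box might look like this.  It's purely illustrative;
real RDFS entailment involves more rules than this one.

    RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
    RDFS_SUBCLASS_OF = "http://www.w3.org/2000/01/rdf-schema#subClassOf"

    def rdfs_type_box(graph):
        """Issue the rdf:type statements immediately entailed by the
        rdfs:subClassOf statements already in the graph."""
        new = set()
        for s, p, o in graph:
            if p == RDF_TYPE:
                for c, q, superclass in graph:
                    if q == RDFS_SUBCLASS_OF and c == o:
                        new.add((s, RDF_TYPE, superclass))
        return new - graph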

It would be very convenient if these boxes didn't have to be servers
running in a warehouse somewhere.  That might work if they had a
good way to find out about new facts so they could add their output
quickly, but
we'd really like them to behave rather differently.  We'd like them to
(1) be virtualized and mobile, instantiated in software near any user
who needs the information they can provide, and (2) only provide the
data needed to answer queries, not fill the web with unwanted (but
true) conclusions.

The first goal can be met by building a "universal" box, one which
reads from the Semantic Web not only the input for some other
("virtual") box, but also a sufficient description of that box.  Given
these, it emulates the virtual box and produces the same output.
Everyone can have at least one of these universal boxes running on
their computer or a nearby server.  This universal/virtual step may
look bizarre to some people, but of course it's the same as imagining
a universal Turing machine or building a [ gasp!  :-) ] *programmable*
computer.
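
To show there's no magic in the universal/virtual step, here is a
small sketch under one simplifying assumption: that a virtual box can
be described as a set of if-then rules over triple patterns.  That is
only one possible description language among many, and everything
here is hypothetical.

    # A pattern is a triple whose terms may be variables (written "?x").
    def match(pattern, triple, bindings):
        """Match one pattern against one concrete triple, extending
        `bindings` if it fits, or returning None if it doesn't."""
        b = dict(bindings)
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                if b.get(p, t) != t:
                    return None
                b[p] = t
            elif p != t:
                return None
        return b

    def solve(patterns, graph, bindings):
        """Yield every variable binding satisfying all the patterns."""
        if not patterns:
            yield bindings
            return
        head, rest = patterns[0], patterns[1:]
        for triple in graph:
            b = match(head, triple, bindings)
            if b is not None:
                yield from solve(rest, graph, b)

    def universal_box(description, graph):
        """Emulate the virtual box `description`, a list of rules of the
        form (premise_patterns, conclusion_pattern), over the graph."""
        issued = set()
        for premises, conclusion in description:
            for b in solve(premises, graph, {}):
                issued.add(tuple(b.get(t, t) for t in conclusion))
        return issued - graph

    # Example: a one-rule virtual box, fetched as data and then emulated.
    uncle_box = [([("?x", "ex:parent", "?y"), ("?y", "ex:brother", "?z")],
                  ("?x", "ex:uncle", "?z"))]
    facts = {("ex:ann", "ex:parent", "ex:bob"),
             ("ex:bob", "ex:brother", "ex:carl")}
    print(universal_box(uncle_box, facts))
    # -> {('ex:ann', 'ex:uncle', 'ex:carl')}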

For the second problem, it helps to think of boxes-generating-
statements as inference rules firing.  Letting them run at will is
like forward chaining, and we solve the unwanted-information-glut
problem by using them in backward chaining.  When we ask, "What number
is in this last box of my mortgage interest calculation?" our nearby
universal box finds and instantiates the virtual box that can answer
our question and runs it just long enough to do so.
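
As a toy illustration of that on-demand behavior (the box name, the
registry, and the loan numbers are all made up for the example), the
mortgage box only ever runs because someone asked:

    def monthly_payment(principal, annual_rate, years):
        """Standard amortization formula for a fixed-rate loan."""
        r = annual_rate / 12          # monthly interest rate
        n = years * 12                # number of payments
        return principal * r / (1 - (1 + r) ** -n)

    def answer(query, boxes):
        """Instantiate and run only the virtual box this query needs."""
        box, args = boxes[query]
        return box(*args)

    boxes = {"ex:finalPaymentBox": (monthly_payment, (200_000, 0.065, 30))}
    print(answer("ex:finalPaymentBox", boxes))   # computed only when asked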

Of course the input to one virtual box might only be generated by
another virtual box.  If you chain a lot of them together, and break
each complex box down into very simple boxes, it sounds a lot like
logic programming.
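
To see the resemblance, here is a minimal backward chainer over the
same kind of triple rules: it works backward from a query, applying
rule descriptions only as needed, and chaining them whenever one
box's premise is another box's conclusion.  Again, this is a sketch,
not a proposed language.

    import itertools

    _fresh = itertools.count()

    def walk(term, bindings):
        """Follow variable bindings to a constant or a free variable."""
        while term.startswith("?") and term in bindings:
            term = bindings[term]
        return term

    def unify(a, b, bindings):
        """Unify two triple patterns; variables may occur on either side."""
        env = dict(bindings)
        for x, y in zip(a, b):
            x, y = walk(x, env), walk(y, env)
            if x == y:
                continue
            if x.startswith("?"):
                env[x] = y
            elif y.startswith("?"):
                env[y] = x
            else:
                return None
        return env

    def rename(rule):
        """Give a rule fresh variable names so repeated uses don't clash."""
        n = next(_fresh)
        def fresh(term):
            return f"{term}#{n}" if term.startswith("?") else term
        premises, conclusion = rule
        return ([tuple(map(fresh, p)) for p in premises],
                tuple(map(fresh, conclusion)))

    def prove(goal, rules, graph, bindings, depth=8):
        """Yield bindings under which `goal` holds, working backward from
        the goal rather than forward from the facts."""
        if depth < 0:
            return
        for fact in graph:                    # already stated on the Web?
            env = unify(goal, fact, bindings)
            if env is not None:
                yield env
        for rule in rules:                    # or derivable by some box?
            premises, conclusion = rename(rule)
            env = unify(goal, conclusion, bindings)
            if env is not None:
                yield from prove_all(premises, rules, graph, env, depth - 1)

    def prove_all(goals, rules, graph, bindings, depth):
        """Prove a conjunction of goals, threading bindings through."""
        if not goals:
            yield bindings
            return
        for env in prove(goals[0], rules, graph, bindings, depth):
            yield from prove_all(goals[1:], rules, graph, env, depth)

    # Ask who has an uncle -- the rule fires only because we asked.
    rules = [([("?x", "ex:parent", "?y"), ("?y", "ex:brother", "?z")],
              ("?x", "ex:uncle", "?z"))]
    facts = {("ex:ann", "ex:parent", "ex:bob"),
             ("ex:bob", "ex:brother", "ex:carl")}
    for env in prove(("?who", "ex:uncle", "?unc"), rules, facts, {}):
        print(walk("?who", env), walk("?unc", env))   # ex:ann ex:carl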

I'm not saying we have to use Prolog, though.  When a virtual box
wants to do something very complicated, perhaps it would be more
efficient to provide some machine code which has the same effect;
that's fine with me.  It might also be easier for some people to use a
procedural language to express the behavior of the box they want;
that's also fine with me.  We can translate, or perhaps deploy a
procedure-oriented universal box.

At this point I'm only arguing that building a universal box is
possible and useful.  Later, we can try to build one that's fast and
easy to use.   (Later, but the sooner the better, of course.  There
may not be many deployment windows.)

It may turn out that a Java virtual machine, with some installed
chaining and SemWeb access framework, is an excellent universal box,
but I think building one based on Prolog is probably a lot easier.
(The Java approach is absolutely workable though.  I worked in that
space for years before deciding a Prolog approach was easier.
Javaspaces [2] and the Observer pattern [3] are steps in this
direction.)

I hope you now understand better where I'm coming from.  I don't know
if you care about the design of the universal box. I don't know if you
agree with my assertion (which I haven't yet tried to justify) that
the universal box has a better chance of wide deployment than many
logics.  It doesn't matter all that much if we go separate ways for
now.  (I can imagine conflicts in RDF Core, like Dark Triples.   We'll
see.)  I am still very interested in your ongoing critique of my
proposed formal semantics for the universal box (in a neighboring
thread), for which I hope to provide new grist shortly.

    -- sandro

[1] http://lists.w3.org/Archives/Public/www-rdf-logic/2002Apr/0045.html
[2] http://java.sun.com/products/javaspaces/
[3] http://c2.com/cgi/wiki?ObserverPattern

Received on Wednesday, 1 May 2002 12:10:04 UTC