Re: A Single Foundational Logic for the Semantic Web

> >-----Original Message-----
> >From: Sandro Hawke [mailto:sandro@w3.org]
> >Sent: Monday, 29 April, 2002 21:39
> >To: Pat Hayes
> >Cc: www-rdf-logic@w3.org
> >Subject: Re: A Single Foundational Logic for the Semantic Web 
> >
> >
> >> >At best, certain communities can agree on some logic
> >> >(eg DAML+OIL) and their software can interoperate.  In some cases,
> >> >mappings can be made between logics, but it's probably wishful
> >> >thinking to expect much from that.)
> >> 
> >> No, I think that is an exact mirror of the human condition, and 
> >> inevitable in a global forum. People talk different languages, but 
> >> manage to get by using patchy translations.
> >
> >Well, no.  Actually, pretty much everybody on the web speaks the same
> >IP, TCP, HTTP, and HTML.  It's amazing.
> >
> >If you want global coherence on the human experience, yes, of course
> >you're right.  Some of us only want global coherence among our
> >computerized information systems, which is perhaps a more modest goal.
> >
> 
> But the point of the Semantic Web is precisely to capture the *human
> experience* in formalised semantics, certainly not a modest goal, which
> is why it is proving to be so difficult.

Hm, I don't think it's possible to "capture the human experience".  I
think at best you can express knowledge using some language which you
assume others to understand-as-you-do.  When you see others behave as
if they understood your expressions of knowledge, you gain confidence
in your assumption.  The "others" you observe are both people and
machines.  When you write a computer program and it does what you
expected, you start to think you can write in a language the computer
can understand.

For the Semantic Web, we'll try to leverage off a lot of natural
language (including this e-mail, I suppose) and mathematical techniques
to jump-start the process of getting people and software systems to
use nearly-the-same languages.

The goal is to get knowledge from many people's minds, through many
computers, to many other people's minds & computers.  If you want to
offer your car for sale on the Semantic Web, there should be a
language (ontology) for talking about cars and things for sale, so
when people ask about a car for sale, or car sales in general, or
sales in general, or you, or your car, etc, etc, they have a much
better chance of getting useful knowledge than they do on the current
web.  (Searching google for "ziv@unicorn.com car sale" might work if I
knew that's what I was looking for, but the information on HTML pages
cannot be directly aggregated to tell me about all car sales in an
area, etc.)
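
To make that concrete, a car-for-sale statement might look something
like this in N3 (the sale: and car: vocabularies here are invented for
illustration; no real ontology is being assumed):

```n3
@prefix sale: <http://example.org/forsale#> .
@prefix car:  <http://example.org/cars#> .

<mailto:ziv@unicorn.com> sale:offers [
    a car:Car ;
    car:make "Honda" ;
    sale:askingPrice "5000"
] .
```

A query for cars for sale, or for anything ziv@unicorn.com offers,
could aggregate statements like these directly, which the information
on HTML pages doesn't allow.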

> Ontologies and assertions
> are going to contain a tremendous amount of comprehension/cultural
> assumptions, just as human languages do, and we are going to end up
> relying on patchy translations. Yes, everyone on the Web speaks the same
> HTML (well, not quite, but let's ignore that for the moment) and that
> works because the semantics of HTML are so limited and so very fixed --
> all they do is tell you how text should appear on a page. The Semantic
> Web vision is to take a leap to an entirely different dimension, in
> which both syntax and semantics will become variable parameters
> controlled by users whenever they compose ontologies. This is
> incomparable to HTML et al.

Agreed.

> 
> > >So the layering looks like this:
> > >
> > >    Layer 3: Any logic for which an effective inference procedure is
> known
> > >    Layer 2: A Turing-Equivalent Logic (such as TimBL's swap/log [1])
> > >    Layer 1: RDF (Pat's MT, more or less)
> > >
> 
> [large amount of text snipped out here]
> 
> >
> >I would also argue with your phrasing, "All the reasoning engines
> >would work fine."  Yes, all the reasoning engines would conform to
> >their formal specifications, but they would of course not work "fine"
> >in the sense any decent programmer or paying customer would use the
> >word.  Rather they would conform to a lousy incompletely-thought-out
> >specification.  Lousy specifications are not new, and thinking things
> >out completely is often impossible [halting problem!].  Much of the
> >software development process is about debugging the specification
> >itself (if you even have one) and turning it into something which can
> >be implemented as a useful system.  The paradox red flags are people
> >saying "you'll never be able to implement this (as a useful system)"
> >which is wonderfully helpful at this early stage, if they're right.
> >
> >In my layering scheme, paradoxes in a layer 3 logic would lead one to
> >be unable to write a correct/useful layer 2 inference procedure.  I
> >imagine the actual failure modes might vary in frequency and severity,
> >like many other software bugs.  Certainly if you knew the logic had a
> >paradox, you'd want to steer your billions of dollars per second far
> >away from it, but you might still play chess against programs using it.
> >
> >     -- sandro
> 
> No. Again, a false analogy with familiar computer-programming situations
> is being asserted here. You make it seem as if paradoxes within a
> logical system are the equivalent of bugs in a software program. And we
> all know that bugs are unpleasant, can cause expensive losses, should be
> vigilantly guarded against, etc etc yada-yada-yada, but we also know
> that in real life situations work-arounds can easily be found for most
> of them and they can be isolated/ignored to some extent, because a bug
> in one part of a program usually does not imply that other parts/outputs
> of the program are problematic. What you are saying above is that
> logical paradoxes have the same status. But they do not -- this is an
> entirely different kettle of fish.
> 
> To illustrate the point, consider the famous story surrounding the
> Russell paradox. Legend has it that Frege laboured for years to bring
> the set theory he was promulgating to the point where he felt confident
> enough to publish it in a massive tome. Russell, reading through an
> advanced copy of the manuscript, discovered the paradox bearing his name
> and promptly informed Frege of this. Now, did Frege react to Russell's
> paradox by saying 'Oops, that is a nasty bug, but I will publish the
> theory anyway with a bug warning in an appendix for hackers to work
> around until I can come up with a patch.'? No. He stopped the presses
> and repudiated the entire theory. That is because a logical paradox is
> not like a bug in a program -- even one logical paradox in a logical
> system is sufficient to bring the entire thing to a crashing halt, and
> one must toss it all away. 
> 
> So your statement that 'actual failure modes might vary in frequency and
> severity, like many other software bugs.  Certainly if you knew the
> logic had a paradox, you'd want to steer your billions of dollars per
> second far away from it, but you might still play chess against programs
> using it' is extremely wrong. The actual failure modes would not vary --
> they would reliably produce rubbish at every step. Every single
> conclusion of the system would be worthless because both it and its
> negation would be valid conclusions -- a chess playing program that told
> one to move a pawn forward and to not move it, simultaneously, would not
> be worthy of even being called a chess playing program. The paradox red
> flags are not saying 'proceed with caution and try to avoid pitfalls'.
> They are saying 'your entire system is crashing down to rubble around
> you'.
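
(The "both it and its negation would be valid conclusions" point is
classical ex falso quodlibet, and it's easy to demonstrate mechanically.
Here's a minimal sketch of propositional resolution in Python -- my own
toy illustration, not cwm's actual algorithm -- showing that once a
knowledge base contains both P and not-P, *every* query is entailed:

```python
# Toy propositional resolution. A clause is a frozenset of literals;
# a literal is a (name, polarity) pair.

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            out.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return out

def entails(kb, query):
    """Resolution refutation: KB |= query iff KB + {not-query} is unsatisfiable."""
    clauses = set(kb) | {frozenset({(query, False)})}
    while True:
        new = set()
        cl = list(clauses)
        for i in range(len(cl)):
            for j in range(i + 1, len(cl)):
                for r in resolve(cl[i], cl[j]):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # no progress: query not entailed
            return False
        clauses |= new

kb = [frozenset({("MovePawn", True)}),    # "move the pawn"
      frozenset({("MovePawn", False)})]   # "do not move the pawn"
print(entails(kb, "CastleKingside"))      # prints True: anything follows
print(entails(kb, "ResignNow"))           # prints True
```

In a classical logic, the contradictory pair alone yields the empty
clause, so the refutation succeeds for any query whatsoever.)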

My argument was directed at Pat because he once justified model
theories to me as allowing fruitful arguments about the behavior of
reasoners.  In that vein, the model theory is just a part of a program
specification....

I'm familiar with the Frege/Russell story.  The problem with your
analogy is that Frege was trying to build a system of much more
fragile stuff than a computer programming language (which seems hard
to believe!).  As you point out, in his system (as in many logics), if
you can construct a paradox in the system, the system is formally
useless.  I wouldn't argue that.

I am arguing that in the real world, where no communication can ever
be known to be perfectly correct, expressions in an inconsistent logic 
may in fact communicate useful information to a receiver.   The
receiver, as I argued above, cannot even be known to be using the same
language; at best you can demonstrate that it seems to be using the
same language.  

N3 with swap/log as implemented by cwm can express paradoxes ("this
log:notIncludes this", and many others), and yet people use it every
day (or at least 3 times a week :-) to get real work done.  Maybe this
rubble around us is all Hollywood foam rocks, which don't really hurt?

I think the proper conclusion is that cwm doesn't really understand
swap/log; it understands a very similar language which is a lot harder
to specify (the source code is fairly long).  To most experiments, the
two languages are the same, but...  Well, I just told cwm to deduce
everything it could from several paradoxical constructs and in each
case it deduced either nothing new or it gave an error.  That's what
you'd expect from a computer program written in a certain pragmatic
style.  I guess that means cwm understands some consistent language
other than swap/log.

If cwm were to generate the universe from a paradox, that still
wouldn't leave us crushed under the rubble.  We could still get our
work done.   We'd just have to be careful around that paradox or
anyone who could have stuck a paradox into the system.

In other words, in the world I live in, your crashing rubble is no
more than an annoyance.  (I *really* don't mean any disrespect by
that; the imagined worlds where paradoxes cause problems are
fascinating and very useful, but in other very important worlds
paradoxes remain curiosities.)


     -- sandro

Received on Wednesday, 1 May 2002 12:56:25 UTC