RE: A Single Foundational Logic for the Semantic Web

>-----Original Message-----
>From: Sandro Hawke [mailto:sandro@w3.org]
>Sent: Monday, 29 April, 2002 21:39
>To: Pat Hayes
>Cc: www-rdf-logic@w3.org
>Subject: Re: A Single Foundational Logic for the Semantic Web 
>
>
>> >At best, certain communities can agree on some logic
>> >(eg DAML+OIL) and their software can interoperate.  In some cases,
>> >mappings can be made between logics, but it's probably wishful
>> >thinking to expect much from that.)
>> 
>> No, I think that is an exact mirror of the human condition, and 
>> inevitable in global forum. People talk different languages, but 
>> manage to get by using patchy translations.
>
>Well, no.  Actually, pretty much everybody on the web speaks the same
>IP, TCP, HTTP, and HTML.  It's amazing.
>
>If you want global coherence on the human experience, yes, of course
>you're right.  Some of us only want global coherence among our
>computerized information systems, which is perhaps a more modest goal.
>

But the point of the Semantic Web is precisely to capture the *human
experience* in formalised semantics, certainly not a modest goal, which
is why it is proving to be so difficult. Ontologies and assertions
are going to contain a tremendous amount of comprehension/cultural
assumptions, just as human languages do, and we are going to end up
relying on patchy translations. Yes, everyone on the Web speaks the same
HTML (well, not quite, but let's ignore that for the moment) and that
works because the semantics of HTML are so limited and so very fixed --
all they do is tell you how text should appear on a page. The Semantic
Web vision is to take a leap to an entirely different dimension, in
which both syntax and semantics will become variable parameters
controlled by users whenever they compose ontologies. This is simply
not comparable to HTML et al.


> >So the layering looks like this:
> >
> >    Layer 3: Any logic for which an effective inference procedure is known
> >    Layer 2: A Turing-Equivalent Logic (such as TimBL's swap/log [1])
> >    Layer 1: RDF (Pat's MT, more or less)
> >

[large amount of text snipped out here]

>
>I would also argue with your phrasing, "All the reasoning engines
>would work fine."  Yes, all the reasoning engines would conform to
>their formal specifications, but they would of course not work "fine"
>in the sense any decent programmer or paying customer would use the
>word.  Rather they would conform to a lousy incompletely-thought-out
>specification.  Lousy specifications are not new, and thinking things
>out completely is often impossible [halting problem!].  Much of the
>software development process is about debugging the specification
>itself (if you even have one) and turning it into something which can
>be implemented as a useful system.  The paradox red flags are people
>saying "you'll never be able to implement this (as a useful system)"
>which is wonderfully helpful at this early stage, if they're right.
>
>In my layering scheme, paradoxes in a layer 3 logic would lead one to
>be unable to write a correct/useful layer 2 inference procedure.  I
>imagine the actual failure modes might vary in frequency and severity,
>like many other software bugs.  Certainly if you knew the logic had a
>paradox, you'd want to steer your billions of dollars per second far
>away from it, but you might still play chess against programs using it.
>
>     -- sandro

No. Again, a false analogy with familiar computer-programming situations
is being asserted here. You make it seem as if paradoxes within a
logical system are the equivalent of bugs in a software program. And we
all know that bugs are unpleasant, can cause expensive losses, should be
vigilantly guarded against, etc etc yada-yada-yada, but we also know
that in real-life situations work-arounds can easily be found for most
of them, and they can be isolated/ignored to some extent, because a bug
in one part of a program usually does not imply that other parts/outputs
of the program are problematic. What you are saying above is that
logical paradoxes have the same status. But they do not -- this is an
entirely different kettle of fish.

To illustrate the point, consider the famous story surrounding the
Russell paradox. Legend has it that Frege laboured for years to bring
the set theory he was promulgating to the point where he felt confident
enough to publish it in a massive tome. Russell, reading through an
advance copy of the manuscript, discovered the paradox bearing his name
and promptly informed Frege of this. Now, did Frege react to Russell's
paradox by saying 'Oops, that is a nasty bug, but I will publish the
theory anyway with a bug warning in an appendix for hackers to work
around until I can come up with a patch'? No. He stopped the presses
and repudiated the entire theory. That is because a logical paradox is
not like a bug in a program -- even one logical paradox in a logical
system is sufficient to bring the entire thing to a crashing halt, and
one must toss it all away. 
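
For anyone who has not seen it, the paradox itself fits in one line of
standard set-builder notation (my own gloss here, not Frege's or
Russell's wording): let R be the set of all sets that are not members
of themselves, and then ask whether R is a member of itself:

    \[
      R \;=\; \{\, x \mid x \notin x \,\}
      \quad\Longrightarrow\quad
      (R \in R) \;\longleftrightarrow\; (R \notin R)
    \]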

So your statement that 'actual failure modes might vary in frequency and
severity, like many other software bugs.  Certainly if you knew the
logic had a paradox, you'd want to steer your billions of dollars per
second far away from it, but you might still play chess against programs
using it' is simply wrong. The actual failure modes would not vary --
they would reliably produce rubbish at every step. Every single
conclusion of the system would be worthless because both it and its
negation would be valid conclusions -- a chess-playing program that told
one, simultaneously, to move a pawn forward and not to move it would not
even be worthy of being called a chess-playing program. The paradox red
flags are not saying 'proceed with caution and try to avoid pitfalls'.
They are saying 'your entire system is crashing down to rubble around
you'.
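
The reason the collapse is so total is the old principle of ex
contradictione quodlibet: in classical logic a single contradiction
licenses the derivation of absolutely anything. A four-line sketch of
the standard argument (again my own gloss, nothing exotic):

    \begin{align*}
      & P        && \text{(one half of the paradox)} \\
      & \neg P   && \text{(the other half)} \\
      & P \lor Q && \text{(disjunction introduction, from $P$)} \\
      & Q        && \text{(disjunctive syllogism, from $P \lor Q$ and $\neg P$)}
    \end{align*}

Here Q can be any proposition whatsoever: 'move the pawn', 'do not move
the pawn', '2 + 2 = 5'. That is exactly why every output of such a
system is worthless.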

-- Ziv




Received on Wednesday, 1 May 2002 09:13:11 UTC