
Re: Just say not

From: Pat Hayes <phayes@ai.uwf.edu>
Date: Mon, 1 Apr 2002 10:39:46 -0800
Message-Id: <p05101509b8ce54d677a7@[65.217.30.48]>
To: "Seth Russell" <seth@robustai.net>
Cc: "RDF-LOGIC" <www-rdf-logic@w3.org>
>Pat Hayes,
>
>For me your "Catching the Dreams" essay [1] tells us the sordid story of why
>the Semantic Web seems to have zigged into complexity when some of us thought
>it would just zag.  Thanks for writing it ... I'm posting this to
>RDF-Interest, logic, comments, and semanticweb in the hopes that more people
>will get a chance to read your essay in its entirety.
>
>[1] http://www.aifb.uni-karlsruhe.de/~sst/is/WebOntologyLanguage/hayes.htm
>
>But I want to ask some particular questions inspired by your passages ...
>[[
>   Considered as content languages, description logics
>   are like logics with safety guards all over them. They
>   come covered with warnings and restrictions: you
>   cannot say things of this form, you cannot write rules
>   like that, you cannot use arbitrary disjunctions, you
>   cannot use negation freely, you cannot speak of
>   classes of literals, and so on. A beginning user might
>   ask, why all the restrictions? It's not as if any of these
>   things are mysterious or meaningless or paradoxical,
>   so why can't I be allowed to write them down on my
>   web page as markup?  The answer is quite revealing:
>   if we let you do that, you could write things that our
>   reasoning engines might be unable to handle. As long
>   as you obey our rules, we can guarantee that the
>   inference engines will be able to generate the answers
>   within some predetermined bounds. That is what DLs
>   are for, to ensure that large-scale industrial ontologies
>   can be input to inference machinery and it still be
>   possible to provide a guarantee that answers will be
>   found, that inferential search spaces will not explode,
>   and in general that things will go well. Providing the
>   guarantee is part of the game: DL's typically can be
>   rigorously proven to be at least decidable, and
>   preferably to be in some tractable complexity class.
>]]
>... and then ...
>[[
>   I think that what the semantic web needs is two
>   rather different things, put together in a new way.
>   It needs a content language whose sole function
>   is to express, transmit and store propositions in a
>   form that permits easy use by engines of one kind
>   and another. There is no need to place restrictions
>   or guards on this language, and it should be
>   compact, easy to use, expressive and syntactically
>   simple. The W3C basic standard is RDF, which
>   is a good start, but nowhere near expressive
>   enough. The best starting-point for such a content
>   language is something like a simple version of KIF,
>   ..
>]]
>
>So what (if anything) would we sacrifice if the semantic web adopted a
>language that included the basic sentential operators (and, or, not, =>,
><=>) as primitives?  Specifically, what inference algorithms would become
>intractable?

Oh, a whole lot of them. All the description-logic inference systems 
based on connected tableaux would immediately break, for example.

Actually it depends on whether those connectives can be used freely. 
If you allow negation *outside* a quantifier, for example, then you 
effectively have mixed quantifiers and then you need to invoke 
something like skolemization in your inference system, which 
immediately puts you outside any polynomial complexity class and 
probably outside decidability.
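To see what skolemization involves, here is a toy sketch (not KIF machinery; the representation and every name in it are made up for illustration). Pushing a negation through a universal quantifier leaves you with an existential, and skolemization then replaces each existential variable with a term built from the universals governing it:

```python
# Toy skolemizer for formulas in prenex form. The representation
# (a list of (quantifier, variable) pairs plus a matrix of nested
# tuples) and all names (skolemize, sk0, ...) are illustrative only.
import itertools

def skolemize(prefix, matrix):
    """Replace each existentially quantified variable with a Skolem
    term over the universal variables that precede it."""
    counter = itertools.count()
    universals, subst = [], {}
    for quant, var in prefix:
        if quant == 'forall':
            universals.append(var)
        else:  # 'exists': its witness may depend on the universals so far
            subst[var] = ('sk%d' % next(counter), tuple(universals))
    def apply(term):
        if isinstance(term, str):
            return subst.get(term, term)
        return tuple(apply(t) for t in term)
    return universals, apply(matrix)

# not(forall x. p(x)) becomes exists x. not(p(x)), which skolemizes
# to not(p(sk0)) with sk0 a fresh constant:
print(skolemize([('exists', 'x')], ('not', ('p', 'x'))))
# forall x. exists y. loves(x, y) becomes loves(x, sk0(x)):
print(skolemize([('forall', 'x'), ('exists', 'y')], ('loves', 'x', 'y')))
```

The point is not this little function but what it forces on the prover: Skolem terms nest, so the term universe becomes infinite and the search space with it.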

>  Could that intractability be eliminated with a simple
>assumption:  select only those facts and axioms that apply to a narrow
>context prior to starting any inference process?

Well, yes and no. Theoretically no, but in practice, most of the 
time, I would guess yes.

Here's an analogy I use when teaching. Think of complexity as height, 
and think of the inference problems (i.e. pairs of <set of assumptions, 
query to be proved>) as points on a landscape. A complexity class is 
like a territory, and it is measured by its highest point.  DLs are 
like territories that have been surveyed and are *guaranteed* to have 
no high peaks in them.  But some 'high' territories are like large 
plains with high mountain ranges at one edge (or sometimes in the 
middle); they have large numbers of very tractable problems, with a 
few very hard problems mixed in with them. So for example for purely 
propositional complexity, you can take conjunctive normal form and 
just count the number of propositions and the number of clauses, and 
compute the ratio of propositions to clauses. If it is small, the set 
is almost certainly unsatisfiable, and you can almost certainly 
detect this quite quickly. If it is large, it is almost certainly 
satisfiable, and ditto. There is a middle area where it's hard to 
tell, and you might spend an awful lot of time trying to prove it one 
way or the other. But if the SW hardly ever goes there, then why 
bother putting in special constraints to protect it from the 
consequences if it did?
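You can watch this pattern experimentally with a few lines of code. A hedged sketch (the instance sizes and the near-threshold ratio are illustrative; note it states the ratio as clauses per variable, the reciprocal of the one above, so here *large* means over-constrained):

```python
# Brute-force random 3-SAT at several clause/variable ratios, to
# illustrate the easy-hard-easy pattern described above. All numbers
# here are illustrative; real phase-transition studies use far larger
# instances and real SAT solvers.
import itertools
import random

def random_3cnf(n_vars, n_clauses, rng):
    """A clause is three distinct variables, each with a polarity."""
    return [tuple((v, rng.choice([True, False]))
                  for v in rng.sample(range(n_vars), 3))
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Exhaustive check: fine for tiny n_vars, exponential in general."""
    return any(
        all(any(bits[v] == pol for v, pol in clause) for clause in clauses)
        for bits in itertools.product([True, False], repeat=n_vars))

rng = random.Random(0)
n = 10
for ratio in (2.0, 4.3, 8.0):  # under-, near-, over-constrained
    sat = sum(satisfiable(n, random_3cnf(n, int(ratio * n), rng))
              for _ in range(10))
    print('clauses/vars = %.1f: %d/10 satisfiable' % (ratio, sat))
```

At the low ratio nearly everything is satisfiable and easily seen to be; at the high ratio nearly everything is unsatisfiable; the hard, slow cases cluster around the threshold in the middle.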

>
>Could we use the test case example as per:
>
>[2]  http://lists.w3.org/Archives/Public/www-webont-wg/2002Mar/0127.html
>
>Somebody says:
>     :page1 dc:title "ABC"
>Then I want to contradict their assertion:
>     :page1 (is not dc:title) "ABC"
>
>It seems to me that DanC's way of saying that in [2] using DAML is
>needlessly complicated.

I agree. It's often very efficient to just say not.

>
>Why can't I just say:
>    :not_title :negates dc:title
>and then
>    :page1 :not_title "ABC"
>where I have imported a rule for negation... perhaps coded something like in
>my mentograph

Well, I'd suggest just using negation as a primitive in the syntax. 
It's not as if it were anything mysterious.

>  [3]:
>(<=> (not (p A B) ) (and  (not_p A B) (:negates not_p p)))
>[3] http://robustai.net/mentography/notArrow.gif
>
>Now obviously both of those assertions cannot consistently exist in the same
>context (sorry for using the 'C' word).

And you should be. No need to use it: this sense of 'context' is just 
'set of assertions' (RDF graph, ontology, axiom set, Kbase, ...), so 
why not use a precise word instead of an imprecise one? Ironically, 
the word 'context' is probably the most contextually-sensitive word 
in our technical vocabulary, right up there with 'system'. I once 
started to count the meanings it has, and gave up somewhere in the 
30s.
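Incidentally, spotting the clash between the two assertions in the encoding above takes nothing more than a lookup over the assertion set. A minimal sketch (the ':negates' vocabulary is from the question; the function name and the triples-as-tuples representation are made up for illustration):

```python
# Find clashes of the form (s, p, o) vs (s, not_p, o) where
# (not_p, ':negates', p) is also asserted. Triples are plain string
# 3-tuples; the vocabulary follows the encoding proposed above,
# everything else is illustrative.

def contradictions(triples):
    """Return pairs of directly contradictory triples."""
    negated = {s: o for s, p, o in triples if p == ':negates'}  # not_p -> p
    asserted = set(triples)
    return [((s, np, o), (s, negated[np], o))
            for s, np, o in triples
            if np in negated and (s, negated[np], o) in asserted]

kb = [
    (':not_title', ':negates', 'dc:title'),
    (':page1', 'dc:title', 'ABC'),
    (':page1', ':not_title', 'ABC'),
]
print(contradictions(kb))
# -> [((':page1', ':not_title', 'ABC'), (':page1', 'dc:title', 'ABC'))]
```

Which is the point: with negation available, 'inconsistent set of assertions' is a perfectly sharp notion, no special context machinery required.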

>So, hopefully just as obviously, we
>need to introduce the 'C' word in the next version of a semantic web
>language.

Not nearly so obvious to me. Negation is obviously immediately 
useful, context far less obviously so.

>Hmmmm ... how come I don't see the big c mentioned in [4] ?
>
>[4]  http://www.w3.org/TR/webont-req/
>
>What would be the real problems (if any) of this simplicity?

Beats me. Maybe people could write simpler (if less efficient) 
reasoners more easily?

Pat


-- 
---------------------------------------------------------------------
IHMC					(850)434 8903   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola,  FL 32501			(850)202 4440   fax
phayes@ai.uwf.edu 
http://www.coginst.uwf.edu/~phayes
Received on Monday, 1 April 2002 13:39:29 GMT
