Just say not

Pat Hayes,

For me your "Catching the Dreams" essay [1] tells the sordid story of why
the Semantic Web seems to have zigged into complexity when some of us thought
it would just zag.  Thanks for writing it ... I'm posting this to
RDF-Interest, logic, comments, and semanticweb in the hopes that more people
will get a chance to read your essay in its entirety.

[1] http://www.aifb.uni-karlsruhe.de/~sst/is/WebOntologyLanguage/hayes.htm

But I want to ask some particular questions inspired by your passages ...
  Considered as content languages, description logics
  are like logics with safety guards all over them. They
  come covered with warnings and restrictions: you
  cannot say things of this form, you cannot write rules
  like that, you cannot use arbitrary disjunctions, you
  cannot use negation freely, you cannot speak of
  classes of literals, and so on. A beginning user might
  ask, why all the restrictions? It's not as if any of these
  things are mysterious or meaningless or paradoxical,
  so why can't I be allowed to write them down on my
  web page as markup?  The answer is quite revealing:
  if we let you do that, you could write things that our
  reasoning engines might be unable to handle. As long
  as you obey our rules, we can guarantee that the
  inference engines will be able to generate the answers
  within some predetermined bounds. That is what DLs
  are for, to ensure that large-scale industrial ontologies
  can be input to inference machinery and it still be
  possible to provide a guarantee that answers will be
  found, that inferential search spaces will not explode,
  and in general that things will go well. Providing the
  guarantee is part of the game: DL's typically can be
  rigorously proven to be at least decideable, and
  preferably to be in some tractable complexity class.
... and then ...
  I think that what the semantic web needs is two
  rather different things, put together in a new way.
  It needs a content language whose sole function
  is to express, transmit and store propositions in a
  form that permits easy use by engines of one kind
  and another. There is no need to place restrictions
  or guards on this language, and it should be
  compact, easy to use, expressive and syntactically
  simple. The W3C basic standard is RDF, which
  is a good start, but nowhere near expressive
  enough. The best starting-point for such a content
  language is something like a simple version of KIF,

So what (if anything) would we sacrifice if the semantic web adopted a
language that included the basic sentential operators (and, or, not, =>,
<=>) as primitives?  Specifically, what inference algorithm would become
intractable?  Could that intractability be eliminated with a simple
assumption:  select only those facts and axioms that apply to a narrow
context prior to starting any inference process?
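To make the tractability question concrete, here is a minimal sketch (all
names mine, not from any standard) of brute-force model checking over the
sentential operators.  With unrestricted and/or/not, consistency checking is
essentially SAT, exponential in the number of atomic propositions; but if a
narrow context pre-selects only a handful of relevant facts first, the same
brute force stays cheap:

```python
from itertools import product

# A formula is a nested tuple: ('atom', name), ('not', f),
# ('and', f, g), ('or', f, g), ('=>', f, g), ('<=>', f, g).

def atoms(f):
    """Collect the atom names appearing in a formula."""
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(sub) for sub in f[1:]))

def holds(f, model):
    """Evaluate a formula under a truth assignment (dict name -> bool)."""
    op = f[0]
    if op == 'atom':
        return model[f[1]]
    if op == 'not':
        return not holds(f[1], model)
    if op == 'and':
        return holds(f[1], model) and holds(f[2], model)
    if op == 'or':
        return holds(f[1], model) or holds(f[2], model)
    if op == '=>':
        return (not holds(f[1], model)) or holds(f[2], model)
    if op == '<=>':
        return holds(f[1], model) == holds(f[2], model)
    raise ValueError(op)

def consistent(facts):
    """Brute-force satisfiability: 2^n candidate models for n atoms."""
    names = sorted(set().union(*(atoms(f) for f in facts)))
    for values in product([True, False], repeat=len(names)):
        model = dict(zip(names, values))
        if all(holds(f, model) for f in facts):
            return True
    return False

# Within a narrow context, only a few atoms are in play:
title = ('atom', 'title_ABC')
print(consistent([title, ('not', title)]))  # contradiction -> False
print(consistent([title]))                  # -> True
```

The point of the sketch: the cost is governed by how many atoms survive the
context selection, not by how expressive the operators are.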

Could we use the test case example as per:

[2]  http://lists.w3.org/Archives/Public/www-webont-wg/2002Mar/0127.html

Somebody says:
    :page1 dc:title "ABC"
Then I want to contradict their assertion:
    :page1 (is not dc:title) "ABC"

It seems to me that DanC's way of saying that in [2] using DAML is
needlessly complicated.

Why can't I just say:
   :not_title :negates dc:title
and then
   :page1 :not_title "ABC"
where I have imported a rule for negation ... perhaps coded something like
the one in my mentograph [3]:
(<=> (not (p A B)) (and (not_p A B) (:negates not_p p)))
[3] http://robustai.net/mentography/notArrow.gif
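The :negates convention above can be sketched procedurally; the store layout
and property names here are hypothetical, just to show that detecting the
clash is trivial once the negation rule is imported:

```python
# A triple store as a set of (subject, property, object) tuples, plus a
# table recording which property negates which (per :not_title :negates
# dc:title -- names are illustrative, not from any standard vocabulary).
negates = {'not_title': 'dc:title'}

def contradicts(t1, t2):
    """True when one triple negates the other per the negates table."""
    s1, p1, o1 = t1
    s2, p2, o2 = t2
    if s1 != s2 or o1 != o2:
        return False
    return negates.get(p1) == p2 or negates.get(p2) == p1

graph = {
    ('page1', 'dc:title', 'ABC'),   # somebody's assertion
    ('page1', 'not_title', 'ABC'),  # my contradiction of it
}

clashes = [(a, b) for a in graph for b in graph if contradicts(a, b)]
print(bool(clashes))  # True: the two assertions cannot co-exist
```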

Now obviously both of those assertions cannot consistently exist in the same
context (sorry for using the 'C' word).  So, hopefully just as obviously, we
need to introduce the 'C' word in the next version of a semantic web
language.  Hmmmm ... how come I don't see the big C mentioned in [4]?

[4]  http://www.w3.org/TR/webont-req/
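One way to picture what the 'C' word buys us (again, names hypothetical): tag
each triple with a context, making quads, and only enforce consistency within
a single context.  The two assertions can then co-exist until somebody merges
their contexts:

```python
# Quads: (context, subject, property, object).  Consistency is checked
# per context, so contradictory assertions may live in different contexts.
negates = {'not_title': 'dc:title'}

quads = {
    ('ctx_alice', 'page1', 'dc:title', 'ABC'),
    ('ctx_seth',  'page1', 'not_title', 'ABC'),
}

def consistent_context(ctx, quads):
    """A context is inconsistent if it holds a triple and its negation."""
    triples = [(s, p, o) for (c, s, p, o) in quads if c == ctx]
    for (s1, p1, o1) in triples:
        for (s2, p2, o2) in triples:
            if s1 == s2 and o1 == o2 and negates.get(p1) == p2:
                return False
    return True

print(consistent_context('ctx_alice', quads))  # True: no clash inside it
merged = {('ctx_all', s, p, o) for (c, s, p, o) in quads}
print(consistent_context('ctx_all', merged))   # False once merged
```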

What would be the real problems (if any) of this simplicity?

Seth Russell

Received on Friday, 8 March 2002 16:49:52 UTC