Shirky article

Bijan Parsia pointed me in the direction of this anti-Semantic-Web
article by Clay Shirky:

    http://www.shirky.com/writings/semantic_syllogism.html

It's worth reading.  (Full disclosure: He praises an old paper of
mine; this is not the only reason to read it.)

The article argues that deduction (which he calls "syllogism" for no
reason I can see) is hopelessly inadequate for realistic
applications.  I half-agree with him.  I think he underestimates the
need for deductive rules in tasks such as datatype transformations;
but it's equally true that many of the people involved in the
Semantic Web are overoptimistic about how much mileage can be gotten
from deduction in supporting things like reasoning about contractual
obligations.
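
To make the datatype-transformation point concrete, here is a minimal
Python sketch of the kind of deductive rule I have in mind; the
predicate names and the centimeters-to-meters conversion are invented
for illustration, not drawn from any SW vocabulary:

    # One deductive rule, applied by forward chaining:
    #   height_cm(x, v)  =>  height_m(x, v / 100)
    # The inference is monotonic: adding facts can only add conclusions.

    facts = {("door17", "height_cm", 210.0)}

    def derive_heights_m(facts):
        """Derive a height_m fact from every height_cm fact."""
        return {(s, "height_m", v / 100.0)
                for (s, p, v) in facts if p == "height_cm"}

    print(sorted(facts | derive_heights_m(facts)))
    # [('door17', 'height_cm', 210.0), ('door17', 'height_m', 2.1)]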

The perennial debate (recently revived on www-rdf-rules@w3.org) about
the need for negation-as-failure (NAF) illustrates the point.  Those who
deny the need for NAF believe that somehow deductive methods will
arise that can draw conclusions of equivalent use monotonically.
(Yes, I know that one can view NAF as a simple abbreviation convention
for inferences that are really deductive, but in practice nonmonotonic
inference is a device for _escaping_ deduction.)
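
For anyone who hasn't followed that debate, here is a toy Python
sketch of what NAF buys and what it costs; the birds-and-penguins
rule is the standard textbook example, not anything taken from the
www-rdf-rules thread:

    kb = {("bird", "tweety")}

    def flies(kb, x):
        # flies(x) :- bird(x), not penguin(x).
        # The "not" is negation-as-failure: penguin(x) counts as false
        # merely because it cannot be found in the knowledge base.
        return ("bird", x) in kb and ("penguin", x) not in kb

    print(flies(kb, "tweety"))     # True: no penguin fact, so assume none
    kb.add(("penguin", "tweety"))  # learn one new fact ...
    print(flies(kb, "tweety"))     # False: the old conclusion is withdrawn

A monotonic deductive system can never lose a conclusion by gaining a
fact; that is exactly the guarantee NAF trades away.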

Consider the planning algorithms that are an important application of
OWL-S.  Are they deductive?  Some are, some aren't.  For others it's
hard to say.  The fact is that computation is a more important
category for the Semantic Web than deduction -- just as it is
everywhere else.  It is usually much easier to think about algorithms
as producing outputs than as producing conclusions.  These outputs
often achieve the status of "conclusions" in a pragmatic
postprocessing phase.  E.g., a planner's output is taken as a recipe
for guiding
behavior.  The agent using the planner concludes that this is the best
course of action for it to take.  It may be, but for a self-justifying
reason: the only planner the agent has couldn't come up with something
it thought was better.  Another example: TurboTax concludes that you
owe a certain amount of tax.  Is that a deductive conclusion?
Possibly.  But it doesn't produce a proof, and it would be rather
difficult to produce one.  Another: A vision program might conclude
that you are in the room.  This is clearly not a deductive conclusion.
Etc., etc.
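
To put the output-versus-conclusion point in code: here is a toy
planner in Python (the domain and action names are invented).  It
returns the first action sequence its search happens to find; it is
the agent, not the algorithm, that promotes that output to "the thing
to do":

    from collections import deque

    actions = {  # toy domain: state -> {action: resulting state}
        "at_home":     {"walk": "at_bus_stop", "drive": "at_work"},
        "at_bus_stop": {"ride": "at_work"},
    }

    def plan(start, goal):
        # Breadth-first search: returns the first plan found, with no
        # claim (let alone proof) that it is the best one available.
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, steps = queue.popleft()
            if state == goal:
                return steps
            for action, nxt in actions.get(state, {}).items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [action]))
        return None

    print(plan("at_home", "at_work"))   # ['drive'], adopted as "the" plan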

It's annoying that Shirky indulges in the usual practice of blaming AI
for every attempt by someone to tackle a very hard problem.  The
image, I suppose, is of AI gnomes huddled in Zurich plotting the next
attempt to -- what? inflict hype on the world?  AI tantalizes people
all by itself; no gnomes are required.  Researchers in the field try
as hard as they can to work on narrow problems, with technical
definitions.  Reading papers by AI people can be a pretty boring
experience.  Nonetheless, journalists, military funding agencies,
and, more recently, the World Wide Web Consortium are routinely
gripped by
visions of what computers should be able to do with just a tiny
advance beyond today's technology, and off we go again.  Perhaps
Mr. Shirky has a proposal for stopping such visions from sweeping
through the population.

-- 
                                   -- Drew McDermott
                                      Yale Computer Science Department
