
RE: What do the ontologists want

From: Wilson, MD (Michael) <M.D.Wilson@rl.ac.uk>
Date: Fri, 18 May 2001 13:41:22 +0100
Message-ID: <350DC7048372D31197F200902773DF4CFDA732@exchange11.rl.ac.uk>
To: "'Dan Brickley'" <danbri@w3.org>, Dan Connolly <connolly@w3.org>
Cc: Bill Andersen <andersen@ontologyworks.com>, pat hayes <phayes@ai.uwf.edu>, Ziv Hellman <ziv@unicorn.com>, www-rdf-logic@w3.org
In the early to mid 1990s the hypertext research community looked down on the
web as having little innovation: it merely took established techniques from
their research world and did something already understood, but on a more
global scale.

In the semantic web, W3C must do the same thing: take the KR and inference
mechanisms that are understood and accepted from AI/logic programming and
apply them on the global scale.
If RDF or the Semantic Web starts to demand innovative research developments
in KR or inferencing, then the associated research risks follow. It is
not appropriate for a W3C enterprise to take on those risks. The Semantic
Web should be treated by the KR and inference mechanism community with the
same contempt with which the hypertext research community treated the web in
the early 1990s: nothing really new, just more global and robustly defined.

There were divisions in the hypertext community about alternative approaches
which that community well understood in 1990. There are divisions in the KR
and inference mechanism community (e.g. neat vs scruffy) which that
community well understands.

W3C must respect the KR and inference mechanism community, understand those
strong divisions, and facilitate a standardisation of representation and
inference that achieves the requirements in the Semantic Web Activity
Statement. If experienced guys like Pat Hayes and Drew McDermott see the
Semantic Web as taking a course which is still of detailed research interest
(e.g. promiscuous reification without nested quantification), then W3C is
doing something wrong.

There is a further sensitivity: industry is intended to adopt the
Semantic Web products. The industrial world had no experience of distributed
hypertext in the early 1990s, but many industrial managers have memories of
the 1980s expert system bandwagon, when they lost a lot of money investing
in technologies without a clear view of the business benefits and the limits
of those technologies. If the Semantic Web looks like a repeat of that, they
will immediately ignore it. Again, W3C must be very clear about the exact
purpose of the technologies it facilitates, and about their robustness.

Prof Michael Wilson
Chair, W3C Office in the UK
Information Technology Department                tel: +44 (0)1235 44 6619
CLRC Rutherford Appleton Laboratory             fax: +44(0)1235 44 5597
Chilton, DIDCOT, Oxon, OX11 0QX, UK             

WWW: http://www.itd.clrc.ac.uk/Person/M.D.Wilson

The contents of this email are sent in confidence for the use of the
intended recipients only.  If you are not one of the intended recipients
do not take action on it or show it to anyone else, but return this
email to the sender and delete your copy of it.




-----Original Message-----
From: Dan Brickley [mailto:danbri@w3.org]
Sent: 17 May 2001 15:50
To: Dan Connolly
Cc: Bill Andersen; pat hayes; Ziv Hellman; www-rdf-logic@w3.org
Subject: Re: What do the ontologists want





On Thu, 17 May 2001, Dan Connolly wrote:
[...]
> But that sounds an awful lot like what folks were saying
> about global hypertext in 1991.

Sorry Dan, but I just don't buy into this "we showed them then, we'll show
them again" bravado, which is also evident in the recent (otherwise very
useful) Scientific American piece:

http://www.scientificamerican.com/2001/0501issue/0501berners-lee.html
[[
	[...]
	Knowledge representation, as this technology is often called, is
	currently in a state comparable to that of hypertext before the
	advent of the Web: it is clearly a good idea, and some very nice
	demonstrations exist, but it has not yet changed the world.
]]

The history of embedding machine-processable references in
electronic documents (hypertext) is short, interesting, and really
very different to the long and well documented history of knowledge
representation. Go read http://classics.mit.edu/Aristotle/categories.html
and tell me that the year 2001 is to KR as the year 1989 was to
electronic hypertext.

OK, so the WWW successfully stripped down some ideas floating around in the
hypertext community, and successfully applied them on the Internet.
We all gained a lot from this. But I see no reason whatsoever for this to
bolster our confidence that the same trick can be played with KR. It's an
entirely different kettle of fish... What next? WWW-Physics,
WWW-Chemistry, where we apply our "simplify it so it scales" methodology
to some other disciplines previously bedevilled by unnecessary complexity?
What makes WWW-Logic special?

We need to treat a complex and ancient field with the respect it deserves;
to draw an analogy between KR and the (really very handy) ability to shove
hyperlinks in structured textfiles is just crass.

Dan

--
mailto:danbri@w3.org
http://www.w3.org/People/DanBri/
Received on Friday, 18 May 2001 08:41:31 UTC
