
RE: A plea for peace. was: RE: DAML+OIL (March 2001) released: a correction

From: Danny Ayers <danny@panlanka.net>
Date: Tue, 17 Apr 2001 12:25:38 +0600
To: "Aaron Swartz" <aswartz@swartzfam.com>, "pat hayes" <phayes@ai.uwf.edu>
Cc: "RDF Logic" <www-rdf-logic@w3.org>
<- Machines can communicate all they want and do a ton of stuff based on
<- that communication -- I agree, and I find it quite useful. However, I
<- have never seen a machine invent a concept or understand a concept on
<- its own. Instead, it follows a continuous process of "bootstrapping" --
<- pulling itself up from the terms with which it was programmed up to
<- terms which hadn't been invented when it was turned on.
<- Still, we must draw a distinction between the fact that it can process
<- these terms (which it undoubtedly can, as it can do all sorts of
<- processing-type things as you suggest above) and the fact that it can
<- *understand* these terms, for understanding is a difficult subject on
<- which people cannot even agree, but to which no machine has ever
<- really come close.

I think these last two paragraphs highlight important areas - though I'd
suggest a machine can come up with new stuff, based on the bootstrapping
and the material it has to work with. How 'conceptual' that is depends on
how 'conceptual' the information the machine is working on happens to be -
if there's a good mapping between the concept space and the model on the
machine, I see no reason why the machine shouldn't come up with new
concepts.

There is little to be gained talking of machine 'understanding' at this
point - I can join pieces of wood together using a hammer and nails and
make something useful (a shed would be nice) without understanding the
physics involved in bashing a nail in.
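
The kind of bootstrapping I mean can be sketched in a few lines - this is
only a toy illustration (plain Python, not any particular RDF or DAML+OIL
engine, and the facts and rules are made up for the example): the machine
starts from the terms it was programmed with and keeps deriving new
statements until nothing new appears, with no claim of 'understanding'
anywhere.

```python
# Toy forward-chaining sketch (illustrative only): derive new statements
# from programmed-in facts, iterating to a fixed point.

facts = {
    ("dog", "subClassOf", "mammal"),
    ("mammal", "subClassOf", "animal"),
    ("fido", "type", "dog"),
}

def step(facts):
    """One round of derivation: transitivity of subClassOf, and
    propagation of type along subClassOf."""
    new = set(facts)
    for (a, p1, b) in facts:
        for (c, p2, d) in facts:
            if p1 == p2 == "subClassOf" and b == c:
                new.add((a, "subClassOf", d))
            if p1 == "type" and p2 == "subClassOf" and b == c:
                new.add((a, "type", d))
    return new

# Iterate until no new statements appear -- the "bootstrapping".
while True:
    bigger = step(facts)
    if bigger == facts:
        break
    facts = bigger

# ("fido", "type", "animal") is now in facts, although the machine was
# never given that statement explicitly.
```

Whether the derived triple counts as a new 'concept' is exactly the point
under discussion - but the machine does end up holding statements it was
never given.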

<- >> I know of no language that conveys "knowledge" to machines.
<- > Read an AI textbook on 'knowledge representation' to come up to
<- > speed on this.

I presume this is a terminology issue.

<- I draw a distinction between information and knowledge that you don't
<- seem to follow. I've heard about a lot of stuff in AI, and it's all
<- fascinating ideas, but I'm still waiting to see it in practice.
<- Certainly if my computer were able to comprehend all this knowledge, I
<- think we would be in a much different place than we are now.

A hierarchy that has been useful is:

    Data - Information - Knowledge - Wisdom

The kind of metadata we are talking about with RDF would be somewhere
around the Information - Knowledge level.

<- >> If I'm missing something, please tell me what this powerful
<- >> language is. I'd love to get my hands on it.
<- > It is broadly called 'formal logic', but it comes in many flavors.
<- And formal logic was created without human language? I doubt it. How
<- was formal logic explained to you? Not using formal logic, I'd suggest.

I think this may narrow the terms of reference unnecessarily. For
instance, if I want to predict the behaviour of a complex system, I could
set up a genetic algorithm to work on joining together a whole bunch of
polynomials. Somewhere down the line I might get a system that accurately
models the behaviour of the first system. Surely what the evolved system
had would be a form of knowledge about the target system. I know it could
be argued that underneath all this there's the formal logic involved in
the sums etc., but there is no need to go there to get results.
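
To make the idea concrete, here is a minimal sketch of the sort of thing
I mean - plain Python, with a made-up cubic standing in for the "complex
system", so everything here (the target function, the population size,
the mutation rate) is an assumption for illustration, not a real setup:

```python
import random

# Toy genetic algorithm: evolve the coefficients of a cubic so that it
# mimics a hidden "target system", using only input/output samples.
random.seed(1)

def target(x):                      # the system we want to model
    return 2*x**3 - x + 5

def poly(coeffs, x):                # candidate: c0 + c1*x + c2*x^2 + c3*x^3
    return sum(c * x**i for i, c in enumerate(coeffs))

xs = [x / 2 for x in range(-6, 7)]  # sample points

def error(coeffs):                  # fitness = squared error over samples
    return sum((poly(coeffs, x) - target(x)) ** 2 for x in xs)

pop = [[random.uniform(-3, 3) for _ in range(4)] for _ in range(60)]
baseline = error(min(pop, key=error))

for _ in range(300):
    pop.sort(key=error)
    survivors = pop[:20]            # selection: keep the fittest
    pop = survivors[:]
    while len(pop) < 60:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(4)   # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(4)     # point mutation
        child[i] += random.gauss(0, 0.1)
        pop.append(child)

best = min(pop, key=error)          # far closer to the target than any
                                    # member of the random starting population
```

The evolved coefficient list "knows" the target system only in the sense
that it predicts it - no formal logic is consulted at any point above the
arithmetic.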

Things like KR and logic-based reasoning have a long, long history prior
to the web, and just because these happen to be relatively
straightforward to apply in the web/metadata space, and can appear shiny
and new, we shouldn't forget the cousins of these paradigms, such as the
genetic algorithm, cellular automata and neural nets. Ok, so they haven't
been invited to the wedding of knowledge and the web, but they might have
a lot to say at the reception.

Received on Tuesday, 17 April 2001 02:29:35 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 2 March 2016 11:10:34 UTC