Re: The future of KR in retrospective

Thank you for that article, Paola, it was very nice reading over first coffee.

Brachman notes, in the "Nagging Doubts" section, that connectionist models 
might "eventually take over the role now being played by traditional KR 
systems... but the jury is still out". Mainly because of technical advances
in multiplying matrices together quickly in the last few years, we now have
neural networks doing things that were traditionally part of KR. (what is
ontology building if not a giant classification exercise?) In the 1980s and
1990s it was simply impractical to do neural networks on any useful scale,
but now we can.

The explicitness of symbolic, logic-based representations has always been, to
me, their most attractive feature. It gives those kinds of systems a satisfying
explanatory character. Ask "what?" or "why?" and you can point to some
statement, or step through a proof, and come away with a feeling of
"understanding".

Explicit representation of knowledge is almost entirely absent in connectionist
systems. But they work, and they echo the underlying biology. A child doesn't
learn by being fed a bunch of facts and rules; a child learns by example and
by a trial-and-error feedback loop. First comes filtering the relevant from
the irrelevant. Any kind of explicit reasoning comes later and never seems to
stand on its own (this might be why mathematicians continue to speak of
intuition both for finding and for understanding formal proofs).

What is the relationship between what seems to be an underlying connectionist
architecture and the explicit reasoning that seems to float on top of it?
This is a burning question as more and more real-world decisions are made
with the help of artificial neural networks, but without the kind of
explanation or insight that logic is good at providing.

Brachman does mention "hybrid reasoning systems" but the conception seems
more modular, consisting of specialised, domain-specific subsystems. Within
the set of systems that are logic-based, that seems very sensible. Maybe
the whole RDF programme is one such subsystem, and problems arise when it
tries to be more general than it is. But the relationship between connectionist
systems and logic-based systems is not this kind of division of labour. They
seem fundamentally different.

Here is a wild conjecture. The relationship between connectionist models and
logic models is roughly analogous to the relationship between discrete 
and continuous formulations of problems in mathematics and physics. If that
is the case, then the relationship should be describable by a limit of some
kind. In situations where logic falls down, where the reasons seem vague
and ill-defined, the limit argument does not hold: we are not in continuous
territory. When neural networks seem to lack explanatory power, it is because
we are looking too closely at the details and missing the approximate but
clear and sharp picture that logic gives.

Off to work now.

Best wishes,

William Waites | wwaites@inf.ed.ac.uk
Laboratory for Foundations of Computer Science
School of Informatics, University of Edinburgh

-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

Received on Thursday, 27 June 2019 09:11:04 UTC