Re: definitions, problem spaces, methods

Hi Mike,

> On 7 Nov 2022, at 17:39, Mike Bergman <mike@mkbergman.com> wrote:
> When we do AI using something like GPT-3 we are making an active choice of how we will represent our knowledge to the computer. For GPT-3 and all massive data statistical models, that choice limits us to indexes.
> 
That is not true: artificial neural networks are equivalent to Turing machines in the sense that they can carry out whatever computations we design them to perform, including storing, recalling and transforming information in a vast variety of ways.
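To make that concrete, here is a minimal sketch (my own illustration, not drawn from GPT-3 or any particular system) of a single-layer linear associative memory in NumPy. Key/value pairs are stored as a sum of outer products, and recall is a single forward pass: storage and retrieval live in the weights themselves, with no index structure in sight.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 64  # vector dimension, an arbitrary choice for this sketch

    # Three key/value pairs; scaling the keys by 1/sqrt(d) makes them
    # roughly unit-norm and roughly orthogonal, which is what lets
    # recall separate one stored pattern from another.
    keys = rng.standard_normal((3, d)) / np.sqrt(d)
    values = rng.standard_normal((3, d))

    # "Storing": superimpose each pair as an outer product in one weight matrix.
    W = sum(np.outer(v, k) for k, v in zip(keys, values))

    # "Recalling": one matrix multiply, i.e. a single forward pass.
    recalled = W @ keys[1]
    cos = recalled @ values[1] / (np.linalg.norm(recalled) * np.linalg.norm(values[1]))
    print(f"cosine similarity with the stored value: {cos:.2f}")  # close to 1.0

Attention layers in transformers perform a soft, content-addressable version of much the same lookup, which is one reason the "just an index" characterisation undersells what these models can compute.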

A more interesting question is whether vector-space representations are better suited than conventional symbolic logic to dealing with imprecise and imperfect knowledge.  This is very likely to be the case for systems designed to devise their own knowledge representations as they learn from training materials.  Emergent knowledge will often be far from crisp until it matures, as half-baked ideas are cast aside in favour of ones that fare better against the training tasks.
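To see the contrast, consider a toy sketch (the vectors below are invented purely for illustration, not taken from any trained model): a symbolic match between two concepts is all-or-nothing, whereas similarity in a vector space is graded, leaving room for the kind of half-formed, maturing knowledge described above.

    import numpy as np

    # Invented toy embeddings, purely for illustration.
    emb = {
        "cat": np.array([0.9, 0.8, 0.1]),
        "dog": np.array([0.8, 0.9, 0.2]),
        "car": np.array([0.1, 0.2, 0.9]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Symbolic identity: cat == dog is simply false, end of story.
    # Vector similarity: cat and dog are close, cat and car are not.
    print(f"cat vs dog: {cosine(emb['cat'], emb['dog']):.2f}")  # high, ~0.99
    print(f"cat vs car: {cosine(emb['cat'], emb['car']):.2f}")  # low, ~0.30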

It has long been recognised that intuition often precedes analytical progress in mathematics; see e.g. Henri Poincaré's "Intuition and Logic in Mathematics" from 1905.  It makes sense to work on techniques to mimic human intuition and System 1 thinking (in Daniel Kahneman's terms) as complementary to deliberative, analytical System 2 thinking.  You could think of logic as the visible tip of a very large iceberg, most of whose bulk lies below the surface of the sea.


Dave Raggett <dsr@w3.org>

Received on Tuesday, 8 November 2022 09:12:31 UTC