Re: class/subclass and the dominance of object-oriented programming

This reminds me of work in the 1970s on spreading activation in semantic memory and the time people take to answer different kinds of questions. See for example:

Collins, A. M., & Loftus, E. F. (1975). “A Spreading-Activation Theory of Semantic Processing”. Psychological Review, 82(6), 407–428.
https://pdfs.semanticscholar.org/6137/4d14a581b03af7e4fe0342a722ea94911490.pdf
The basic idea is that concepts are arranged in a taxonomy and it takes time to follow paths across the graph of related concepts. For instance, we know that a mallard is a kind of duck and a duck is a kind of bird and a bird is a kind of animal. Likewise we know that a flying fox is a kind of bat and a bat is a kind of mammal and a mammal is a kind of animal.
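
As a toy sketch (not a faithful model of spreading activation; the is-a table below simply encodes the examples above), answering “is X a Y?” can be cast as following is-a links upwards and counting them, with response time assumed proportional to the number of links traversed:

    # Toy taxonomy from the examples above.
    IS_A = {
        "mallard": "duck", "duck": "bird", "bird": "animal",
        "flying fox": "bat", "bat": "mammal", "mammal": "animal",
    }

    def chain_to_root(concept):
        """Follow is-a links upwards, collecting the chain of superclasses."""
        chain = [concept]
        while chain[-1] in IS_A:
            chain.append(IS_A[chain[-1]])
        return chain

    def links_to_verify(concept, category):
        """Number of links needed to verify 'concept is a category', else None."""
        chain = chain_to_root(concept)
        return chain.index(category) if category in chain else None

    print(links_to_verify("mallard", "bird"))      # 2: mallard -> duck -> bird
    print(links_to_verify("flying fox", "bird"))   # None: the chain never reaches "bird"

On this reading, “is a flying fox a bird?” cannot be answered by path following alone, which is where disjointness comes in.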

How long do people take to answer questions such as “is a mallard a bird?”, “is a duck an eagle?”, or “is a flying fox a bird?” Such experiments found that people have an idea of mutually exclusive sets, e.g. that birds are disjoint from mammals. OWL allows you to express this with owl:disjointWith (although not in OWL Lite), see:

https://www.w3.org/TR/owl-ref/#disjointWith-def
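
For concreteness, here is a small sketch using rdflib that states such a disjointness axiom in Turtle and then checks for it (the ex: class IRIs are made up for the example):

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/")  # made-up namespace for the sketch

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        @prefix owl: <http://www.w3.org/2002/07/owl#> .
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

        ex:Bird a owl:Class .
        ex:Mammal a owl:Class ;
            owl:disjointWith ex:Bird .

        ex:Duck rdfs:subClassOf ex:Bird .
        ex:Bat rdfs:subClassOf ex:Mammal .
    """, format="turtle")

    # A plain triple lookup; an OWL reasoner would additionally combine this
    # with the subclass axioms to conclude that no duck can be a bat.
    print((EX.Mammal, OWL.disjointWith, EX.Bird) in g)   # True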

So from a cognitive perspective, taxonomic reasoning is something you can design experiments around to test different theories. What I am less sure of is whether work has been done that distinguishes between conscious taxonomic reasoning and unconscious taxonomic reasoning. The former involves the sequential application of rules by the basal ganglia, whilst the latter involves spreading activation through the cortex, as a graph algorithm executed in parallel.
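
The parallel flavour could be caricatured as iterative activation propagation with per-hop decay across the whole graph at once, in contrast to the one-link-at-a-time walk sketched earlier; the decay constant and step count below are arbitrary:

    # Minimal parallel spreading activation: every node updates simultaneously
    # from a snapshot of its neighbours, with activation decaying per hop.
    EDGES = {  # undirected associations, reusing the taxonomy above
        "mallard": ["duck"], "duck": ["mallard", "bird"],
        "bird": ["duck", "animal"], "animal": ["bird", "mammal"],
        "mammal": ["animal", "bat"], "bat": ["mammal", "flying fox"],
        "flying fox": ["bat"],
    }

    def spread(source, steps=3, decay=0.5):
        activation = {node: 0.0 for node in EDGES}
        activation[source] = 1.0
        for _ in range(steps):
            # all nodes update in parallel from the previous snapshot
            activation = {
                node: max([activation[node]] +
                          [decay * activation[nb] for nb in EDGES[node]])
                for node in EDGES
            }
        return activation

    print(spread("mallard"))  # activation falls off with distance from "mallard"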

It gets even more fun when people are thinking about different modalities, e.g. the colour, taste, feel, sound, shape, size, and emotional associations of things. The assumption to be tested is whether knowledge about different modalities is stored in different cortical regions, and if so, how these modalities can be combined efficiently, with respect to a functional model involving inter-region messaging and graph algorithms that span multiple cortical regions.
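
One way to prototype that assumption in a demo would be a set of per-modality stores that are queried independently and merged on a shared concept key; the regions and features below are purely illustrative:

    # Illustrative only: each "region" holds one modality's features, keyed by
    # concept. A query fans out to every region (here, a simple loop standing in
    # for parallel inter-region messages) and the partial answers are merged.
    REGIONS = {
        "colour": {"mallard": "green head", "flying fox": "russet fur"},
        "sound":  {"mallard": "quack"},
        "size":   {"mallard": "small", "flying fox": "large for a bat"},
    }

    def recall(concept):
        """Gather whatever each modality-specific region knows about a concept."""
        return {modality: store[concept]
                for modality, store in REGIONS.items()
                if concept in store}

    print(recall("mallard"))
    # {'colour': 'green head', 'sound': 'quack', 'size': 'small'}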

I realise that this is some distance from programming languages, but on the other hand it is central to the architectural choices for Cognitive AI. I am therefore looking for ideas for scenarios around which to design demos, as a basis for testing these ideas practically.

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of Things

Received on Tuesday, 30 June 2020 13:35:43 UTC