- From: Antoine Zimmermann <antoine.zimmermann@emse.fr>
- Date: Tue, 30 Jun 2020 16:02:34 +0200
- To: public-aikr@w3.org
On 30/06/2020 at 15:35, Dave Raggett wrote:
> This reminds me of work in the 70s on spreading activation in semantic
> memory and the time people take to answer different questions. See, for
> example:
>
> “A Spreading-Activation Theory of Semantic Processing”, Collins and
> Loftus, 1975
> https://pdfs.semanticscholar.org/6137/4d14a581b03af7e4fe0342a722ea94911490.pdf
>
> The basic idea is that concepts are arranged in a taxonomy and it takes
> time to follow paths across the graph of related concepts. For instance,
> we know that a mallard is a kind of duck, a duck is a kind of bird, and
> a bird is a kind of animal. Likewise, we know that a flying fox is a
> kind of bat, a bat is a kind of mammal, and a mammal is a kind of animal.
>
> How long do people take to answer questions such as “is a mallard a
> bird?”, “is a duck an eagle?”, or “is a flying fox a bird?”? Such
> experiments found that people have an idea of mutually exclusive sets,
> e.g. birds are disjoint from mammals. RDF allows you to use
> owl:disjointWith to express this, although not in OWL Lite; see:
>
> https://www.w3.org/TR/owl-ref/#disjointWith-def

Not in OWL Lite, but fortunately in OWL DL, OWL 2 DL, OWL 2 EL, OWL 2 QL,
OWL 2 RL, OWL 1/2 Full, the famous ter Horst pD*, and the not-so-famous
OWL LD. OWL Lite is surprisingly difficult to reason with, yet
surprisingly poor in its class constructs. That's why one should only
talk about the more recent fragments of OWL 2, or other variants that
make computational sense. When someone mentions OWL Lite, I can't resist
bashing it :)

--AZ

> So from a cognitive perspective, taxonomic reasoning is something for
> which you can design experiments to test different theories. What I am
> less sure of is whether work has been done that distinguishes between
> conscious taxonomic reasoning and unconscious taxonomic reasoning.
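The taxonomic question-answering idea discussed above can be sketched as a toy graph walk, with a disjointness check standing in for owl:disjointWith. This is a minimal illustrative sketch, not anything from the thread: the taxonomy, the disjointness facts, and the function names (`ancestors`, `is_a`) are all assumptions made up for the example, and the hop count is only a crude stand-in for reaction time in the spreading-activation model.

```python
# Toy taxonomic reasoning in the spirit of Collins & Loftus (1975).
# All facts below are illustrative assumptions, not from the email thread.

SUBCLASS_OF = {            # "is a kind of" links, child -> parent
    "mallard": "duck",
    "duck": "bird",
    "bird": "animal",
    "flying fox": "bat",
    "bat": "mammal",
    "mammal": "animal",
}

# Mutually exclusive sets, analogous to owl:disjointWith assertions.
DISJOINT = {frozenset({"bird", "mammal"})}

def ancestors(concept):
    """Follow 'is a kind of' links upward; map each ancestor to its hop count."""
    hops, steps = 0, {}
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        hops += 1
        steps[concept] = hops
    return steps

def is_a(x, y):
    """Answer 'is an x a y?' and report the number of links traversed,
    a crude proxy for reaction time in the spreading-activation model."""
    up = ancestors(x)
    if x == y:
        return True, 0
    if y in up:
        return True, up[y]
    # A disjointness clash between the two upward paths gives a fast "no".
    for a in list(up) + [x]:
        for b in list(ancestors(y)) + [y]:
            if frozenset({a, b}) in DISJOINT:
                return False, up.get(a, 0)
    return False, max(up.values(), default=0)

print(is_a("mallard", "bird"))     # → (True, 2): mallard → duck → bird
print(is_a("flying fox", "bird"))  # → (False, 2): mammal/bird disjointness clash
```

Note that “is a duck an eagle?” is answered negatively here only by exhausting the upward path, since no disjointness fact covers it; in the psychological experiments, such questions are exactly the ones predicted to take longer.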
> The former involves the sequential application of rules by the basal
> ganglia, whilst the latter involves spreading activation through the
> cortex as a graph algorithm that is executed in parallel.
>
> It gets even more fun when people are thinking about different
> modalities, e.g. the colour, taste, feel, sound, shape, size, and
> emotional associations of things. The assumption to be tested is
> whether knowledge about different modalities is stored in different
> cortical regions, and if so, how these different modalities can be
> combined efficiently, with respect to a functional model involving
> inter-region messaging and graph algorithms that span multiple
> cortical regions.
>
> I realise that this is some distance from programming languages, but
> on the other hand it is central to the architectural choices for
> Cognitive AI. I am therefore looking for ideas for scenarios to design
> demos for, as a basis for testing ideas practically.
>
> Dave Raggett <dsr@w3.org>
> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things

--
Antoine Zimmermann
Institut Henri Fayol
École des Mines de Saint-Étienne
158 cours Fauriel
CS 62362
42023 Saint-Étienne Cedex 2
France
Tél: +33 (0)4 77 42 66 03
Fax: +33 (0)4 77 42 66 66
http://www.emse.fr/~zimmermann/
Member of team Connected Intelligence, Laboratoire Hubert Curien
Received on Tuesday, 30 June 2020 14:02:48 UTC