Re: W3C Cognitive AI - Re: COG ai definition?

> are you saying chunks are neurosymbolic representations?

Yes, at least as an approximation. This addresses the question of how the brain can represent graphs as data, which is needed to represent phrase structure in natural language, semantic networks for models of concepts, properties and relationships, and the spatial relationships that allow you to navigate around your home or your town.

At first glance, it is hard to see how graph data can be represented in terms of the patterns of firing across the cortex. If you consider the firing patterns across a bundle of nerve fibres, you can think of them as a vector in an n-dimensional space, where each element of the vector corresponds to the firing rate of a particular nerve fibre. You can then encode a chunk as a vector whose projections onto orthogonal axes represent the chunk's property values. That involves using numeric values in place of strings for property names and values; an implementation could use a mapping table to map from names to numbers and vice versa.
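
To make that concrete, here is a rough Python sketch of the idea (the property names, values and mapping scheme are invented for illustration, not taken from any particular implementation): a table assigns each symbol a numeric code, and the codes are written into the components of a vector, one axis per property.

    # Sketch: encode a chunk as a vector, one axis per property, using a
    # mapping table from symbolic names/values to numeric codes.
    lexicon = {}                              # symbol -> numeric code

    def code(symbol):
        if symbol not in lexicon:
            lexicon[symbol] = len(lexicon) + 1
        return lexicon[symbol]

    properties = ["isa", "colour", "size"]    # fixed axes for this sketch

    def encode(chunk):
        return [float(code(chunk.get(p, "nil"))) for p in properties]

    def decode(vector):
        inverse = {v: k for k, v in lexicon.items()}
        return {p: inverse.get(round(x)) for p, x in zip(properties, vector)}

    v = encode({"isa": "dog", "colour": "brown", "size": "large"})
    print(v, decode(v))

In practice such a vector would be high dimensional and noisy, which is the challenge taken up next.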

From a mathematical perspective, the challenge is how to make that work effectively in the presence of noise. An effective solution is to use circular convolution. The details are given by Chris Eliasmith in:

 https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0149928

His TED talk gives an overview of his approach:

 https://www.youtube.com/watch?v=g2HHJfovb5E
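
To give a feel for the mathematics, here is a small numpy sketch of circular convolution as used in holographic reduced representations (this is just the basic binding operation, not Eliasmith's spiking-neuron implementation): binding a role vector with a filler vector yields a vector of the same dimension, and circular correlation approximately recovers the filler despite the noise.

    import numpy as np

    def cconv(a, b):
        # circular convolution: binds two vectors into one of the same length
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def ccorr(a, b):
        # circular correlation: approximate inverse of the binding
        return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

    n = 512
    rng = np.random.default_rng(1)
    role   = rng.normal(0, 1 / np.sqrt(n), n)   # e.g. the property "colour"
    filler = rng.normal(0, 1 / np.sqrt(n), n)   # e.g. the value "brown"

    bound = cconv(role, filler)                 # composite vector
    recovered = ccorr(role, bound)              # approximately the filler

    # cosine similarity shows the filler is recoverable despite the noise
    cos = np.dot(recovered, filler) / (np.linalg.norm(recovered) * np.linalg.norm(filler))
    print(round(float(cos), 2))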

John R. Anderson reports that the cortico-basal ganglia circuit takes approximately 50 milliseconds for a production rule to fire. He further claims that rule conditions are matched in the striatum to identify candidate rules; the pallidum then selects one of these rules for execution by the thalamus. His claims are supported by experiments that match ACT-R models to data from fMRI brain scans of subjects performing specific tasks, e.g. solving simple algebra problems. For more details, see:

 http://act-r.psy.cmu.edu/about/
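
As a loose illustration of that recognise-act cycle (the rules below are invented placeholders, not an ACT-R model): every rule whose conditions match the current buffer contents is a candidate, one candidate is selected, and its actions update the buffers, with each pass corresponding to roughly one 50 ms cycle.

    # Toy production cycle: match candidate rules against the goal buffer,
    # select one, and apply its actions until the goal is done.
    buffers = {"goal": {"isa": "count", "current": 2}}

    rules = [
        {"name": "increment",
         "condition": lambda b: b["goal"]["isa"] == "count" and b["goal"]["current"] < 5,
         "action": lambda b: b["goal"].update(current=b["goal"]["current"] + 1)},
        {"name": "stop",
         "condition": lambda b: b["goal"]["isa"] == "count" and b["goal"]["current"] >= 5,
         "action": lambda b: b["goal"].update(isa="done")},
    ]

    while buffers["goal"]["isa"] != "done":
        candidates = [r for r in rules if r["condition"](buffers)]   # matching (striatum)
        chosen = candidates[0]                                       # selection (pallidum)
        chosen["action"](buffers)                                    # execution (thalamus)
        print(chosen["name"], buffers["goal"])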

Neurons can also respond to the relative timings of single pulses, e.g. when a fly changes direction on a close encounter with another fly. High-speed cameras show that flies can completely change direction within five milliseconds by rolling their bodies with a slight flick of their wings.

Best regards,
Dave

> On 22 Sep 2020, at 03:02, Paola Di Maio <paoladimaio10@gmail.com> wrote:
> 
> Dave and all
> please let this answer not be lost! If you post it somewhere meaningful, it can be referenced
> for the future, perhaps with sources/references
> are you saying chunks are neurosymbolic representations?
> 
> 
> On Mon, Sep 21, 2020 at 11:32 PM Dave Raggett <dsr@w3.org> wrote:
> Hi Amélie,
> 
> Thanks for the pointer.  Chunks are a data structure for representing n-ary relationships, and are a common approach in Cognitive Science, although the precise details may vary across projects. Each chunk is a collection of properties that reference other chunks. Whilst a minimalist approach to chunks limits property values to chunk identifiers, it makes sense to look at a more flexible approach. This includes support for a small set of data types for literal values, e.g. numbers, booleans, strings and dates, as well as allowing properties to have a list of values rather than being restricted to a single value. That simplifies the authoring of chunks and rules, although it doesn't change the expressive power.
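> 
> As a rough illustration only (the class and property names below are invented for this sketch, not the actual chunks notation), a chunk can be pictured as an identifier plus a type plus a set of properties whose values are literals, lists, or identifiers of other chunks:
> 
>     # Sketch of a chunk: an id, a type, and properties whose values may be
>     # literals, lists of values, or references to other chunks by id.
>     from dataclasses import dataclass, field
> 
>     @dataclass
>     class Chunk:
>         id: str
>         type: str
>         properties: dict = field(default_factory=dict)
> 
>     dog1 = Chunk("dog1", "dog", {"name": "Fido", "age": 4, "likes": ["walks", "bones"]})
>     ann = Chunk("ann", "person", {"name": "Ann", "owns": "dog1"})   # reference by chunk id
>     print(ann.properties["owns"], dog1.properties["likes"])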
> 
> Chunk rules are condition-action rules expressed in terms of chunks.  A rule has a conjunction of one or more conditions that are matched against the buffers for cognitive modules (i.e. cortical regions). Each module has a single buffer that can hold a single chunk.  This follows John Anderson's work on ACT-R, and from a neuroscience perspective, the buffer state represents the concurrent firing pattern of a bundle of nerve fibres connecting to a particular cortical region. See Chris Eliasmith's work on simulating pulsed neural networks and his concept of "semantic pointers".  This also relates to David Marr's three levels of analysis: computational, algorithmic/representational, and implementational. The functional requirements at the computational level are phenomenological and essentially independent of the implementation level.
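> 
> To make the buffer picture concrete (again a sketch with invented names, not ACT-R syntax or the chunks rule language): each module exposes a single buffer holding one chunk, and a rule is a conjunction of conditions over those buffers plus actions that update them.
> 
>     # Two modules, each with a single one-chunk buffer (a dict here), and a
>     # rule whose conditions are tested against the buffers before it acts.
>     modules = {
>         "goal":  {"isa": "greet", "state": "start"},
>         "motor": {"isa": "speech", "utterance": None},
>     }
> 
>     def rule_greet(buffers):
>         goal = buffers["goal"]
>         if goal["isa"] == "greet" and goal["state"] == "start":   # conditions
>             buffers["motor"]["utterance"] = "hello"               # actions
>             goal["state"] = "done"
>             return True
>         return False
> 
>     if rule_greet(modules):
>         print(modules["motor"]["utterance"], modules["goal"]["state"])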
> 
> One major difference between Chunks and Property Graphs or RDF is the combination of symbolic information (graph data) with sub-symbolic information (statistics). This is needed to mimic human memory and reasoning. It can also be related to Web search engines, which track the graph formed by hypertext links and rank pages in ways that model the expected relevance of a given page to a user's query.   In a large cognitive database, memory recall should return the most relevant matches, unlike conventional databases which return all matches.
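> 
> As an illustration of that last point (the scoring scheme below is a toy, loosely in the spirit of ACT-R's base-level activation, not the actual chunks statistics): recall can rank matches by a sub-symbolic strength based on how often and how recently each chunk has been used, and return only the strongest.
> 
>     import math
> 
>     # Strength grows with frequency and recency of use; recall returns the
>     # most relevant match rather than all matches.
>     def strength(uses, now, decay=0.5):
>         return math.log(sum((now - t) ** -decay for t in uses))
> 
>     memory = {
>         "dog1": [1.0, 5.0, 9.0],    # times at which each chunk was recalled
>         "dog2": [2.0],
>     }
>     best = max(memory, key=lambda cid: strength(memory[cid], now=10.0))
>     print(best)                     # -> dog1, the stronger of the two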
> 
> Another distinction is between formal semantics and operational semantics. RDF and OWL are founded on description logics and are part of the Aristotelian tradition in which formal rules are used to deduce the logical implications of the assumed facts.  Despite its mathematical appeal, a reductionist approach to formal semantics isn't a good fit to the everyday semantics of human language, where concepts are informal and context dependent, as well as subject to a lack of certainty and completeness. The meanings of words are defined in dictionaries in terms of other words and common usage patterns. Moreover, as Philip Johnson-Laird has shown in his work on mental models and human reasoning, we don't rely on logic and probability, but rather reason by thinking about what is possible.
> 
> Chunks and Property Graphs have in common that they rely on operational semantics in terms of the relationship between perception (data input), reasoning and actuation (data output).  Industry is showing a rapid uptake of Property Graphs as compared to RDF, so operational semantics are clearly good enough for many business needs, and allegedly easier to work with than formal approaches. One example that may help clarify this is the statement "most people like ice cream".  This is beyond first-order logic, as it involves counting, statistics, and fuzzy concepts such as "like".  Despite that, it can be readily expressed as a simple graph, and used with rules and graph algorithms for reasoning.
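> 
> For instance (an invented encoding, just to show the idea), the statement can be captured as a node for the "likes" relationship with the quantifier attached as a property, which rules or graph algorithms can then consume:
> 
>     # "most people like ice cream" as a small labelled graph: the statement
>     # is itself a node, with the quantifier recorded as a property.
>     graph = {
>         "stmt1": {"isa": "likes", "subject": "people", "object": "ice cream",
>                   "quantifier": "most"},
>     }
> 
>     def likes(graph, who, what):
>         for node in graph.values():
>             if node["isa"] == "likes" and node["subject"] == who and node["object"] == what:
>                 return node.get("quantifier", "some")
>         return None
> 
>     print(likes(graph, "people", "ice cream"))   # -> most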
> 
> Sorry if this was too long an answer!  :-)
> 
> Best regards,
> Dave
> 
>> On 21 Sep 2020, at 14:04, Amélie Gyrard <amelie.gyrard@trialog.com> wrote:
>> 
>> Hello,
>> Regarding the definition of Cognitive AI, this book can help:
>> Artificial Cognitive Systems – A Primer [Vernon 2015]
>> https://www.amazon.fr/Artificial-Cognitive-Systems-David-Vernon/dp/0262028387
>> If we do not agree with those, we can compare the different definitions and common key phrases/terms.
>> 
>> I remember it explained chunks as well.
>> To me, chunks are similar to rules (from rule-based systems).
>> 
>> 
>> 
>> On Mon, 14 Sep 2020 at 08:01, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>> Ron and all
>> 
>> - since we are educating ourselves :-) - 
>> I wonder if someone may know where the definition COG AI comes from
>> 
>> I first started studying AI around the nineties, and got an MSc in 2000, but we never used this term
>> we used KBS (knowledge-based systems)
>> 
>> here it says cognitive computing came about in 2014:
>> https://cognitivecomputingconsortium.com/definition-of-cognitive-computing/
>> 
>> thank you!
>> 
>> 
>> On Mon, Sep 14, 2020 at 11:04 AM Paola Di Maio <paola.dimaio@gmail.com> wrote:
>> Thank you Ronald for setting this up
>> I should be able to make it
>> 
>> For me, AI has always been cognitive AI - probably because I started learning AI
>> from knowledge-based systems (long ago), so I never felt the necessity to call AI cognitive
>> (I understand that, given the spike in ML, this disambiguation may be useful now).
>> At the same time, I have been practicing all along for thirty years (unlabelled, and unaware perhaps
>> that a discipline was forming).
>> 
>> My suggestion is to try to make the call a bit participatory - make sure that whoever is on the call
>> can contribute to the call agenda and bring their perspective/experience to whatever the agenda goal is
>> 
>> It's good to learn, but "educate" sounds as if people don't know about cog AI already - a bit patronizing perhaps?
>> what about co-learn :-)
>> 
>> I am a constructivist by nature
>> 
>> P
>> 
>> 
>> 
>> On Mon, Sep 14, 2020 at 2:38 AM Ronald Reck <rreck@rrecktek.com> wrote:
>> Hello Cognitive AI Community group,
>> 
>> Our first conference call is scheduled for
>> September 21, 2020 at 1 PM London time.
>> Contact information will be sent out later this week.
>> 
>> The agenda is as follows:
>> 
>> 1. Educate - gentle introduction to the topic of cog-ai
>> 
>> 2. Outreach - Discuss how to extend our reach beyond
>> our current group. We seek to bridge technical
>> clique mindsets, as the topic is interdisciplinary.
>> It involves traditional AI (deep learning),
>> natural language processing, logic, pragmatics,
>> cognitive science, and semantic web.
>> 
>> 3. Use cases - Understand and document business cases,
>> especially around machine & human collaboration. The hope
>> is that this will drive funding.
>> 
>> 4. AI ethics / explainability
>> 
>> As we are still in the early stages, there is
>> much exciting work to be done. We need to consider
>> how to involve different orientations to incubate
>> a paradigm shift, so that future intellectual effort
>> is directed as effectively as possible toward AI's ability
>> to enhance society.
>> 
>> Please feel free to comment or make suggestions!
>> 
>> -Ronald P. Reck
>> 
>> 
>> 
>> 
>> -- 
>>  Amelie GYRARD
>> TRIALOG SAS, 25 rue du Général Foy, 75008 PARIS
>> ✆ +33 1 44 70 61 25
>> ✉ amelie.gyrard@trialog.com
>> www.trialog.com
> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
> W3C Data Activity Lead & W3C champion for the Web of things 
> 
> 
> 
> 

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Tuesday, 22 September 2020 09:17:54 UTC