Re: the intersection between AIKR and COGAI

Hello,

To start with, it might be useful to explore 'Society of Mind
<http://aurellem.org/society-of-mind/index.html>' and 'Soar' as points of
extension.

40 years of cognitive architectures
<https://link.springer.com/content/pdf/10.1007/s10462-018-9646-y.pdf>

Recently, Project Debater
<https://research.ibm.com/interactive/project-debater/> also came onto the
scene, although it is not quite as rigorous in cognition or KR.

Thanks,

Adeel

On Fri, 28 Oct 2022 at 02:05, Paola Di Maio <paoladimaio10@gmail.com> wrote:

> Thank you all for contributing to the discussion
>
> The topic is too vast - Dave, I am not worried about whether we agree or
> not; the universe is big enough.
>
> To start with, I am concerned whether we are talking about the same thing
> at all. The expression "human-level intelligence" is often used to
> describe neural networks, but that is quite a ridiculous comparison. If a
> neural network is supposed to mimic human-level intelligence, then we
> should be able to ask it: how many fingers do humans have?
> But this machine is not designed to answer questions, nor to have this
> level of knowledge about human anatomy. A neural network is not AI in
> that sense:
> it fetches some images and mixes them without any understanding of what
> they are,
> and the process of which images it has used, why, and what rationale was
> followed for the mixing is not even described; it is probabilistic. Go figure.
>
> Hey, I am not trying to diminish the greatness of the creative neural
> network; it is great work and it is great fun. But a) it is not an artist:
> it does not create something from scratch; b) it is not really intelligent,
> honestly. Try to have a conversation with a NN.
>
> This is what KR does: it helps us to understand what things are and how
> they work.
> It also helps us to understand if something is passed off for what it is
> not (evaluation).
> This is why even neural networks require KR: without it, we don't know
> what a system is supposed to do, why and how, and whether it does what it
> is supposed to do.
>
> Neural networks still have a role to play in some computations.
>
> * DR Knowledge representation in neural networks is not transparent, *
>> *PDM I'd say that it is either lacking or completely random*
>>
>>
>> DR Neural networks definitely capture knowledge, as is evidenced by their
>> capabilities, so I would disagree with you there.
>>
>
> PDM  Capturing knowledge is not knowledge representation. In AI,
> capturing knowledge is only one step; the categorization of knowledge is
> necessary for reasoning.
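>
> To make that distinction concrete, here is a minimal Python sketch (the
> facts and names are illustrative, not from any particular system) of the
> difference between captured and represented knowledge:
>
>     # Captured knowledge: an opaque learned vector. It encodes something,
>     # but offers no categories and no explicit reasoning steps.
>     embedding_of_hand = [0.12, -0.98, 0.33]
>
>     # Represented knowledge: categorized facts that a reasoner can chain,
>     # e.g. to answer "how many fingers do humans have?" with a trace.
>     facts = {
>         ("Human", "handCount", 2),
>         ("Hand", "fingerCount", 5),
>     }
>
>     def lookup(subject, predicate):
>         """Return the object of the first fact matching subject/predicate."""
>         for s, p, o in facts:
>             if s == subject and p == predicate:
>                 return o
>         return None
>
>     # Explicit reasoning step: 2 hands per human x 5 fingers per hand.
>     total = lookup("Human", "handCount") * lookup("Hand", "fingerCount")
>     print("A human has", total, "fingers")  # prints: A human has 10 fingers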
>
>> *We are used to assessing human knowledge via examinations, and I don’t
>> see why we can’t adapt this to assessing artificial minds *
>> because assessment is very expensive, with varying degrees of
>> effectiveness, and requires skills and a process - it may not be feasible
>> to test or evaluate AI once it is embedded
>>
>>
>> We will develop the assessment framework as we evolve and depend upon AI
>> systems. For instance, we would want to test a vision system to see if it
>> can robustly perceive its target environment in a wide variety of
>> conditions. We aren’t there yet for the vision systems in self-driving cars!
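>>
>> As a minimal sketch of what such an assessment harness might look like
>> (the names and the threshold are illustrative assumptions, not a real API):
>>
>>     # Hypothetical harness: run a perception function over labelled test
>>     # cases grouped by condition and report per-condition robustness.
>>     def assess(perceive, cases_by_condition, threshold=0.99):
>>         """perceive: image -> label; cases_by_condition: condition -> [(image, truth)]."""
>>         report = {}
>>         for condition, cases in cases_by_condition.items():
>>             correct = sum(perceive(img) == truth for img, truth in cases)
>>             accuracy = correct / len(cases)
>>             report[condition] = (accuracy, accuracy >= threshold)
>>         return report
>>
>>     # Deploy only if every condition passes; a result such as
>>     # {"daylight": (0.998, True), "rain": (0.91, False)} means: not ready.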
>>
>> Where I think we agree is that a level of transparency of reasoning is
>> needed for systems that make decisions that we want to rely on.  Cognitive
>> agents should be able to explain themselves in ways that make sense to
>> their users; for instance, a self-driving car braked suddenly when it
>> perceived a child running out from behind a parked car.  We are less
>> interested in the pixel processing involved, and more interested in whether
>> the perception is robust, i.e. the car can reliably distinguish a real
>> child from a piece of newspaper blowing across the road where the newspaper
>> is showing a picture of a child.
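>>
>> A sketch in Python of the kind of user-level explanation record such an
>> agent might produce (the fields and values here are hypothetical):
>>
>>     from dataclasses import dataclass
>>
>>     @dataclass
>>     class Explanation:
>>         percept: str        # what the agent perceived
>>         confidence: float   # how robust the perception was
>>         evidence: list      # user-level cues, not raw pixel data
>>         action: str         # the decision taken
>>
>>     braking = Explanation(
>>         percept="child running out from behind a parked car",
>>         confidence=0.97,
>>         evidence=["3D motion consistent with a running child",
>>                   "not flat and fluttering like a blowing newspaper"],
>>         action="brake immediately",
>>     )
>>     print(f"Braked: perceived a {braking.percept} "
>>           f"(confidence {braking.confidence:.0%}).")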
>>
>> It would be a huge mistake to deploy AI when the assessment framework
>> isn’t sufficiently mature.
>>
>> Best regards,
>>
>> Dave Raggett <dsr@w3.org>

Received on Friday, 28 October 2022 10:23:43 UTC