Re: the intersection between AIKR and COGAI

> On 27 Oct 2022, at 12:04, Paola Di Maio <paoladimaio10@gmail.com> wrote:
> 
> Thank you Dave, I hope we can address these issues during a panel discussion
> There is work to be done
> 
> DR Knowledge representation in neural networks is not transparent, 
> PDM I'd say that it is either lacking or completely random

Neural networks definitely capture knowledge, as evidenced by their capabilities, so I would disagree with you there. Where we can agree is that this representation is opaque to direct inspection, as it is diffused across the weights of many connections.

> DR We are used to assessing human knowledge via examinations, and I don’t see why we can’t adapt this to assessing artificial minds 
> PDM because assessment is very expensive, has varying degrees of effectiveness, and requires skills and a process - it may not be feasible to test/evaluate AI when it is embedded

We will develop the assessment framework as we evolve AI systems and come to depend upon them. For instance, we would want to test a vision system to see if it can robustly perceive its target environment in a wide variety of conditions. We aren’t there yet for the vision systems in self-driving cars!
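To make that concrete, here is a minimal Python sketch of what such an assessment harness might look like: score a perception model separately for each operating condition and flag any condition that falls below a required accuracy. The model interface, the condition names and the pass threshold are assumptions made up for illustration, not a reference to any real test framework.

    # Minimal sketch of a condition-based assessment harness (illustrative only).
    # The predict() interface, condition names and threshold are hypothetical.

    from typing import Callable, Dict, Iterable, Tuple

    Example = Tuple[object, str]   # (input image, expected label)

    def assess_by_condition(
        predict: Callable[[object], str],
        suites: Dict[str, Iterable[Example]],
        required_accuracy: float = 0.99,
    ) -> Dict[str, float]:
        """Score a perception model separately for each operating condition
        (e.g. 'night', 'rain', 'glare') and flag any condition that falls
        below the required accuracy."""
        report = {}
        for condition, examples in suites.items():
            examples = list(examples)
            correct = sum(1 for x, expected in examples if predict(x) == expected)
            accuracy = correct / len(examples) if examples else 0.0
            report[condition] = accuracy
            if accuracy < required_accuracy:
                print(f"FAIL: {condition}: accuracy {accuracy:.3f} "
                      f"below required {required_accuracy:.3f}")
        return report

The point of the sketch is simply that the pass criteria are stated per condition, so a system that does well on average but fails at night or in glare is still flagged.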

Where I think we agree is that a level of transparency of reasoning is needed for systems whose decisions we want to rely on. Cognitive agents should be able to explain themselves in ways that make sense to their users, for instance, a self-driving car that braked suddenly because it perceived a child running out from behind a parked car. We are less interested in the pixel processing involved, and more interested in whether the perception is robust, i.e. whether the car can reliably distinguish a real child from a sheet of newspaper blowing across the road that happens to show a picture of a child.
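As an illustration of the level of explanation I have in mind, here is a small Python sketch of a user-facing explanation record for that braking scenario. It deliberately describes the perceived situation and the chosen action rather than pixels or network weights; all field names and values are assumptions invented for this example, not part of any real system.

    # Illustrative sketch of a user-level explanation record for a driving decision.
    # Field names and values are assumptions chosen for this example, not a real API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DecisionExplanation:
        action: str                      # what the agent did
        trigger: str                     # what perception triggered it
        confidence: float                # how sure the perception system was
        alternatives_considered: List[str] = field(default_factory=list)

        def to_sentence(self) -> str:
            # Render the record in terms a user understands, not pixels or weights.
            return (f"I chose to {self.action} because I perceived {self.trigger} "
                    f"(confidence {self.confidence:.0%}).")

    # Example: the sudden-braking scenario discussed above.
    explanation = DecisionExplanation(
        action="brake hard",
        trigger="a child running out from behind a parked car",
        confidence=0.97,
        alternatives_considered=["swerve left", "continue at current speed"],
    )
    print(explanation.to_sentence())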

It would be a huge mistake to deploy AI when the assessment framework isn’t sufficiently mature.

Best regards,

Dave Raggett <dsr@w3.org>

Received on Thursday, 27 October 2022 16:16:33 UTC