Re: definitions, problem spaces, methods

> On 9 Nov 2022, at 10:41, Paola Di Maio <paola.dimaio@gmail.com> wrote:
> 
> Machine Generated KR would be nice, but it presumes machine has  correct model of the world
> Machine cannot be presumed, as things stand, to have a correct model of the world without KR

You’re ignoring product testing and liability...

The organisation developing the AI would test that it adequately serves the needs it was designed for, as well as meeting applicable regulatory standards. I presume that when you get into a car, you assume it is safe enough to travel in. The same will apply to AI products, e.g. determining who is deemed liable when a self-driving car is involved in an accident.

> to verify the quality/validity of this machine generated KR you would need to develop a more intelligent AI than the AI that generated the KR, and ultimately some very good systems engineers to take responsibility for the overall outcomes

Bogus.

I recommend looking at the proposed EU regulations around ethical and responsible AI. See, e.g.

 https://www.bcg.com/publications/2022/acting-responsibly-in-tight-ai-regulation-era


Dave Raggett <dsr@w3.org>

Received on Wednesday, 9 November 2022 11:01:41 UTC