- From: Dave Raggett <dsr@w3.org>
- Date: Fri, 22 Sep 2023 09:50:32 +0100
- To: Paola Di Maio <paoladimaio10@gmail.com>
- Cc: W3C AIKR CG <public-aikr@w3.org>
Received on Friday, 22 September 2023 08:50:47 UTC
Just to clarify the following:

> On 20 Sep 2023, at 16:03, Dave Raggett <dsr@w3.org> wrote:
>
> …
> I expect that this will be much easier as we enable neural networks to support continual learning and reflective cognition, which are impractical for current systems. That entails some level of sentience.

By sentience, I mean that the AI system can utilise models of its past, present and potential futures, along with models of its goals, performance, likes, dislikes, etc., as well as those of its users. Such likes and dislikes correspond to the values we expect from generative AI agents, e.g. we don't want them to give racist, sexist or inflammatory responses, including instructions for making bombs, etc. We also want AI agents to be unambiguously artificial agents, and not to be confused with humans.

Dave Raggett <dsr@w3.org>