- From: Ronald Reck <rreck@rrecktek.com>
- Date: Thu, 08 Aug 2024 16:36:43 -0400 (EDT)
- To: "Dave Raggett" <dsr@w3.org>
- CC: "Timothy Holborn" <timothy.holborn@gmail.com>, "Milton Ponson" <rwiciamsd@gmail.com>, "W3C AIKR CG" <public-aikr@w3.org>, "public-cogai" <public-cogai@w3.org>
+1

On Thu, 8 Aug 2024 18:08:59 +0100, Dave Raggett <dsr@w3.org> wrote:

> > On 8 Aug 2024, at 17:49, Timothy Holborn <timothy.holborn@gmail.com> wrote:
> >
> > I don't think it's safe to work on consciousness tech. Good application development seems to get shut down, whilst the opposite appears to be the case for exploitative commodification use cases.
>
> I don’t agree, based upon a different conceptualisation of what it might mean for an AI system to be sentient, i.e. a system that is aware of its environment, goals and performance. Such systems need to perceive their environment, remember the past, and be able to reflect on how well they are doing with respect to their goals when deciding on their actions.
>
> That is pretty concrete in respect to technical requirements. It is also safe in respect to the limitations of AI systems to grow their capabilities. Good enough AI systems won’t need huge resources, as they will be sufficient for the tasks they are designed for, just as a nurse working in a hospital doesn’t need Ph.D.-level knowledge of biochemistry.
>
> Dave Raggett <dsr@w3.org>

Ronald P. Reck
http://www.rrecktek.com - http://www.ronaldreck.com
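The quoted description of a sentient system as one that perceives its environment, remembers the past, and reflects on goal progress before acting can be sketched as a minimal sense-remember-reflect-act loop. This is purely an illustrative sketch of that idea; the class, method names, and numeric thresholds below are assumptions of this example, not anything specified in the thread.

```python
from collections import deque

class ReflectiveAgent:
    """Illustrative sketch: an agent that perceives its environment,
    remembers the past, and reflects on progress toward a goal
    before deciding on an action. All names here are hypothetical."""

    def __init__(self, goal: float, memory_size: int = 10):
        self.goal = goal                         # target value the agent aims for
        self.memory = deque(maxlen=memory_size)  # bounded memory of past observations

    def perceive(self, observation: float) -> None:
        # Remember the latest reading from the environment.
        self.memory.append(observation)

    def reflect(self) -> float:
        # Reflect on performance: how far is the latest observation from the goal?
        if not self.memory:
            return float("inf")
        return self.goal - self.memory[-1]

    def act(self) -> str:
        # Decide on an action based on reflection, not on raw input alone.
        error = self.reflect()
        if abs(error) < 0.5:
            return "hold"
        return "increase" if error > 0 else "decrease"


agent = ReflectiveAgent(goal=20.0)
agent.perceive(18.0)
print(agent.act())  # prints "increase": observation is below the goal
agent.perceive(20.2)
print(agent.act())  # prints "hold": within tolerance of the goal
```

The point of the sketch is the modest resource footprint: a bounded memory and a simple goal comparison are "good enough" for this task, echoing the argument that such systems need not grow their capabilities beyond what their tasks require.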
Received on Thursday, 8 August 2024 20:36:49 UTC