Re: Why LLMs are bound to fail

> On 8 Aug 2024, at 17:49, Timothy Holborn <timothy.holborn@gmail.com> wrote:
> 
> I don't think it's safe to work on consciousness tech.  Good application development seems to get shut down, whilst the opposite appears to be the case for exploitative commodification use cases. 

I don’t agree, based upon a different conceptualisation of what it might mean for an AI system to be sentient, i.e. a system that is aware of its environment, goals and performance. Such systems need to perceive their environment, remember the past, and be able to reflect on how well they are doing with respect to their goals when deciding on their actions.
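As a toy sketch of what that perceive / remember / reflect / act loop might look like, here is a minimal Python agent. All names here (ReflectiveAgent, perceive, reflect, act) are illustrative assumptions, not drawn from any existing system:

```python
from collections import deque

class ReflectiveAgent:
    """Toy perceive-remember-reflect-act loop, purely illustrative."""

    def __init__(self, goal, memory_size=100):
        self.goal = goal                         # the value the agent tries to reach
        self.memory = deque(maxlen=memory_size)  # bounded memory of past observations

    def perceive(self, observation):
        # Remember the past: store the latest observation.
        self.memory.append(observation)

    def reflect(self):
        # Assess recent performance against the goal (mean absolute error
        # over the last ten observations; smaller is better).
        if not self.memory:
            return None
        recent = list(self.memory)[-10:]
        return sum(abs(self.goal - o) for o in recent) / len(recent)

    def act(self):
        # Let reflection inform action: take large steps toward the goal
        # while recent error is high, small steps once performance is good.
        error = self.reflect()
        if error is None:
            return 0.0
        step = 1.0 if error > 1.0 else 0.1
        return step if self.memory[-1] < self.goal else -step

agent = ReflectiveAgent(goal=10.0)
state = 0.0
for _ in range(50):
    agent.perceive(state)
    state += agent.act()
```

The point of the sketch is that nothing in it requires huge resources: bounded memory and a simple self-assessment are enough for the agent to converge on and hold its goal.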

That is pretty concrete with respect to technical requirements. It is also safe with respect to the limitations on AI systems growing their capabilities. Good enough AI systems won’t need huge resources, as they will be sufficient for the tasks they are designed for, just as a nurse working in a hospital doesn’t need Ph.D.-level knowledge of biochemistry.

Dave Raggett <dsr@w3.org>

Received on Thursday, 8 August 2024 17:09:13 UTC