- From: Owen Ambur <owen.ambur@verizon.net>
- Date: Thu, 8 Aug 2024 20:33:02 +0000 (UTC)
- To: Timothy Holborn <timothy.holborn@gmail.com>, Dave Raggett <dsr@w3.org>
- Cc: Milton Ponson <rwiciamsd@gmail.com>, W3C AIKR CG <public-aikr@w3.org>, public-cogai <public-cogai@w3.org>
- Message-ID: <496500566.5571296.1723149182680@mail.yahoo.com>
I'm with Dave on this score. What I'd add is that we human beings should: a) help Augmented Intelligence (AI) agents do a better job of helping us achieve our objectives by rendering our plans in an open, standard, machine-readable format like StratML, and b) expect them to return the favor by publishing their results in such a format, thereby enabling a virtuous cycle of ever-improving performance. From my perspective, failure to do as much is an example of artificial ignorance, and if we tolerate it, we'll have no one to blame but ourselves.

I also agree that "good enough" systems won't need huge resources, and to minimize such waste, it will be good if politics and government can be kept out of the process to the greatest degree possible. Here's what ChatGPT has had to say about that: https://www.linkedin.com/pulse/ai-politics-free-life-owen-ambur-fvs8e/

See also https://www.linkedin.com/pulse/consciously-connected-communities-owen-ambur and perhaps https://connectedcommunity.net/ & https://search.aboutthem.info/ as well.

Owen Ambur
https://www.linkedin.com/in/owenambur/

On Thursday, August 8, 2024 at 01:09:21 PM EDT, Dave Raggett <dsr@w3.org> wrote:

On 8 Aug 2024, at 17:49, Timothy Holborn <timothy.holborn@gmail.com> wrote:

I don't think it's safe to work on consciousness tech. Good application development seems to get shut down, whilst the opposite appears to be the case for exploitative commodification use cases.

I don't agree, based upon a different conceptualisation of what it might mean for an AI system to be sentient, i.e. a system that is aware of its environment, goals and performance. Such systems need to perceive their environment, remember the past, and be able to reflect on how well they are doing with respect to their goals when deciding on their actions. That is pretty concrete in respect of technical requirements. It is also safe in respect of the limitations of AI systems to grow their capabilities.
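As a rough illustration of the machine-readable plans mentioned above, here is a minimal sketch loosely patterned on StratML's name/goal/objective structure, built with Python's standard library. The element names and content are illustrative assumptions only, not a schema-valid StratML document:

```python
import xml.etree.ElementTree as ET

# Build a minimal machine-readable plan, loosely patterned on
# StratML's goal/objective structure (illustrative, not schema-valid).
plan = ET.Element("StrategicPlan")
ET.SubElement(plan, "Name").text = "Example Plan"

goal = ET.SubElement(plan, "Goal")
ET.SubElement(goal, "Name").text = "Improve AI-human collaboration"

objective = ET.SubElement(goal, "Objective")
ET.SubElement(objective, "Description").text = (
    "Publish plans and results in an open, standard format."
)

# Serialize to a string an AI agent (or anything else) could consume.
xml_text = ET.tostring(plan, encoding="unicode")
print(xml_text)
```

The same round trip works in reverse: an agent could parse such a document with `ET.fromstring` and report results against each objective in the same format.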
Good enough AI systems won’t need huge resources as they will be sufficient for the tasks they are designed for, just as a nurse working in a hospital doesn’t need Ph.D level knowledge of biochemistry. Dave Raggett <dsr@w3.org>
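Dave's description of such a system, one that perceives its environment, remembers the past, and reflects on its performance against its goals before acting, can be sketched as a minimal agent loop. All names and the numeric goal here are illustrative assumptions, not an existing framework:

```python
from collections import deque

class ReflectiveAgent:
    """Toy agent that perceives, remembers, and reflects on goal progress.

    Illustrative sketch only: the class and method names are assumptions
    made for this example, not an existing API.
    """

    def __init__(self, goal_value, memory_size=10):
        self.goal_value = goal_value             # target the agent aims for
        self.memory = deque(maxlen=memory_size)  # bounded memory of the past

    def perceive(self, observation):
        # Remember what was observed in the environment.
        self.memory.append(observation)

    def reflect(self):
        # Compare the most recent remembered observation against the goal.
        if not self.memory:
            return None
        return self.goal_value - self.memory[-1]

    def act(self):
        # Decide on an action based on reflection, per Dave's description.
        gap = self.reflect()
        if gap is None:
            return "explore"
        return "adjust" if gap != 0 else "maintain"

agent = ReflectiveAgent(goal_value=10)
agent.perceive(7)
action = agent.act()  # gap of 3, so the agent adjusts
```

Note the bounded memory: a "good enough" agent in this sense needs only enough state to assess its own performance, not open-ended resources.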
Received on Thursday, 8 August 2024 20:33:08 UTC