Re: Why LLMs are bound to fail

IMHO, large language models are a different field to consciousness science, in
many significant ways.

FWIW, I've got a workstation that I'm still setting up: 2x A4500, 2x
A4000 (72 GB VRAM total), dual Xeon 4214R, 256 GB RAM at the moment (might
upgrade it a bit). Its purpose is to support LLM-related work.

But that's different to consciousness work.

I'm hoping to create AI art, perhaps entire worlds, from the text of a
book or similar.

But building matrices is different to mitigating the other risks associated
with artificial insanity, at a personal, social, economic or international
level.  Building worlds for environments where people prefer fictions
seems both easier and, in many ways, better in context than boxing in the
definition of non-fictions whilst believing there's enough hygiene that
it'll still support "do not harm".

Timo.

On Fri, 9 Aug 2024, 6:33 am Owen Ambur, <owen.ambur@verizon.net> wrote:

> I'm with Dave on this score.
>
> What I'd add is that we human beings should:
>
> a) help Augmented Intelligence (AI) agents do a better job of helping us
> achieve our objectives by rendering our plans in an open, standard,
> machine-readable format like StratML, and
>
> b) expect them to return the favor by publishing their results in such a
> format, thereby enabling a virtuous cycle of ever-improving performance.
>
>
> From my perspective, failure to do as much is an example of artificial
> ignorance
> <https://www.linkedin.com/pulse/artificial-ignorance-owen-ambur/>, and if
> we tolerate it, we'll have no one to blame but ourselves.
>
> I also agree that "good enough" systems won't need huge resources, and to
> minimize such waste, it will be good if politics and government can be kept
> out of the process to the greatest degree possible.  Here's what ChatGPT
> has had to say about that:
> https://www.linkedin.com/pulse/ai-politics-free-life-owen-ambur-fvs8e/
>
> See also
> https://www.linkedin.com/pulse/consciously-connected-communities-owen-ambur and
> perhaps https://connectedcommunity.net/ & https://search.aboutthem.info/ as
> well.
>
> Owen Ambur
> https://www.linkedin.com/in/owenambur/
>
>
> On Thursday, August 8, 2024 at 01:09:21 PM EDT, Dave Raggett <dsr@w3.org>
> wrote:
>
>
>
> On 8 Aug 2024, at 17:49, Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
> I don't think it's safe to work on consciousness tech.  Good application
> development seems to get shut down, whilst the opposite appears to be the
> case for exploitative commodification use cases.
>
>
> I don’t agree, based upon a different conceptualisation of what it might
> mean for an AI system to be sentient, i.e. a system that is aware of its
> environment, goals and performance. Such systems need to perceive their
> environment, remember the past and be able to reflect on how well they are
> doing in respect to their goals when it comes to deciding on their actions.
>
> That is pretty concrete in respect to technical requirements. It is also
> safe in respect to the limitations of AI systems to grow their
> capabilities. Good enough AI systems won’t need huge resources as they will
> be sufficient for the tasks they are designed for, just as a nurse working
> in a hospital doesn’t need Ph.D level knowledge of biochemistry.
>
> Dave Raggett <dsr@w3.org>
>
>
>
>

Received on Thursday, 8 August 2024 20:52:47 UTC