Re: Open letter urging to pause AI

Hello everyone,

It is a mix of pleasure and intrigue to live in a time when the market
leader of a given niche is literally asking the *world* to slow production
for a while, even knowing that the market is thirsty for its products.

IMHO, it is as if, in the very middle of the first industrial revolution,
the best-selling textile makers had asked Lyon to stop producing machines.
Intriguing.

Personally, I cannot tell whether it is hype, actually intended to achieve
the opposite of its stated intention, given the social and psychological
disruption altering our ways of thinking and being; or whether it truly
projects fears of enormous dislocations in people's professional lives.
After all, perhaps AI will change our perception of what is essential in
our currencies of exchange.

But, as has been stated:

> [...]. Hence, we may struggle to maintain civil order and discourse during
> the disruptions to job markets ahead of us, all while being flooded with an
> unprecedented amount of content (e.g., fake videos) with super-human
> social/cognitive engineering skills.
>

We, as humanity, probably have higher-priority problems to deal with, if
only we could redirect some of our efforts toward the right spots.

[ ]'s .


Em qui., 30 de mar. de 2023 às 19:01, adasal <adam.saltiel@gmail.com>
escreveu:

> I'm sorry.
> Let's not fight.
>
> I almost never post here, but the topic caught my eye.
>
> I have just had the experience I refer to with GPT-4 and perplexity.ai.
>
> But I tend to describe things in stripped-down terms, and that is where I
> am now.
> I know I'm not providing context from the usual discussions here.
>
> I am thinking simplistically in game theoretic terms.
>
> Adeel, are you responding to that?
>
> Yes, Dan works for Google.
>
> From the outside, I do not know what that demands of a person.
>
> From the outside, I can only talk about Google, the company as I know it,
> or, if I studied it, as an economic unit. This is the same with all the big
> players.
>
> Adam
>
> Adam Saltiel
>
>
>
> On Thu, Mar 30, 2023 at 9:19 PM Adeel <aahmad1811@gmail.com> wrote:
>
>> How is that offensive? Being racist is offensive.
>> I am merely relating to what he said in a previous message.
>>
>> I would refer the same thing back to you:
>> Please review W3C's Code of Ethics and Professional
>> Conduct:
>> https://www.w3.org/Consortium/cepc/
>> If you cannot contribute respectfully to the discussion, then please
>> refrain from posting.
>>
>> On Thu, 30 Mar 2023 at 21:14, David Booth <david@dbooth.org> wrote:
>>
>> On 3/30/23 15:59, Adeel wrote:
>> > You can't talk about regulation and compliance in this group, dan
>> > doesn't like it as google doesn't care about those things.
>>
>> That is offensive.  Please review W3C's Code of Ethics and Professional
>> Conduct:
>> https://www.w3.org/Consortium/cepc/
>> If you cannot contribute respectfully to the discussion, then please
>> refrain from posting.
>>
>> Thanks,
>> David Booth
>>
>> >
>> > Thanks,
>> >
>> > Adeel
>> >
>> > On Thu, 30 Mar 2023 at 20:22, adasal <adam.saltiel@gmail.com
>> > <mailto:adam.saltiel@gmail.com>> wrote:
>> >
>> >     It's out of the bottle and will be played with.
>> >
>> >     " .. being run on consumer laptops. And that’s not even thinking
>> >     about state level actors .. "
>> >     Large resources will be thrown at this.
>> >
>> >     It was a long time ago that Henry Story (of course, many others too,
>> >     but more in this context) pointed out that, where truth is
>> >     concerned, competing logical deductions cannot decide between
>> >     themselves.
>> >
>> >     I just had this experience, and the details are not important.
>> >
>> >
>> >     The point is that, in this case, I asked the same question to GPT-4
>> >     and perplexity.ai <http://perplexity.ai>, and they gave different
>> >     answers.
>> >     Since it was something I wanted to know the answer to, and it was
>> >     sufficiently complex, I was not in a position to judge which was
>> >     correct.
>> >
>> >     One avenue is petitioning for funding for experts, i.e. researchers
>> >     and university professors.
>> >     Although it is absurd to think they would have time to mediate all
>> >     the obscure information, sorting correct from incorrect; and, of
>> >     course, a person can be wrong too.
>> >
>> >     Then there is the issue of attribution ...
>> >     At the moment, perplexity.ai <http://perplexity.ai> has a word salad
>> >     of dubious recent publications; GPT-4 has a "knowledge cutoff for
>> >     my training data is September 2021". It finds it difficult to reason
>> >     about time in any case, but these are details.
>> >
>> >     Others in this email thread have cast doubt on Musk's motivation
>> >     (give it time to catch up) and Microsoft (didn't care for any
>> >     consequences by jumping in now).
>> >
>> >     So there are issues of funding and control -- calling on the state
>> >     to intervene is appealing to the power next up the hierarchy, but
>> >     can such regulations be effective when administered by the state?
>> >
>> >     That really just leaves us with grassroots education and everyday
>> >     intervention.
>> >
>> >     Best on an important topic,
>> >
>> >
>> >     Adam
>> >
>> >     Adam Saltiel
>> >
>> >
>> >
>> >     On Wed, Mar 29, 2023 at 9:39 PM Martin Hepp <mfhepp@gmail.com
>> >     <mailto:mfhepp@gmail.com>> wrote:
>> >
>> >         I could not agree more with Dan - neither a “non-proliferation”
>> >         agreement nor a moratorium on AI advancements is realistic; it
>> >         is even less realistic than it was with nukes. We barely managed
>> >         to keep the number of crazy people with access to nukes under
>> >         control, but to build your next generation of AI you will not
>> >         need anything but a brain, programming skills, and commodity
>> >         resources. Machines will not take over humankind, but machines
>> >         can add giant levers to single individuals or groups.
>> >
>> >         Best wishes
>> >         Martin
>> >
>> >         ---------------------------------------
>> >         martin hepp
>> >         www: https://www.heppnetz.de/ <https://www.heppnetz.de/>
>> >
>> >
>> >>         On 29. Mar 2023, at 22:30, Dan Brickley <danbri@danbri.org
>> >>         <mailto:danbri@danbri.org>> wrote:
>> >>
>> >>
>> >>
>> >>         On Wed, 29 Mar 2023 at 20:51, ProjectParadigm-ICT-Program
>> >>         <metadataportals@yahoo.com <mailto:metadataportals@yahoo.com>>
>> >>         wrote:
>> >>
>> >>             This letter speaks for itself.
>> >>
>> >>
>> https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
>> >>
>> >>
>> >>             I may not want to put it as bluntly as Elon Musk, who
>> >>             cautioned against unregulated AI, which he called "more
>> >>             dangerous than nukes", but when Nick Bostrom, the late
>> >>             Stephen Hawking, and dozens, no, hundreds of international
>> >>             experts, scientists, and industry leaders start ringing the
>> >>             bell, it is time to pause and reflect.
>> >>
>> >>             Every aspect of daily life, every industry, education
>> >>             systems, academia and even our cognitive rights will be
>> >>             impacted.
>> >>
>> >>             I would also like to point out that some science fiction
>> >>             authors have done a great job on very accurately
>> >>             predicting a dystopian future ruled by technology, perhaps
>> >>             the greatest of them all being Philip K. Dick.
>> >>
>> >>             But there are dozens of other authors as well, and they all
>> >>             give a fairly good impression of what awaits us if we do
>> >>             not regulate and control the further development of AI now.
>> >>
>> >>
>> >>         I have a *lot* of worries, but the genie is out of the bottle.
>> >>
>> >>         It’s 60 lines of code for the basics,
>> >>         https://jaykmody.com/blog/gpt-from-scratch/
>> >>         <https://jaykmody.com/blog/gpt-from-scratch/>
>> >>
>> >>         Facebook’s Llama model is out there, and being run on consumer
>> >>         laptops. And that’s not even thinking about state level
>> >>         actors, or how such regulation might be worded.
>> >>
>> >>         For my part (and a very personal opinion), I think the way
>> >>         forward is focusing on education, sensible implementation
>> >>         guidelines, and trying to make sure the good outweighs the bad.
>> >>
>> >>         Dan
>> >>
>> >>
>> >>
>> >>
>> >>             Milton Ponson
>> >>             GSM: +297 747 8280
>> >>             PO Box 1154, Oranjestad
>> >>             Aruba, Dutch Caribbean
>> >>             Project Paradigm: Bringing the ICT tools for sustainable
>> >>             development to all stakeholders worldwide through
>> >>             collaborative research on applied mathematics, advanced
>> >>             modeling, software and standards development
>> >>
>>
>>

-- 
Gabriel Lopes
*Interoperability as jam sessions!*
*Each system emanating the music that crosses itself, instrumentalizing
scores and ranges...*
*... of Resonance, vibrations, information, data, symbols, ..., Notes.*

*How interoperable are we with the Music the World continuously offers to
our senses?*
*Maybe it depends on our foundations...?*

Received on Thursday, 30 March 2023 22:25:12 UTC