Re: Open letter urging to pause AI

How is that offensive? Being racist is offensive.
I am merely referring to what he said in a previous message.

I would direct the same advice back to you:
Please review W3C's Code of Ethics and Professional
Conduct:
https://www.w3.org/Consortium/cepc/
If you cannot contribute respectfully to the discussion, then please
refrain from posting.

On Thu, 30 Mar 2023 at 21:14, David Booth <david@dbooth.org> wrote:

> On 3/30/23 15:59, Adeel wrote:
> > You can't talk about regulation and compliance in this group, dan
> > doesn't like it as google doesn't care about those things.
>
> That is offensive.  Please review W3C's Code of Ethics and Professional
> Conduct:
> https://www.w3.org/Consortium/cepc/
> If you cannot contribute respectfully to the discussion, then please
> refrain from posting.
>
> Thanks,
> David Booth
>
> >
> > Thanks,
> >
> > Adeel
> >
> > On Thu, 30 Mar 2023 at 20:22, adasal <adam.saltiel@gmail.com> wrote:
> >
> >     It's out of the bottle and will be played with.
> >
> >     " .. being run on consumer laptops. And that’s not even thinking
> >     about state level actors .. "
> >     Large resources will be thrown at this.
> >
> >     It was a long time ago that Henry Story (of course, many others
> >     too, but more in this context) pointed out that, where truth is
> >     concerned, competing logical deductions cannot decide between
> >     themselves.
> >
> >     I just had this experience, and the details are not important.
> >
> >
> >     The point is that, in this case, I asked the same question to GPT-4
> >     and perplexity.ai, and they gave different answers.
> >     Since it was something I wanted to know the answer to, and it was
> >     sufficiently complex, I was not in a position to judge which was
> >     correct.
> >
> >     One option is petitioning for funding for experts, i.e.
> >     researchers and university professors, although it is absurd to
> >     think they would have time to mediate between all the obscure
> >     information, sorting correct from incorrect; and, of course, a
> >     person can be wrong too.
> >
> >     Then there is the issue of attribution ...
> >     At the moment, perplexity.ai has a word salad of dubious recent
> >     publications; GPT-4 reports that the "knowledge cutoff for my
> >     training data is September 2021". It finds it difficult to reason
> >     about time in any case, but these are details.
> >
> >     Others in this email thread have cast doubt on the motivations of
> >     Musk (give it time to catch up) and Microsoft (not caring about
> >     the consequences of jumping in now).
> >
> >     So there are issues of funding and control -- calling on the state
> >     to intervene is appealing to the power next up the hierarchy, but
> >     can such regulations be effective when administered by the state?
> >
> >     That really just leaves us with grassroots education and everyday
> >     intervention.
> >
> >     Best on an important topic,
> >
> >
> >     Adam
> >
> >     Adam Saltiel
> >
> >
> >
> >     On Wed, Mar 29, 2023 at 9:39 PM Martin Hepp <mfhepp@gmail.com> wrote:
> >
> >         I could not agree more with Dan - a “non-proliferation”
> >         agreement or a moratorium on AI advancements is simply much
> >         more unrealistic than it was with nukes. We hardly managed to
> >         keep the number of crazy people with access to nukes under
> >         control, but for building your next generation of AI, you will
> >         not need anything but a brain, programming skills, and
> >         commodity resources. Machines will not take over humankind,
> >         but machines can add giant levers to single individuals or
> >         groups.
> >
> >         Best wishes
> >         Martin
> >
> >         ---------------------------------------
> >         martin hepp
> >         www: https://www.heppnetz.de/
> >
> >
> >>         On 29. Mar 2023, at 22:30, Dan Brickley <danbri@danbri.org> wrote:
> >>
> >>
> >>
> >>         On Wed, 29 Mar 2023 at 20:51, ProjectParadigm-ICT-Program
> >>         <metadataportals@yahoo.com> wrote:
> >>
> >>             This letter speaks for itself.
> >>
> >>
> https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
> >>
> >>
> >>             I may not want to put it as bluntly as Elon Musk, who
> >>             cautioned against unregulated AI, which he called "more
> >>             dangerous than nukes", but when Nick Bostrom, the late
> >>             Stephen Hawking, and dozens, no, hundreds of international
> >>             experts, scientists, and industry leaders start ringing
> >>             the bell, it is time to pause and reflect.
> >>
> >>             Every aspect of daily life, every industry, education
> >>             systems, academia and even our cognitive rights will be
> >>             impacted.
> >>
> >>             I would also like to point out that some science fiction
> >>             authors have done a great job of very accurately
> >>             predicting a dystopian future ruled by technology, perhaps
> >>             the greatest of them all being Philip K. Dick.
> >>
> >>             But there are dozens of other authors as well, and they
> >>             all give a fairly good impression of what awaits us if we
> >>             do not regulate and control the further development of AI
> >>             now.
> >>
> >>
> >>         I have a *lot* of worries, but the genie is out of the bottle.
> >>
> >>         It’s 60 lines of code for the basics,
> >>         https://jaykmody.com/blog/gpt-from-scratch/
> >>
> >>         Facebook’s Llama model is out there, and being run on consumer
> >>         laptops. And that’s not even thinking about state level
> >>         actors, or how such regulation might be worded.
> >>
> >>         For my part (and a very personal opinion), I think we should
> >>         focus on education, sensible implementation guidelines, and
> >>         trying to make sure the good outweighs the bad.
> >>
> >>         Dan
> >>
> >>
> >>
> >>
> >>             Milton Ponson
> >>             GSM: +297 747 8280
> >>             PO Box 1154, Oranjestad
> >>             Aruba, Dutch Caribbean
> >>             Project Paradigm: Bringing the ICT tools for sustainable
> >>             development to all stakeholders worldwide through
> >>             collaborative research on applied mathematics, advanced
> >>             modeling, software and standards development
> >>
>

Received on Thursday, 30 March 2023 20:20:06 UTC