Re: Open letter urging to pause AI

Not all AI need be Borg-like[1]. There are different types of thoughtware
that can be made and considered[2].

I don't see a lot of work on decentralised spatio-temporal graph/vector
database techniques to support knowledge fabrics: building first upon
support for all human languages of prayer, as a foundational requirement
for natural-language ontology development; and, in turn, upon personal,
private, human-centric AI agents to support personal ontologies...
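To make the idea concrete, here is a minimal, illustrative sketch in Python of what a per-person spatio-temporal graph/vector store might start from. This is not any existing system; all class and function names are hypothetical, chosen only to show the shape of the data (triple + time + place + embedding) and a similarity query over it:

```python
from dataclasses import dataclass
import math


@dataclass
class Assertion:
    """One spatio-temporal statement: a subject-predicate-object triple
    plus when/where it was recorded and a vector embedding of its text."""
    subject: str
    predicate: str
    obj: str
    timestamp: float   # seconds since epoch
    location: tuple    # (latitude, longitude)
    embedding: list    # dense vector for similarity search


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class PersonalGraphStore:
    """In-memory store for one person's assertions; a decentralised
    fabric would replicate something like this per user/device."""

    def __init__(self):
        self.assertions = []

    def add(self, assertion):
        self.assertions.append(assertion)

    def query(self, query_vec, since=0.0, top_k=3):
        """Nearest assertions by embedding similarity, filtered by time."""
        candidates = [a for a in self.assertions if a.timestamp >= since]
        return sorted(candidates,
                      key=lambda a: cosine(a.embedding, query_vec),
                      reverse=True)[:top_k]
```

A real knowledge fabric would of course federate queries across many such stores and handle language-specific ontologies, rather than keeping one in-memory list; the sketch only shows why graph, vector, and spatio-temporal indexing need to live together.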

Understanding these sorts of complex considerations isn't really part of
what many focus on[3]; but in order to progress the digital transformation
agenda and the related digital compact goals, coupling these large language
models to pre-existing W3C works, so as to provide simple identifiers into
government-operated[4] LLMs, may not be the best way to achieve materially
positive outcomes, which is hard to do[5][6].

Not all AI tech is 'LLM' Transformer-based.

FWIW, I found this Cognitive AI discord group: https://discord.gg/yqaBG5rh4j

An enormous number of human factors have seemingly been set aside; and
remarkable war machines are not necessarily fit for purpose in peacetime.

Cheers,

timothy holborn.


[1] https://lists.w3.org/Archives/Public/public-cogai/2023Mar/0008.html
[2]
https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1r-bo83ImIEjSCmOFFMcT7F79OnCHDOGdkC_g9bOVFZg&font=Default&lang=en&hash_bookmark=true&initial_zoom=4&height=650#event-consciousness-studies

[3] https://play.itu.int/events/category/wsis-forum-2023/2023-03/
[4]
https://play.itu.int/event/wsis-forum-2023-govstack-cio-digital-leaders-forum/

[5]
https://docs.google.com/document/d/1D63FlICIOXcnLx_PYs0ByLYvc1B-BXqhhXGdaW_rIMI/edit

[6]
https://docs.google.com/presentation/d/1cq8rI71tR31vpklPiHcvoAqVC5EeGtD0M05DBedp1nE/edit


On Fri, 31 Mar 2023 at 05:59, Adeel <aahmad1811@gmail.com> wrote:

> Hello,
>
> You can't talk about regulation and compliance in this group, dan doesn't
> like it as google doesn't care about those things.
>
> Thanks,
>
> Adeel
>
> On Thu, 30 Mar 2023 at 20:22, adasal <adam.saltiel@gmail.com> wrote:
>
>> It's out of the bottle and will be played with.
>>
>> " .. being run on consumer laptops. And that’s not even thinking about
>> state level actors .. "
>> Large resources will be thrown at this.
>>
>> It was a long time ago that Henry Story (among many others, but he more
>> in this context) pointed out that, where truth is concerned, competing
>> logical deductions cannot decide between themselves.
>>
>> I just had this experience, and the details are not important.
>>
>>
>> The point is that, in this case, I asked the same question to GPT-4 and
>> perplexity.ai, and they gave different answers.
>> Since it was something I wanted to know the answer to, and it was
>> sufficiently complex, I was not in a position to judge which was correct.
>>
>> One option is petitioning for funding for experts, i.e. researchers and
>> university professors; although it is absurd to think they would have
>> time to mediate between all the obscure information, sorting correct from
>> incorrect, and, of course, a person can be wrong too.
>>
>> Then there is the issue of attribution ...
>> At the moment, perplexity.ai has a word salad of dubious recent
>> publications; GPT-4 has a "knowledge cutoff for my training data is
>> September 2021". It finds it difficult to reason about time in any case,
>> but these are details.
>>
>> Others in this email thread have cast doubt on Musk's motivation (give it
>> time to catch up) and on Microsoft's (not caring about consequences by
>> jumping in now).
>>
>> So there are issues of funding and control -- calling on the state to
>> intervene is appealing to the power next up the hierarchy, but can such
>> regulations be effective when administered by the state?
>>
>> That really just leaves us with grassroots education and everyday
>> intervention.
>>
>> Best on an important topic,
>>
>>
>> Adam
>>
>> Adam Saltiel
>>
>>
>>
>> On Wed, Mar 29, 2023 at 9:39 PM Martin Hepp <mfhepp@gmail.com> wrote:
>>
>>> I could not agree more with Dan - a “non-proliferation” agreement or a
>>> moratorium on AI advancements is simply much more unrealistic than it was
>>> with nukes. We hardly managed to keep the number of crazy people with
>>> access to nukes under control, but for building your next generation of AI,
>>> you will not need anything but brain, programming skills, and commodity
>>> resources. Machines will not take over humankind, but machines can add
>>> giant levers to single individuals or groups.
>>>
>>> Best wishes
>>> Martin
>>>
>>> ---------------------------------------
>>> martin hepp
>>> www:  https://www.heppnetz.de/
>>>
>>>
>>> On 29. Mar 2023, at 22:30, Dan Brickley <danbri@danbri.org> wrote:
>>>
>>>
>>>
>>> On Wed, 29 Mar 2023 at 20:51, ProjectParadigm-ICT-Program <
>>> metadataportals@yahoo.com> wrote:
>>>
>>> This letter speaks for itself.
>>>
>>>
>>> https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
>>>
>>>
>>> I may not want to put it as bluntly as Elon Musk, who cautioned against
>>> unregulated AI which he called "more dangerous than nukes", but when Nick
>>> Bostrom, the late Stephen Hawking, and dozens, no hundreds of international
>>> experts, scientists and industry leaders start ringing the bell, it is time
>>> to pause and reflect.
>>>
>>> Every aspect of daily life, every industry, education systems, academia
>>> and even our cognitive rights will be impacted.
>>>
>>> I would also like to point out that some science fiction authors have
>>> done a great job on very accurately predicting a dystopian future ruled by
>>> technology, perhaps the greatest of them all being Philip K. Dick.
>>>
>>> But there are dozens of other authors as well and they all give a fairly
>>> good impression what awaits us if we do not regulate and control the
>>> further development of AI now.
>>>
>>>
>>> I have a *lot* of worries, but the genie is out of the bottle.
>>>
>>> It’s 60 lines of code for the basics,
>>> https://jaykmody.com/blog/gpt-from-scratch/
>>>
>>> Facebook’s Llama model is out there, and being run on consumer laptops.
>>> And that’s not even thinking about state level actors, or how such
>>> regulation might be worded.
>>>
>>> For my part (and v personal opinion) I think we should focus on
>>> education, sensible implementation guidelines, and trying to make sure
>>> the good outweighs the bad.
>>>
>>> Dan
>>>
>>>
>>>
>>>
>>> Milton Ponson
>>> GSM: +297 747 8280
>>> PO Box 1154, Oranjestad
>>> Aruba, Dutch Caribbean
>>> Project Paradigm: Bringing the ICT tools for sustainable development to
>>> all stakeholders worldwide through collaborative research on applied
>>> mathematics, advanced modeling, software and standards development
>>>
>>>

Received on Thursday, 30 March 2023 20:27:23 UTC