Re: Open letter urging a pause on AI

The common and obvious question most people ask themselves is: what is
knowledge? What is knowing, and how is it placed in the context of society?
These questions press on us now that we are on the cusp of a technology that
seems to know, amidst a flood of people's ideas and opinions delivered through
electronic media (such as this list, of course).
We should also ask ourselves what a lack of knowledge is. What is not knowing,
that is to say, really not knowing?
What role does "not knowing" play?
Adam Saltiel  





On Sun, Apr 2, 2023 at 3:15 AM Melvin Carvalho <melvincarvalho@gmail.com> 
wrote:


On Sat, 1 Apr 2023 at 23:29, Adeel <aahmad1811@gmail.com> wrote:
It is a constructive discussion on general ethics, which is a core competency
in every profession. It should not be left to legal speculation, whether it
concerns the use of transformer models or a code of conduct, nor selectively
defined for others. Practice what you preach: that is the real issue with
ethics these days.
Perhaps calm down a bit and stay on-topic.
As an interesting exercise I ran this thread through ChatGPT, asking: "rate
this statement as whether or not it is passive aggressive on a scale of 1 to 10
and why" -- give it a try if you want to see some interesting results.
What it came up with for passive aggression:
Hugh - 3
Dan - 5
Pat - 4
Adeel - 5, 7
Nathan (control) - 2
Perhaps if run again it would give different numbers.  I expect AI is going to
be integrated into all sorts of tools, so that we'll get flags and pointers in
email before sending, which will change the nature of mailing lists forever!
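If you want to try the exercise programmatically rather than pasting into the
web UI, a minimal sketch with the openai Python client (as it stood in early
2023; the model choice and exact prompt here are just my assumptions) would be
something like:

    import openai

    openai.api_key = "sk-..."  # your own API key goes here

    def rate_passive_aggression(statement):
        """Ask the model to rate one statement for passive aggression, 1-10."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # assumption: any chat-capable model works
            messages=[{
                "role": "user",
                "content": "Rate this statement as whether or not it is "
                           "passive aggressive on a scale of 1 to 10 and why: "
                           + statement,
            }],
        )
        return response.choices[0].message.content

    print(rate_passive_aggression("Perhaps calm down a bit and stay on-topic."))

As noted, the scores will likely vary from run to run.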

As a side note, I chat to new developers about the semantic web all the time.
One recently said to me: "I hope stuff like this can popularize linked data
without the smugness".

This is actually one of the more pleasant, jolly threads.  But we probably can
and will do better as a list, once it becomes commonplace for AI tools to
help us.
It is a technology that has huge implications for the semantic web, because the
semantic web is about meaning.  There are times when technology moves too fast
and has societal implications, so imho the question is legit.  Indeed, in the AI
world this topic has been the big one in the last week, with dozens of different
views.  It's interesting.  Emad has in any case said he thinks gpt v.next will
take 6-9 months to train anyway.
The main limitation I have found is that it can do small tasks reasonably well,
but when you hit the 32k-token context limit, it struggles just as much as we
all do with larger topics or code bases.  Perhaps that will be solved soon by
chaining with plugins, though; we'll see!
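In the meantime the usual workaround is to chunk the input to fit the window
and work piece by piece. A rough sketch using the tiktoken tokenizer library
(the 30,000-token budget is an arbitrary choice of mine, leaving headroom in a
32k window for the prompt and the reply):

    import tiktoken

    def chunk_text(text, max_tokens=30000):
        """Split text into pieces that each fit a per-request token budget."""
        enc = tiktoken.get_encoding("cl100k_base")  # encoding for the GPT-4 family
        tokens = enc.encode(text)
        return [enc.decode(tokens[i:i + max_tokens])
                for i in range(0, len(tokens), max_tokens)]

    # Each chunk can then be processed separately and the results combined,
    # which is roughly what plugin-style chaining would automate.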

On Sat, 1 Apr 2023 at 22:12, Patrick J. Hayes <phayes@ihmc.org> wrote:
Adeel, greetings. 
The code of conduct applies to us all, including of course Dan B. But the point
here is that Dan has not violated that code, whereas you have, first by directly
insulting Dan by implying that his posts are dishonest or motivated by dark or
sinister commercial forces, and then by continuing this thread of thinly veiled
hostility when there is no reason for it. I am not an administrator of this
group, but if I were, you would now be banned from it.
Please find some more constructive way to contribute to the discussion.  
Pat Hayes

On Apr 1, 2023, at 1:22 PM, Adeel <aahmad1811@gmail.com> wrote:  

Hello,

That sounds like legal speculation. Or do you only selectively discriminate
against group members and their backgrounds when you point that out?  Does the
W3C code of conduct not apply to you after 25+ years of being here?
Thanks,

Adeel  

On Thu, 30 Mar 2023 at 23:33, Dan Brickley <danbri@danbri.org> wrote:
On Thu, 30 Mar 2023 at 20:59, Adeel <aahmad1811@gmail.com> wrote:
Hello,  
You can't talk about regulation and compliance in this group; Dan doesn't like
it, as Google doesn't care about those things.
Rather, Google doesn't delegate to me any permission to speak on its behalf on
these matters, unsurprisingly enough. Google is also an organization of many
thousands of employees, with quite some range of views. I choose not to share
several of mine here right now, but I am broadly in favour of sensible laws
where they have some chance of working.
As in many situations it sometimes makes sense to treat a company as a kind of
pretend person, and sometimes to look at it as a more complex system of forces
and interests. Skim
https://www.wired.com/story/inside-google-three-years-misery-happiest-company-tech/ 
 to see some of what bubbles beneath the surface.  
This email list is just an email list. W3C no longer accords it "interest group"
status, as it did from 1999 when I migrated the older RDF-DEV list to W3C to
form the RDF Interest Group on the www-rdf-interest@ list. W3C doesn't officially
listen here even on RDF topics, let alone areas like modern AI and its social
impact, which aren't obviously our core competency.
Having been here 25+ years, I have some instincts about which topics will just
fill up email inboxes with no ultimate impact on or benefit to the world. Right
now, "something must be banned" threads on AI look to me to fall into that
category.
Cheers,  
Dan  
Thanks,  
Adeel  
On Thu, 30 Mar 2023 at 20:22, adasal <adam.saltiel@gmail.com> wrote:
It's out of the bottle and will be played with. 
" .. being run on consumer laptops. And that’s not even thinking about state
level actors .. "  Large resources will be thrown at this.  
It was a long time ago that Henry Story (and of course many others too, but he
did so more in this context) pointed out that, where truth is concerned,
competing logical deductions cannot decide between themselves.
I just had this experience, and the details are not important.  

The point is that, in this case, I asked the same question to GPT-4 and 
perplexity.ai, and they gave different answers.  Since it was something I wanted
to know the answer to, and it was sufficiently complex, I was not in a position
to judge which was correct.  
One could petition for funding for experts, i.e. researchers and university
professors, although it is absurd to think they would have time to mediate all
the obscure information, sorting correct from incorrect; and, of course, a
person can be wrong too.
Then there is the issue of attribution ...  At the moment, perplexity.ai has a
word salad of dubious recent publications; GPT-4 reports that its "knowledge
cutoff for my training data is September 2021". It finds it difficult to reason
about time in any case, but these are details.
Others in this email thread have cast doubt on Musk's motivation (give him time
to catch up) and Microsoft's (it didn't care about any consequences of jumping
in now).
So there are issues of funding and control -- calling on the state to intervene
is appealing to the power next up the hierarchy, but can such regulations be
effective when administered by the state?  
That really just leaves us with grassroots education and everyday intervention.
Best on an important topic,  

Adam  
Adam Saltiel  





On Wed, Mar 29, 2023 at 9:39 PM Martin Hepp <mfhepp@gmail.com>  wrote:
I could not agree more with Dan - neither a "non-proliferation" agreement nor a
moratorium on AI advances is realistic; both are much more unrealistic than they
were with nukes. We hardly managed to keep the number of crazy people with
access to nukes under control, but to build your next generation of AI you will
not need anything but a brain, programming skills, and commodity resources.
Machines will not take over humankind, but machines can add giant levers to
single individuals or groups.

Best wishes
Martin

---------------------------------------
martin hepp
www: https://www.heppnetz.de/

On 29. Mar 2023, at 22:30, Dan Brickley <danbri@danbri.org> wrote:



On Wed, 29 Mar 2023 at 20:51, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
This letter speaks for itself.

https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/ 
 

I may not want to put it as bluntly as Elon Musk, who cautioned against
unregulated AI, which he called "more dangerous than nukes", but when Nick
Bostrom, the late Stephen Hawking, and dozens, no, hundreds of international
experts, scientists and industry leaders start ringing the bell, it is time to
pause and reflect.
Every aspect of daily life, every industry, education systems, academia and even
our cognitive rights will be impacted.  
I would also like to point out that some science fiction authors have done a
great job of very accurately predicting a dystopian future ruled by technology,
perhaps the greatest of them all being Philip K. Dick.
But there are dozens of other authors as well, and they all give a fairly good
impression of what awaits us if we do not regulate and control the further
development of AI now.
I have a *lot* of worries, but the genie is out of the bottle.  
It's 60 lines of code for the basics: https://jaykmody.com/blog/gpt-from-scratch/
Facebook's Llama model is out there and being run on consumer laptops. And
that's not even thinking about state-level actors, or how such regulation might
be worded.
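To see how little there is to contain: the core of such a from-scratch GPT is a
handful of NumPy operations. Here is a sketch of single-head causal
self-attention, the central operation that kind of implementation builds on
(the weight shapes are illustrative, not taken from the post):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def causal_self_attention(x, w_qkv, w_proj):
        # x: [seq_len, d_model], w_qkv: [d_model, 3*d_model], w_proj: [d_model, d_model]
        q, k, v = np.split(x @ w_qkv, 3, axis=-1)          # project into queries, keys, values
        scores = q @ k.T / np.sqrt(q.shape[-1])            # scaled dot-product similarities
        mask = np.triu(np.full(scores.shape, -1e10), k=1)  # block attention to future tokens
        return softmax(scores + mask) @ v @ w_proj         # weighted sum of values, then project

Everything else in a GPT is stacking this with feed-forward layers and
normalization, which is why a ban is so hard to scope.
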
For my part (and this is a v personal opinion) I think we should focus on
education, sensible implementation guidelines, and trying to make sure the good
outweighs the bad.
Dan  



Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all
stakeholders worldwide through collaborative research on applied mathematics,
advanced modeling, software and standards development

Received on Sunday, 2 April 2023 13:49:13 UTC