Re: Open letter urging to pause AI

On Mon, 3 Apr 2023 at 20:27, ProjectParadigm-ICT-Program <
metadataportals@yahoo.com> wrote:

> I never expected my post to elicit such a volume of responses. But for
> direction's sake and to keep on topic, let's take a look at the
> takeaways.
>
> - LLMs aren't going away anytime soon, especially given the enormous
> amounts of money being invested in them, which has started an AI arms
> race of sorts;
> - the software industry and software giants like Microsoft and OpenAI
> couldn't care less about regulation and factor in enormous fines as the
> cost of doing business when making huge profits;
> - the Future of Life Institute's open letter to pause giant AI
> experiments, looked at more closely, has some flaws;
> - Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret
> Mitchell are not among the signatories to the letter and published a
> rebuke, calling out the letter's failure to engage with existing harms
> caused by the technology and its roots in the ideology of longtermism
> that drives much of current AI development.
> Source:
> https://techcrunch.com/2023/03/31/ethicists-fire-back-at-ai-pause-letter-they-say-ignores-the-actual-harms/
> - all efforts to develop AI that meets ethical guidelines and standards
> and is open, explainable, controllable, trustworthy and safe, whether
> from UNESCO/UN, standards bodies, professional regulatory bodies or
> professional associations, are by default lagging behind;
> - the potential users of a seemingly endless list of LLM applications
> across a broad range of industries are not fully aware of the models'
> capabilities or of their unintended or undisclosed secondary impacts;
> - there are plenty of unknown unknowns, such as the emergent abilities
> now being studied in order to get a grasp on what LLMs are capable of,
> and consequently on the unintended, potentially harmful capabilities
> that in certain cases need to be contained, avoided or disabled;
> - the infrastructure for global, wide-scale deployment of such
> technologies carries huge carbon and energy footprints.
>
> All in all, in my humble opinion, the best approach to this issue is to
> have professionals inside the industry who know the problems, dangers
> and unknowns, together with professionals from software engineering,
> computer science, mathematics and every field and industry currently
> using AI as a key technology (e.g. the pharmaceutical industry,
> biotechnology, bio and life sciences, medical technology, natural and
> geo sciences, chemistry and materials science), take a good look at all
> the current proposals for guidelines for ethical, explainable, open,
> safe and trustworthy AI as they relate to their industries, fields of
> activity or scientific endeavors, and condense these into an actionable
> list for politicians to work on at national, regional and global
> levels.
>
> And in the meantime have educators, academics, scientists, engineers and
> professionals from industries using AI technologies stay on topic and
> contribute to an open dialogue about the issues at hand.
>
> The issue is not to stop the development and deployment of AI, but to
> pursue it responsibly, based on an agreed consensus on guiding
> principles.
>

I've lately been looking at this book:

https://en.wikipedia.org/wiki/The_Technological_Society

It argues that technology becomes more and more efficient at achieving
its ends.

For example, the web, once it discovered the profit motive of selling
adverts, became more efficient at it every year.  Ellul argues that this
is not the doing of any one company, but of the technology itself.

AI could be a catalyst for this.  It might not be stoppable, but it could
be slowed down.  There is also a counterbalance: AI could work for
humanity through open platforms, not owned by a single company.

IMHO we do need that counterbalance.  We tried to create something along
these lines in the Solid project, which is the semantic web with a social
dimension.  I think we need more things like this, to show the more human
side of AI and to balance the commercial side.


>
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
>
>
> On Saturday, April 1, 2023 at 10:15:41 PM AST, Melvin Carvalho <
> melvincarvalho@gmail.com> wrote:
>
>
>
>
> On Sat, 1 Apr 2023 at 23:29, Adeel <aahmad1811@gmail.com> wrote:
>
> It is a constructive discussion on general ethics - ethics is a core
> competency in every profession. It should not be left to legal
> speculation, whether the subject is the use of transformer models or a
> code of conduct, nor selectively defined for others. Practice what you
> preach - that is the real issue with ethics these days.
>
>
> Perhaps calm down a bit and stay on topic.
>
> As an interesting exercise I ran this thread through ChatGPT asking:
> "rate this statement as whether or not it is passive aggressive on a
> scale of 1 to 10 and why" -- give it a try if you want to see some
> interesting results.
>
> What it came up with for passive aggression:
>
> Hugh - 3
> Dan - 5
> Pat - 4
> Adeel - 5,7
> Nathan (control) - 2
>
> Perhaps if run again it would give different numbers.  I expect AI is
> going to be integrated into all sorts of tools, so that we'll get flags
> and pointers in email before sending, which will change the nature of
> mailing lists forever!
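>
> If anyone wants to script the exercise rather than paste into the web
> UI, here is a minimal sketch in Python.  It assumes the 2023-era openai
> package (the 0.x ChatCompletion API); the model name is just an
> illustrative choice, not anything canonical:
>
>     import openai  # pip install openai (0.x-era API assumed)
>
>     openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key
>
>     def rate_passive_aggression(text):
>         # Ask the model to rate a message on a 1-10 scale and explain why.
>         response = openai.ChatCompletion.create(
>             model="gpt-3.5-turbo",
>             messages=[{
>                 "role": "user",
>                 "content": "Rate this statement as whether or not it is "
>                            "passive aggressive on a scale of 1 to 10 "
>                            "and why:\n\n" + text,
>             }],
>         )
>         return response.choices[0].message.content
>
> As noted above, a second run may well give different numbers, so treat
> the scores as conversation starters rather than measurements.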
>
> As a side note, I chat to new developers about the semantic web all the
> time.  One recently said to me: "I hope stuff like this can popularize
> linked data without the smugness".
>
> This is actually one of the more pleasant, jolly threads.  But we
> probably can and will do better as a list, once it becomes commonplace
> for AI tools to help us.
>
> It is a technology that has huge implications for the semantic web,
> because the semantic web is about meaning.  There are times when
> technology moves too fast and has societal implications, so IMHO the
> question is legitimate.  Indeed, in the AI world this topic has been the
> big one in the last week, with dozens of different views.  It's
> interesting.  Emad has in any case said he thinks GPT v.next will take
> 6-9 months to train anyway.
>
> The main limitation I have found is that it can do small tasks
> reasonably well, but when you hit the 32k token limit, it struggles just
> as much as we all do with larger topics or code bases.  Perhaps plugins
> will allow chaining soon though, we'll see!
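>
> One pragmatic workaround, until plugins or larger contexts land, is to
> chunk the input and work piece by piece.  A naive sketch, using
> character count as a rough stand-in for tokens (the budget below is an
> arbitrary assumption, not a real model limit):
>
>     def chunk_text(text, max_chars=8000):
>         # Split on paragraph boundaries so each piece fits a model's
>         # context window; an oversized paragraph becomes its own chunk.
>         chunks, current = [], ""
>         for para in text.split("\n\n"):
>             if current and len(current) + len(para) + 2 > max_chars:
>                 chunks.append(current)
>                 current = ""
>             current = current + "\n\n" + para if current else para
>         if current:
>             chunks.append(current)
>         return chunks
>
> Each chunk can then be summarized separately and the summaries merged,
> though cross-chunk context is lost, which is exactly where the struggle
> with larger topics and code bases comes from.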
>
>
>
> On Sat, 1 Apr 2023 at 22:12, Patrick J. Hayes <phayes@ihmc.org> wrote:
>
> Adeel, greetings.
>
> The code of conduct applies to us all, including of course Dan B. But the
> point here is that Dan has not violated that code, whereas you have, first
> by directly insulting Dan by implying that his posts are dishonest or
> motivated by dark or sinister commercial forces, and then by continuing
> this thread of thinly veiled hostility when there is no reason for it. I
> am not an administrator of this group, but if I were, you would now be
> banned from it.
>
> Please find some more constructive way to contribute to the discussion.
>
> Pat Hayes
>
> On Apr 1, 2023, at 1:22 PM, Adeel <aahmad1811@gmail.com> wrote:
>
>
> Hello,
>
> That sounds like legal speculation - or do you only selectively
> discriminate against group members and their backgrounds when you point
> that out? Does the W3C code of conduct not apply to you after 25+ years
> of being here?
>
> Thanks,
>
> Adeel
>
>
> On Thu, 30 Mar 2023 at 23:33, Dan Brickley <danbri@danbri.org> wrote:
>
> On Thu, 30 Mar 2023 at 20:59, Adeel <aahmad1811@gmail.com> wrote:
>
> Hello,
>
> You can't talk about regulation and compliance in this group; Dan
> doesn't like it, as Google doesn't care about those things.
>
>
> Rather, Google doesn’t delegate to me any permission to speak on its
> behalf on these matters, unsurprisingly enough. Google is also an
> organization of many thousands of employees, with quite a range of
> views. I choose not to share several of mine here right now, but I am
> broadly in favour of sensible laws where they have some chance of
> working.
>
> As in many situations, it sometimes makes sense to treat a company as a
> kind of pretend person, and sometimes to look at it as a more complex
> system of forces and interests. Skim
>
> https://www.wired.com/story/inside-google-three-years-misery-happiest-company-tech/
> to see some of what bubbles beneath the surface.
>
> This email list is just an email list. W3C no longer accords it
> “interest group” status, as it did from 1999 when I migrated the older
> RDF-DEV list to W3C to form the RDF Interest Group on the
> www-rdf-interest@ list. W3C doesn’t officially listen here even on RDF
> topics, let alone areas like modern AI and its social impact, which
> aren’t obviously our core competency.
>
> Having been here 25+ years, I have some instincts about which topics
> will just fill up email inboxes with no ultimate impact on, or benefit
> to, the world. Right now, “something must be banned” threads on AI look
> to me to fall into that category.
>
> Cheers,
>
> Dan
>
>
>
>
> Thanks,
>
> Adeel
>
> On Thu, 30 Mar 2023 at 20:22, adasal <adam.saltiel@gmail.com> wrote:
>
> It's out of the bottle and will be played with.
>
> " .. being run on consumer laptops. And that’s not even thinking about
> state level actors .. "
> Large resources will be thrown at this.
>
> It was a long time ago that Henry Story (among many others, of course,
> but more in this context) pointed out that, where truth is concerned,
> competing logical deductions cannot decide between themselves.
>
> I just had this experience, and the details are not important.
>
>
> The point is that, in this case, I asked the same question to GPT-4 and
> perplexity.ai,
> and they gave different answers.
> Since it was something I wanted to know the answer to, and it was
> sufficiently complex, I was not in a position to judge which was correct.
>
> One option is petitioning for funding for experts, i.e. researchers and
> university professors, although it is absurd to think they would have
> time to mediate all the obscure information, sorting correct from
> incorrect; and, of course, a person can be wrong too.
>
> Then there is the issue of attribution ...
> At the moment, perplexity.ai offers a word salad of dubious recent
> publications; GPT-4 says its "knowledge cutoff for my training data is
> September 2021". It finds it difficult to reason about time in any case,
> but these are details.
>
> Others in this email thread have cast doubt on the motivations of Musk
> (wanting time to catch up) and Microsoft (not caring about the
> consequences of jumping in now).
>
> So there are issues of funding and control -- calling on the state to
> intervene is appealing to the power next up the hierarchy, but can such
> regulations be effective when administered by the state?
>
> That really just leaves us with grassroots education and everyday
> intervention.
>
> Best on an important topic,
>
>
> Adam
>
> Adam Saltiel
>
>
>
> On Wed, Mar 29, 2023 at 9:39 PM Martin Hepp <mfhepp@gmail.com> wrote:
>
> I could not agree more with Dan - a “non-proliferation” agreement or a
> moratorium on AI advancements is simply much more unrealistic than it
> was with nukes. We hardly managed to keep the number of crazy people
> with access to nukes under control, and for building your next
> generation of AI you will not need anything but a brain, programming
> skills, and commodity resources. Machines will not take over humankind,
> but machines can hand giant levers to single individuals or groups.
>
> Best wishes
> Martin
>
> ---------------------------------------
> martin hepp
> www:  https://www.heppnetz.de/
>
>
> On 29. Mar 2023, at 22:30, Dan Brickley <danbri@danbri.org> wrote:
>
>
>
> On Wed, 29 Mar 2023 at 20:51, ProjectParadigm-ICT-Program <
> metadataportals@yahoo.com> wrote:
>
> This letter speaks for itself.
>
>
> https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
>
>
> I may not want to put it as bluntly as Elon Musk, who cautioned against
> unregulated AI, which he called "more dangerous than nukes", but when
> Nick Bostrom, the late Stephen Hawking, and dozens, no, hundreds of
> international experts, scientists and industry leaders start ringing the
> bell, it is time to pause and reflect.
>
> Every aspect of daily life, every industry, education systems, academia
> and even our cognitive rights will be impacted.
>
> I would also like to point out that some science fiction authors have
> done a great job of very accurately predicting a dystopian future ruled
> by technology, perhaps the greatest of them all being Philip K. Dick.
>
> But there are dozens of other authors as well, and they all give a
> fairly good impression of what awaits us if we do not regulate and
> control the further development of AI now.
>
>
> I have a *lot* of worries, but the genie is out of the bottle.
>
> It’s 60 lines of code for the basics,
> https://jaykmody.com/blog/gpt-from-scratch/
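>
> The heart of those 60 lines is little more than matrix multiplies and a
> softmax.  As a rough illustration in NumPy, in the spirit of that post
> (the weights here are random stand-ins, not a trained model):
>
>     import numpy as np
>
>     def softmax(x):
>         e = np.exp(x - x.max(axis=-1, keepdims=True))
>         return e / e.sum(axis=-1, keepdims=True)
>
>     def causal_self_attention(x, w_q, w_k, w_v):
>         # x: [seq_len, d_model]; each position may attend only to itself
>         # and earlier positions (the mask hides the future).
>         q, k, v = x @ w_q, x @ w_k, x @ w_v
>         scores = q @ k.T / np.sqrt(k.shape[-1])
>         mask = np.triu(np.full(scores.shape, -1e10), k=1)
>         return softmax(scores + mask) @ v
>
>     # Toy usage with random weights:
>     rng = np.random.default_rng(0)
>     x = rng.normal(size=(4, 8))
>     w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
>     out = causal_self_attention(x, w_q, w_k, w_v)
>
> Stack that with feed-forward layers and an embedding table and you have
> the basics, hence the genie being out of the bottle.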
>
> Facebook’s Llama model is out there, and being run on consumer laptops.
> And that’s not even thinking about state level actors, or how such
> regulation might be worded.
>
> For my part (and this is a very personal opinion) I think the focus
> should be on education, sensible implementation guidelines, and trying
> to make sure the good outweighs the bad.
>
> Dan
>
>
>
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
>
>
>

Received on Tuesday, 4 April 2023 00:32:49 UTC