Re: Open letter urging a pause on AI

Milton, Rumsfeld's conceptualization of unknown unknowns had also come to my mind in this context.  
The fact that you raised it prompted me to discover the Johari Window, the elements of which are now available in StratML format at https://stratml.us/drybridge/index.htm#JHRW
They coincide nicely with my conceptual plan for the AboutThem.info domain.
The values (potential Johari Window adjectives) documented thus far in the StratML collection are available in order of frequency at https://www.linkedin.com/pulse/values-goals-partnerships-owen-ambur/
While various listings of AI/ML applications and depictions of their logos may be interesting, at least for a sense of how many there are, it would be more useful to have an index of their performance plans and reports, along the lines of my StratML-enabled query service at https://search.aboutthem.info/
Owen Ambur
https://www.linkedin.com/in/owenambur/
 

On Monday, April 3, 2023 at 02:28:14 PM EDT, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
 
I never expected my post to elicit so many responses. But for direction's sake and for keeping on topic, let's take a look at the takeaways.
- LLMs aren't going away anytime soon, especially given the insane amounts of money being invested in them, which have started an AI arms race of sorts;
- the software industry and software giants like Microsoft and OpenAI couldn't care less about regulation and factor in enormous fines as the cost of doing business when making huge profits;
- the Future of Life Open Letter to Pause Giant AI Experiments, looked at more closely, has some flaws;
- Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell were not found on the list of signatories to the letter and published a rebuke, calling out the letter's failure to engage with existing problems caused by the tech, in particular the ideology of longtermism that drives much of current AI development. Source: https://techcrunch.com/2023/03/31/ethicists-fire-back-at-ai-pause-letter-they-say-ignores-the-actual-harms/
- all efforts to develop AI that meets ethical guidelines and standards and is open, explainable, controllable, trustworthy and safe, whether from UNESCO/the UN, standards bodies, professional regulatory bodies or professional associations, are by default lagging behind;
- the potential users of the seemingly endless list of LLM applications across a wide range of industries are not fully aware of the capabilities and the potential unintended, or undisclosed, secondary impacts;
- there are plenty of unknown unknowns, like the emergent abilities now being studied to comprehend and get a grasp on what LLMs are capable of, and consequently the unintended, potentially harmful capabilities that in certain cases need to be contained, avoided or disabled;
- the huge carbon and energy footprints of the infrastructure needed for the global, wide-scale deployment of such technologies.

All in all, in my humble opinion, the best approach to this issue is to have professionals inside the industry who are knowledgeable about the problems, dangers and unknowns, together with professionals from software engineering, computer science, mathematics and every field and industry currently using AI as a key technology (e.g. the pharmaceutical industry, biotechnology, the bio and life sciences, medical technology, the natural sciences and geosciences, chemistry and materials science), take a good look at all the current proposals for guidelines for ethical, explainable, open, safe and trustworthy AI as they relate to their industries, fields of activity or scientific endeavors, and condense these into an actionable list for politicians at the national, regional and global levels to work on.

And in the meantime have educators, academics, scientists, engineers and professionals from industries using AI technologies stay on topic and contribute to an open dialogue about the issues at hand.
The issue is not to stop the development and deployment of AI but to pursue them responsibly, based on an agreed-upon set of guiding principles.


Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

On Saturday, April 1, 2023 at 10:15:41 PM AST, Melvin Carvalho <melvincarvalho@gmail.com> wrote:
 
 

On Sat, 1 Apr 2023 at 23:29, Adeel <aahmad1811@gmail.com> wrote:

It is a constructive discussion on general ethics, which is a core competency in every profession. Ethics should not be left to legal speculation, whether regarding the use of transformer models or a code of conduct, nor selectively defined for others; practice what you preach, which is the real issue with ethics these days.

Perhaps calm down a bit and stay on topic.
As an interesting exercise I ran this thread through ChatGPT, asking: "rate this statement as whether or not it is passive aggressive on a scale of 1 to 10 and why" -- give it a try if you want to see some interesting results.
What it came up with for passive aggression:
Hugh - 3
Dan - 5
Pat - 4
Adeel - 5,7
Nathan (control) - 2
Perhaps if run again it would give different numbers. I expect AI is going to be integrated into all sorts of tools, so that we'll get flags and pointers in email before sending. Which will change the nature of mailing lists forever!
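
For anyone who wants to try the exercise themselves, here is a minimal sketch assuming the OpenAI Python client (openai>=1.0); the model name and the helper function are illustrative choices, not something from this thread:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rate_passive_aggression(message: str) -> str:
        # ask the model to rate one message, using the prompt quoted above
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative; any chat model works
            messages=[{
                "role": "user",
                "content": "Rate this statement as whether or not it is "
                           "passive aggressive on a scale of 1 to 10 and why:\n\n"
                           + message,
            }],
        )
        return response.choices[0].message.content

As noted above, the ratings are not deterministic, so repeated runs may well disagree.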

As a side note I chat to new developers about semantic web all the time.  One recently said to me: "I hope stuff like this can popularize linked data without the smugness".

This is actually one of the more pleasant, jolly threads. But we probably can and will do better as a list, once it becomes commonplace for AI tools to help us.
It is a technology that has huge implications for the semantic web, because the semantic web is about meaning. There are times when technology moves too fast and has societal implications, so imho the question is legit. Indeed, in the AI world this topic has been the big one in the last week, with dozens of different views. It's interesting. Emad has in any case said he thinks gpt v.next will take 6-9 months to train anyway.
The main limitation I have found is that it can do small tasks reasonably well, but when you hit the 32k token limit, it struggles just as much as we all do with larger topics or code bases. Perhaps that will change soon with plugins, though; we'll see!
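
A common workaround for that limit (a rough sketch, not anything model-specific) is to split a large input into overlapping chunks that each fit the context window and process them piecewise; the sizes below are illustrative, not tuned:

    def chunk_text(text: str, chunk_chars: int = 12_000, overlap: int = 500):
        # split text into overlapping character chunks; a real tool would
        # count tokens rather than characters before sending to the model
        chunks = []
        start = 0
        while start < len(text):
            end = min(start + chunk_chars, len(text))
            chunks.append(text[start:end])
            if end == len(text):
                break
            # overlap so a sentence split at a boundary still appears
            # whole in at least one chunk
            start = end - overlap
        return chunks

Each chunk can then be summarized separately and the summaries combined, though that of course loses the global view that makes large topics hard in the first place.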

Received on Tuesday, 4 April 2023 00:37:46 UTC