Re: unprecedented, hanging on to knowledge models before AI takes them down

Unsurprisingly, ChatGPT agrees with my suggestion:

From my perspective, the most hopeful and potentially productive approach might be to expect, if not require, the developers of LLMs to publish their performance plans and reports in an open, standard, machine-readable format like StratML. Doing so would not only enable semi-automatic creation of a public registry of such applications but also enable stakeholders -- including research labs, civil society groups, and industry watchdogs -- to ensure responsible behavior.

The four points it makes in support of that suggestion are available here, along with four ideas for next steps.
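For concreteness, a plan published that way might look something like the fragment below. This is a hypothetical, hand-written sketch in the general shape of a StratML performance plan; the element names, namespace, and values are illustrative only and have not been validated against the actual StratML schema.

```xml
<!-- Illustrative sketch of a machine-readable LLM performance plan.
     Namespace and structure are placeholders, not the official StratML schema. -->
<PerformancePlanOrReport xmlns="urn:example:stratml-sketch">
  <Name>Example LLM Responsible-AI Performance Plan</Name>
  <Description>Public commitments and measurable objectives for an LLM product.</Description>
  <Stakeholder>
    <Name>Civil society groups, research labs, industry watchdogs</Name>
  </Stakeholder>
  <Goal>
    <Name>Bias Mitigation</Name>
    <Objective>
      <Name>Publish quarterly bias-evaluation results</Name>
      <PerformanceIndicator>
        <MeasurementDimension>Evaluation reports published per year</MeasurementDimension>
        <TargetResult>4</TargetResult>
      </PerformanceIndicator>
    </Objective>
  </Goal>
</PerformancePlanOrReport>
```

Because each plan would share one open vocabulary, a registry could harvest and compare such documents semi-automatically rather than scraping prose policy pages.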
Owen Ambur
https://www.linkedin.com/in/owenambur/
 

    On Saturday, May 31, 2025 at 01:35:33 PM EDT, Kevin Spellman <kevinfrsa@icloud.com> wrote:   

Universal AI and LLM design as a regulated government responsibility would bring accountability, uniformity, standards, and ethics. Social media and the algorithms that violate our digital rights only come to light when we stumble onto them. LLMs are based on our data, and we did not clearly agree to this (or at least I didn't). There is opacity about how they work, how and what they are connected to, and, more so, the steps in place to mitigate bias, as an example. In a field that is growing in complexity and revenue, there are fewer safeguards and people to support and enforce a standard for public and private AI handling our data.
Please pardon the brevity
Sent from my iPhone
Dr. Kevin J Spellman, FRSA, CMRS

On 31 May 2025, at 16:17, Owen Ambur <owen.ambur@verizon.net> wrote:



Paola, while it might be taken as self-serving flattery or, at least, knowing your customer, ChatGPT's conclusion about the second of your two references makes sense to me:

Bottom Line

Steven J. Vaughan-Nichols is voicing a legitimate warning: if we train AIs on trash, they will produce trash. But the current reality is not that AI is collapsing—it’s that the ecosystem around it is fragile and poorly governed. The way forward isn't to abandon AI but to become more intentional and structured in how we curate knowledge, govern inputs, and manage usage.

That’s where standards like StratML, structured data, and truly responsible AI design can help avert the kind of collapse the article warns about.

The details of its argument are available here.
Owen Ambur
https://www.linkedin.com/in/owenambur/
 

    On Saturday, May 31, 2025 at 12:10:11 AM EDT, Paola Di Maio <paola.dimaio@gmail.com> wrote:   

Good day
I hope everyone gets a chance to smell the flowers at least once a day.
As predicted, we are rapidly rolling into a new age of AI-driven everything, and knowledge is all we've got to understand what is happening and how.
The changes are already impacting our individual and collective lives, behaviours, etc., and we won't even know (scratching head).
The best that we can do is hang onto our instruments of discernment, KR being one of them
Two articles below bring up important points
Gemini may summarize your emails even if you don't opt in to the feature:
https://techcrunch.com/2025/05/30/gemini-will-now-automatically-summarize-your-long-emails-unless-you-opt-out/
Honestly, I do not know if this is true. It may even be illegal, and if it depends on geographic location, it could end up being very confusing for those who travel around a lot. How will it work if one day a person reads an email from one country and another day from another? If someone is enough of a Google insider, it should be investigated, imho.
AI Model Collapse:
https://www.theregister.com/2025/05/27/opinion_column_ai_model_collapse/
When the AI models collapse, all we are going to have left is the robust knowledge structure in our brains/minds and in our libraries.

Brace, brace


Received on Sunday, 1 June 2025 00:33:18 UTC