Re: unprecedented, hanging onto knowledge models before AI takes them down

Patrick

glad you see what I see

* singing the "Nail in the coffin" parody song
https://suno.com/s/tmIkDEuYEnOzKRIR

Hinton did say KR would help explainability in the sentence right after the
"nail in the coffin" remark, though

I have been trying to put forward the notion of Neurosymbolic Knowledge
Representation,
in itself a rather trivial postulate perhaps
https://figshare.com/articles/poster/A_New_Postulate_for_Knowledge_Representation_in_AI/9730268?file=17426480

academia seems to give more credibility to articles behind a paywall
https://www.taylorfrancis.com/chapters/edit/10.1201/9781003310785-9/towards-web-standard-neuro-symbolic-integration-knowledge-representation-using-model-cards-paola-di-maio

the gist should be openly accessible
https://figshare.com/articles/online_resource/Metamodel_Card_for_System_Level_Neurosymbolic_Integration/19567624

applied simplicity is the only way I know to reduce complexity

I have been chipping away at this for over 5 years.
Attention has been sparse....

I would like to talk about this, at anyone's convenience.....

*: refrain: nail in the coffin, nail in the coffin...  repeat ad libitum*

PDM

On Sun, Jun 1, 2025 at 9:24 AM Patrick Logan <patrickdlogan@gmail.com>
wrote:

> "nail in the coffin for symbolic knowledge representation"
>
> Of course that's already been demonstrated to be false. The industry is
> desperate for so-called "neuro-symbolic" solutions. Those won't be easy
> either. I'm not holding my breath.
>
> On Sat, May 31, 2025, 5:45 PM Paola Di Maio <paoladimaio10@gmail.com>
> wrote:
>
>> What to say, Owen and Kevin
>>
>> Yesterday a famous phrase from the Hinton Turing Award speech kept rolling
>> around in my head:
>> 'the nail in the coffin for symbolic knowledge representation', heralding
>> an age of non-logic-based machine learning
>>
>> I attach the URL and transcript for those who may want to listen to it
>> again; it was such a good lecture, btw.
>> Too bad it played down symbolic/logic AI.
>> Time to go back to those talks.
>> This is where the troubles started (if not earlier), and it kind of feels
>> like it was a long time ago, but 2018 is just yesterday really
>>
>> I am enjoying every bit of AI, and I am also startled by its limitations
>> (abandon logic and see what you get)
>>
>> Mind you, poor and deficient reasoning is not just a prerogative of AI;
>> humans excel at making errors, flawed conclusions and fallibility in general.
>> It is when AI becomes pervasive and starts interfering with our systems,
>> deleting our emails, rewriting our
>> browser history, that it is going to be scary, when innocent people use an
>> LLM to learn and write about a topic and do not realise
>> that what they hear is only part of the story, however well written up
>> and fast
>>
>> Again, this is also true of all knowledge sources; bias is not something
>> new, it has been part of the record throughout world history.
>> But AI is now part of the interface that filters reality, and that is why
>> it can become scary
>>
>> I have also seen bias and poor reasoning in the initiatives aimed at
>> mitigating AI risks
>>
>> As long as we are aware, I guess. To maintain that level of awareness in a
>> dynamic environment requires paying a lot of attention to what is going on,
>> and that can only be done by a well-tuned human brain
>>
>> https://www.annualreviews.org/content/journals/10.1146/annurev-neuro-062111-150525
>>
>> I also want to make a note of the transcript of the LLM output.
>> I made a mistake in my prompt that tried to retrieve the Turing Award
>> lecture mentioned above and wrote 2019, and the LLM hung on to the mistake
>> throughout its response instead of correcting it. I attach two transcripts
>> for reference only
>>
>>
>>
>>
>>
>> On Sun, Jun 1, 2025 at 1:35 AM Kevin Spellman <kevinfrsa@icloud.com>
>> wrote:
>>
>>> Universal AI and LLM design as a regulated government responsibility
>>> would bring accountability, uniformity, standards and ethics. Social media
>>> and the algorithms that violate our digital rights only come to light when
>>> we stumble onto them. LLMs are based on our data, and we did not clearly
>>> agree to this (or at least I didn't). There is opacity about how they work,
>>> how and what they are connected to, and, more so, about the steps in place
>>> to mitigate bias, as an example. In a field that is growing in complexity
>>> and revenue, there are fewer safeguards and people to support and enforce a
>>> standard for public and private AI handling our data.
>>>
>>> Please pardon the brevity
>>> Sent from my iPhone
>>>
>>> *Dr. Kevin J Spellman, FRSA, CMRS*
>>>
>>> On 31 May 2025, at 16:17, Owen Ambur <owen.ambur@verizon.net> wrote:
>>>
>>> 
>>> Paola, while it might be taken as self-serving flattery or, at least,
>>> knowing your customer, ChatGPT's conclusion about the second of your two
>>> references makes sense to me:
>>>
>>> Bottom Line
>>>
>>> Steven J. Vaughan-Nichols is voicing a legitimate warning: *if we train
>>> AIs on trash, they will produce trash.* But the current reality is not
>>> that AI is collapsing—it’s that the ecosystem around it is fragile and
>>> poorly governed. The way forward isn't to abandon AI but to become more *intentional
>>> and structured* in how we curate knowledge, govern inputs, and manage
>>> usage.
>>>
>>> That’s where standards like StratML, structured data, and truly
>>> responsible AI design can help avert the kind of collapse the article warns
>>> about.
>>>
>>> The details of its argument are available here
>>> <https://chatgpt.com/share/683b1bb1-14c0-800b-9d9a-381ce0935ec8>.
>>>
>>> Owen Ambur
>>> https://www.linkedin.com/in/owenambur/
>>>
>>>
>>> On Saturday, May 31, 2025 at 12:10:11 AM EDT, Paola Di Maio <
>>> paola.dimaio@gmail.com> wrote:
>>>
>>>
>>> Good day
>>>
>>> I hope everyone gets a chance to smell the flowers at least once a day
>>>
>>> As predicted, we are rapidly rolling into a new age of AI-driven
>>> everything, and knowledge is all we've got to understand what is happening
>>> and how
>>>
>>> The changes are already impacting our individual and collective lives,
>>> behaviours, etc.,
>>> and we won't even know (scratching head)
>>>
>>> The best that we can do is hang onto our instruments of discernment, KR
>>> being one of them
>>>
>>> Two articles below bring up important points
>>>
>>> *Gemini may summarize your emails even if you don't opt in to the
>>> feature*
>>>
>>> https://techcrunch.com/2025/05/30/gemini-will-now-automatically-summarize-your-long-emails-unless-you-opt-out/
>>>
>>> Honestly, I do not know if this is true. It may even be illegal, and if it
>>> depends on geographic location it could end up being very confusing
>>> for those who travel around a lot. How will it work if one day a person
>>> reads an email from one country and another day from another?
>>> If someone here is enough of a Google insider, this should be investigated, imho
>>>
>>> *AI Model Collapse*
>>> https://www.theregister.com/2025/05/27/opinion_column_ai_model_collapse/
>>> When the AI models collapse, all we are going to have left is
>>> the robust knowledge structure in our brains/minds and in our libraries
>>>
>>>
>>> *Brace, brace*
>>>
>>>
>>>

Received on Sunday, 1 June 2025 02:21:02 UTC