Re: [ontolog-forum] RDF finally has its long awaited Generic Client!

Hi Milton,

What do you think about representing our theoretical knowledge as axiomatic
theories?

Alex


Wed, 1 Oct 2025 at 18:10, Milton Ponson <rwiciamsd@gmail.com>:

> As a mathematician I cannot suppress a chuckle here. The problem is the
> implicit discussion about knowledge, knowledge representation, and formal
> knowledge representation.
> These are three distinct layers, and because we still do not have a firm
> grip on the first, which is inextricably linked to consciousness, knowledge
> representation remains a difficult task to accomplish; consequently, formal
> knowledge representation, which is what we are seeking, will remain elusive.
> Large language models ignore the first layer and assume we can use
> token-based systems to create knowledge representation emulation systems
> that can capture all formal knowledge representation systems.
> If one looks at the groundbreaking paper MIP*=RE,
> https://arxiv.org/abs/2001.04383, and what it implies about the Connes
> embedding conjecture being false, this should ring a bell.
> The reason is that we cannot in all cases assume that finite matrices in a
> very high-dimensional space can approximate a simulation of an
> infinite-dimensional space.
> This means that no matter how high we make the dimension, and consequently
> the number of parameters used, in some cases the simulations will never
> even come close to an accurate finite model of the infinite-dimensional
> space.
> This means generative LLMs are a mathematical dead end, and this will be
> the reason why the AI bubble riding on generative LLMs will burst.
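>
> For reference, a rough statement of the (now refuted) conjecture in its
> matricial microstates form, as I recall it (operator-norm bounds omitted):
> for every tuple of self-adjoint elements $x_1,\dots,x_n$ in a tracial von
> Neumann algebra $(M,\tau)$, every word length $k$ and every
> $\varepsilon > 0$, there should exist a dimension $d$ and self-adjoint
> $d \times d$ matrices $A_1,\dots,A_n$ with
>
>   $|\tau(x_{i_1}\cdots x_{i_m}) - \mathrm{tr}_d(A_{i_1}\cdots A_{i_m})| < \varepsilon$
>
> for all words of length $m \le k$. MIP*=RE implies that such
> finite-dimensional approximations do not always exist.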
>
> Milton Ponson
> Rainbow Warriors Core Foundation
> CIAMSD Institute-ICT4D Program
> +2977459312
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
>
> On Wed, Oct 1, 2025, 06:53 Alex Shkotin <alex.shkotin@gmail.com> wrote:
>
>> John,
>>
>> I agree. Formalization is absolutely crucial, as we're moving toward
>> mathematical methods of knowledge processing, where the differences aren't
>> very large and mostly lie outside the realm of finite models and algorithms.
>>
>> But constructing the most accurate formalization is a rather delicate
>> matter. Here, the formal language used, while important, is only an
>> auxiliary tool. The knowledge being formalized must itself be a
>> well-structured theory, and that is quite challenging.
>>
>> Therefore, it is proposed to store theoretical knowledge, along with its
>> various formalizations, in frameworks specifically designed for knowledge
>> concentration [1]. Such theoretical repositories, with an emphasis on
>> formalization, have already emerged organically around Isabelle, Coq, and
>> other provers; a tiny illustration of a formalized fragment is sketched
>> below.
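>>
>> As a minimal sketch of such a fragment, here is a trivially small piece of
>> machine-checked theory in Lean 4; the theorem and its name are chosen
>> purely for illustration and are not taken from any particular repository:
>>
>>   -- Commutativity of addition on the natural numbers, as a checked theorem.
>>   theorem add_comm_example (a b : Nat) : a + b = b + a :=
>>     Nat.add_comm a b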
>>
>> Despite the enormous accumulation of theoretical knowledge in science and
>> technology, I believe its volume, in a systematic and refined form, would
>> amount to only a few terabytes.
>>
>> The key is to create concentrators of such verified and formalized
>> theoretical knowledge.
>>
>> Alex
>>
>>
>> [1] (PDF) Theory framework - knowledge hub message #1
>> <https://www.researchgate.net/publication/374265191_Theory_framework_-_knowledge_hub_message_1>
>> Russian version
>> <https://www.researchgate.net/publication/374233866_Karkas_teorii_-_koncentrator_znanij_soobsenie_No1>
>>
>> "Storing the theory of a particular subject area in one place and
>> maintaining it (including formalization) through collective efforts is
>> easily possible with the modern development of technology. The
>> concentration and verification of knowledge achieved in this case should
>> give a powerful ordering of theoretical knowledge, which will facilitate
>> their formalization, i.e. mathematical notation, and therefore algorithmic
>> processing in many cases, up to the semi-automatic proof of various kinds
>> of consequences, for example, theorems. This message describes what the
>> framework of the theory is, intended for unified storage and collective
>> accumulation of its results."
>>
>>
>> Tue, 30 Sep 2025 at 23:02, John F Sowa <sowa@bestweb.net>:
>>
>>> Alex,
>>>
>>> Wolfram and others make an important check to avoid those errors.
>>>
>>> Wolfram translates questions or commands in ordinary English to their
>>> precise formal notation. Then, before they execute the formal version,
>>> they translate it back to a precise statement in Controlled English.
>>>
>>> The CE text looks like English, and it can be read as English. But it
>>> has a precise, formally defined translation to and from Wolfram's formal
>>> notation.
>>>
>>> Many systems, including our Permion Inc. systems, do that. They either
>>> provide an exactly correct answer, or they carry on a dialog to help the
>>> human user specify a request that can be processed by exact formal
>>> methods.
>>>
>>> The final answer is an exactly correct reply to the formally defined
>>> Controlled English request.
>>>
>>> Errors are still possible, but they are the fault of the human user, who
>>> may not understand the CE reply. That can be corrected by giving users
>>> more options for asking further questions before committing to one
>>> particular answer.
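>>>
>>> A minimal sketch of that round-trip check in Python; the function names
>>> (translate_to_formal, formal_to_controlled_english, run_formal_query) are
>>> placeholders for whatever the Wolfram or Permion pipelines actually
>>> provide, not real APIs:
>>>
>>>   def confirmed_answer(question, translate_to_formal,
>>>                        formal_to_controlled_english, run_formal_query):
>>>       # Natural language -> formal notation.
>>>       formal = translate_to_formal(question)
>>>       # Formal notation -> Controlled English, shown back to the user.
>>>       ce_text = formal_to_controlled_english(formal)
>>>       print("Your request was understood as:", ce_text)
>>>       if input("Proceed? [y/n] ").strip().lower() != "y":
>>>           return None  # the user refines the question instead
>>>       # Only now is the exact formal processing carried out.
>>>       return run_formal_query(formal)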
>>>
>>> John
>>>
>>>
>>> ------------------------------
>>> *From*: "Alex Shkotin" <alex.shkotin@gmail.com>
>>>
>>> Hi Kingsley,
>>>
>>> A good article about using RDF and user interface functionality. But I
>>> believe that any information generated by an LLM should be marked "May
>>> contain errors."
>>>
>>> So all those beautiful tables, diagrams, and documents should display
>>> this sign prominently.
>>>
>>> For me, user interface functionality that reflects the power of RDF is
>>> more important.
>>>
>>> Best regards,
>>>
>>> Alex
>>>
>>> Mon, 29 Sep 2025 at 19:48, 'Kingsley Idehen' via ontolog-forum <
>>> ontolog-forum@googlegroups.com>:
>>>
>>> Hi Everyone,
>>>
>>> It’s been a while!
>>>
>>> Something important is happening right now, thanks to the emergence of
>>> LLMs as the long-awaited generic RDF client (the so-called “killer app”).
>>> We all know how Mosaic → Mozilla/Netscape made HTML and HTTP globally
>>> usable by end-users and developers alike. Well, the very same thing is
>>> finally happening with RDF—albeit some 20+ years later than expected.
>>>
>>> Here’s a post I recently published on LinkedIn about this critical
>>> development:
>>>
>>>
>>> https://www.linkedin.com/pulse/large-language-models-llms-powerful-generic-rdf-clients-idehen-xwhfe
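>>>
>>> As a minimal sketch of the "LLM as generic RDF client" pattern in Python
>>> (assuming the SPARQLWrapper package is installed; nl_to_sparql is a
>>> placeholder for whatever LLM call generates the query, and the DBpedia
>>> endpoint is used purely as an example):
>>>
>>>   from SPARQLWrapper import SPARQLWrapper, JSON
>>>
>>>   def nl_to_sparql(question: str) -> str:
>>>       # Placeholder: in practice an LLM turns the question into SPARQL.
>>>       return """
>>>           PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>>           SELECT ?label WHERE {
>>>             <http://dbpedia.org/resource/Resource_Description_Framework>
>>>                 rdfs:label ?label .
>>>             FILTER (lang(?label) = "en")
>>>           }
>>>       """
>>>
>>>   def ask(question: str, endpoint: str = "https://dbpedia.org/sparql"):
>>>       sparql = SPARQLWrapper(endpoint)
>>>       sparql.setQuery(nl_to_sparql(question))
>>>       sparql.setReturnFormat(JSON)
>>>       results = sparql.query().convert()
>>>       return [b["label"]["value"] for b in results["results"]["bindings"]]
>>>
>>>   print(ask("What is RDF?"))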
>>>
>>> --
>>> Regards,
>>>
>>> Kingsley Idehen 
>>> Founder & CEO
>>> OpenLink Software
>>> Home Page: http://www.openlinksw.com
>>> Community Support: https://community.openlinksw.com
>>>
>>> Social Media:
>>> LinkedIn: http://www.linkedin.com/in/kidehen
>>> Twitter : https://twitter.com/kidehen
>>>
>>> --
>>> All contributions to this forum are covered by an open-source license.
>>> For information about the wiki, the license, and how to subscribe or
>>> unsubscribe to the forum, see http://ontologforum.org/info
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "ontolog-forum" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to ontolog-forum+unsubscribe@googlegroups.com.
>>> To view this discussion visit
>>> https://groups.google.com/d/msgid/ontolog-forum/c6fff330733e409ab403d68c30a52e46%409d7e08195564407192034ca99241e3fa
>>>
>>

Received on Wednesday, 1 October 2025 17:01:26 UTC