Re: Knowledge representation for AI using domains or a universe of discourse

[pruning back to AIKR]


> On 28 Jun 2025, at 00:14, Milton Ponson <rwiciamsd@gmail.com> wrote:
> 
> OK, here is a more precise description of what I am looking for in knowledge representation. 
> 
> First let's look at the Wikipedia page:
> https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning

That page fails to adequately explain the relationship between information and knowledge.

We could perhaps define information as structured data, and knowledge as models of information that describe regularities and enable reasoning about that information. Gemini describes reasoning as the human cognitive process of analysing, synthesising, interpreting, and applying that transforms information into actionable knowledge.
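As a toy sketch of that distinction (the example and all names are mine, not part of the definitions above): raw data is structured into information, a model of the regularity in that information plays the role of knowledge, and applying the model is a minimal form of reasoning.

```python
# Toy illustration: data -> information (structured) -> knowledge
# (a model of regularities) -> reasoning (applying the model).
# All names here are illustrative assumptions, not an established scheme.

raw_data = "2,4 3,6 5,10"  # unstructured data

# Information: the same data, structured as (x, y) pairs.
information = [tuple(map(float, pair.split(",")))
               for pair in raw_data.split()]

# Knowledge: a model capturing the regularity y = slope * x,
# estimated from the information.
slope = sum(y / x for x, y in information) / len(information)

# Reasoning: applying the model to a new case.
prediction = slope * 7.0  # -> 14.0
```

The point is only that the model, not the data, is what supports inference about unseen cases.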

The success of Generative AI has shown that human knowledge is amenable to processing at huge scale and with considerable subtlety. However, LLMs use opaque representations, just as is the case for our brains. We can thus distinguish transparency in explaining reasoning from the mechanisms that enable reasoning at scale.

> My dealing with knowledge in the vast domain of sustainable development which interacts with all academic fields, has convinced me that knowledge exists in context.

Indeed, and this is where neural networks shine: they manage the complex dependencies involved in rich statistical models of knowledge and context. Explicit symbolic knowledge is a crude approximation, but nonetheless invaluable as a basis for explanations. Some models, e.g. quantum electrodynamics, provide incredibly accurate predictions, but such theories are the exception rather than the rule, even in physics.

Explicit knowledge is key to explanations, but not to reasoning.

Dave Raggett <dsr@w3.org>

Received on Saturday, 28 June 2025 17:16:18 UTC