Re: limitations of classification systems, fiction, lack of ontological commitment

Dear Paola, in my need to contribute, albeit in a small way, I did a little AI search on ‘Is there anything that sits above knowledge representation for AI?’
Here is the output:

In AI, knowledge representation is a crucial aspect, but there are higher-level concepts that encompass it. Some of these include:

- *Artificial General Intelligence (AGI)*: The ultimate goal of AI research, aiming to create intelligent systems that can perform any intellectual task.
- *Cognitive Architectures*: Frameworks that model human cognition, integrating knowledge representation, reasoning, and learning.
- *Symbolic Reasoning*: Using logical representations to reason and make decisions.
- *Meta-Reasoning*: The ability of AI systems to reason about their own reasoning processes.

These areas explore how AI systems can effectively utilize and manage knowledge.
I continued by asking: ‘then generate hierarchical levels’.

Here's a hierarchical representation of AI concepts related to knowledge representation:

1. *Artificial General Intelligence (AGI)*
2. *Cognitive Architectures*
3. *Meta-Reasoning*
4. *Symbolic Reasoning*
5. *Knowledge Representation*
6. *Machine Learning*
7. *Data Processing*

Read from the bottom of the list upward, this hierarchy reflects the progression from raw data to more abstract, intelligent processing and reasoning capabilities.
Then I requested references:

For the hierarchical levels of AI concepts, here's a general outline based on common AI research areas:

1. *Data Processing*: Basic data handling and computation.
2. *Machine Learning*: Systems learn from data.
3. *Knowledge Representation*: Structuring knowledge for AI use.
4. *Symbolic Reasoning*: Using logic for decision-making.
5. *Meta-Reasoning*: Reasoning about reasoning processes.
6. *Cognitive Architectures*: Frameworks modeling cognition.
7. *Artificial General Intelligence (AGI)*: Human-like general intelligence.

Sources:
- Russell & Norvig, "Artificial Intelligence: A Modern Approach"
- Laird, "The Soar Cognitive Architecture"
- Goertzel, "Artificial General Intelligence"
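
To check whether I am following, I tried writing the levels down myself as a small Python sketch. This is purely my own toy illustration, not part of the AI output or of any source it cited; the level names are simply the ones listed above, and the "sits above" reading is only my assumption.

# Toy sketch: each level points to the broader level said to sit above it.
sits_below = {
    "Data Processing": "Machine Learning",
    "Machine Learning": "Knowledge Representation",
    "Knowledge Representation": "Symbolic Reasoning",
    "Symbolic Reasoning": "Meta-Reasoning",
    "Meta-Reasoning": "Cognitive Architectures",
    "Cognitive Architectures": "Artificial General Intelligence (AGI)",
}

def levels_above(level):
    """Walk upward from a level to the most general one."""
    path = [level]
    while path[-1] in sits_below:
        path.append(sits_below[path[-1]])
    return path

print(" -> ".join(levels_above("Knowledge Representation")))
# Knowledge Representation -> Symbolic Reasoning -> Meta-Reasoning
# -> Cognitive Architectures -> Artificial General Intelligence (AGI)

At least in this toy form, "what sits above knowledge representation" just becomes a chain of broader levels; whether that chain is the right picture is exactly what I am unsure about.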

Finally, I asked about the ‘concept and conceptual layer’.

Concepts and conceptual layers are crucial in AI knowledge representation. Here's a possible hierarchy:

1. *Percepts*: Raw sensory data
2. *Concepts*: Abstract representations of objects, events, or ideas
3. *Categories*: Groupings of related concepts
4. *Relationships*: Connections between concepts (e.g., causal, spatial, temporal)
5. *Theories*: Complex models explaining phenomena
6. *Frameworks*: High-level structures organizing theories and concepts

Conceptual layers can include:
- *Subsymbolic*: Distributed representations (e.g., neural networks)
- *Symbolic*: Explicit, interpretable representations (e.g., logic, ontologies)
- *Conceptual Spaces*: Geometric representations of concepts (e.g., vector spaces)

These layers help organize knowledge and facilitate reasoning, learning, and abstraction in AI systems.
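
The ‘conceptual spaces’ layer was the part I found easiest to make concrete for myself. Below is another tiny sketch of my own (the dimensions, prototype points, and category names are all invented, only to illustrate the geometric idea of concepts as points and categories as regions around prototypes):

import math

# Invented 2-D conceptual space; the dimensions could be read as "size" and "domesticity".
prototypes = {
    "pet":      (0.2, 0.9),
    "wildlife": (0.6, 0.1),
}

def categorise(point):
    """Assign a point to the category whose prototype is nearest (Euclidean distance)."""
    return min(prototypes, key=lambda name: math.dist(point, prototypes[name]))

print(categorise((0.15, 0.85)))  # a made-up "cat" point  -> pet
print(categorise((0.55, 0.20)))  # a made-up "wolf" point -> wildlife

If I have understood it, this is one simple way the ‘geometric representations of concepts’ idea can be made concrete, sitting alongside the subsymbolic and symbolic layers listed above.
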
I am not sure where this information sits in the scheme of things. 
Kindly let me know if I am learning or wallowing in confusion.
Charles 
> On 30 Nov 2025, at 04:30, Paola Di Maio <paoladimaio10@gmail.com> wrote:
> 
> Some participants come to this list to learn about KR, and thus about the world.
> Others may come to impose their views of the world.
> I only share some thoughts in the hope of inspiring newcomers to the discussions to be skeptical of reductionist views, especially when they are fictional.
> 
> The metaphor of the ‘finger pointing at the moon’ may be useful to explain how maths relates to the real world
> *moon = object in the real world, finger = pointer to an object
> 
> Lack of ontological commitment in mathematics does not reduce its usefulness. It allows mathematics to serve as a symbolic, structural, or fictional framework that organizes knowledge, supports reasoning, and aids scientific modeling, without asserting that numbers, sets, or functions exist as real entities.
> 
> Just some side notes for the record  *no problem if some participants have different views!
> 
> 1. The limitations of classification systems are well understood in science!
> All classification systems have representational limitations—structural, cultural, and epistemic constraints that prevent them from perfectly capturing the complexity of real-world subjects, and are sometimes misaligned
> Subject classification systems simplify and distort the vast complexity of knowledge. Their limitations stem from:
> 
> Structural constraints (hierarchies, reductionism)
> Cultural and historical biases
> Linguistic and epistemic factors
> 
> The ever-changing nature of knowledge
> 
> 
> 2. Ontology captures and represents 'what exists' *
> Ontic categories describe what exists
> 
> 
> 3. MORE ON Lack of Ontological Commitment of Mathematics
> Fictionalism: Mathematics is akin to a story in which numbers, sets, and functions are characters or constructs in a narrative.
> 
> Statements like “2+2=4” are “true” within the story, but there is no metaphysical commitment to numbers actually existing.
> 
> Hartry Field’s Science Without Numbers demonstrates how physics can be formulated nominalistically, showing mathematics is dispensable to physical ontology.
> 
> Nominalism: Mathematics is a linguistic or conceptual system, describing patterns, relations, or structures without positing entities.
> 
> Mathematical objects are seen as placeholders or names, not actual beings.
> 
> Formalism:
> 
> Mathematics consists of symbol manipulation according to rules.
> 
> Truth is internal to the formal system, not dependent on entities existing in reality.
> 
> There is no ontological claim beyond the consistency of the formal structure.
> 
> ________________________________
> 
> Implications of Lack of Ontological Commitment
> 
> Philosophical: Avoids metaphysical debates over the existence of abstract objects.
> 
> Scientific: Shows that mathematics can be used as a tool for modeling, explanation, and prediction without assuming mathematical objects exist.
> 
> Epistemic: Shifts focus from discovering “real” entities to understanding structures, patterns, and relations.
> 
> Practical: Emphasizes that mathematical work is justified by utility, coherence, and explanatory power rather than ontological truth.
> 
> 
> MORE LIMITATIONS OF CLASSIFICATION SYSTEMS
> 
> 1. Reductionism
> 
> Classification systems force complex, multifaceted subjects into predefined, discrete categories.
> 
> Real-world topics often span multiple domains.
> 
> Example: “Climate change” involves science, politics, economics, ethics—but often must be placed in one dominant category.
> 
> Limitation: Nuanced or interdisciplinary knowledge becomes oversimplified.
> 
> ________________________________
> 
> 2. Rigid Hierarchies
> 
> Most classification systems are hierarchical (trees), assuming that knowledge can be arranged from general → specific.
> But many fields do not follow clean hierarchies.
> 
> Consequences:
> 
> Relationships between subjects that are lateral, cyclical, or network-like are lost.
> 
> Some topics fit multiple parent categories but must be assigned only one.
> 
> ________________________________
> 
> 3. Cultural Bias and Eurocentrism
> 
> Many widely used systems were created in Western institutions during specific historical periods.
> Thus they often reflect:
> 
> Western cultural priorities
> 
> Colonial perspectives
> 
> Christian or Euro-American worldviews
> 
> Gendered assumptions
> 
> Examples:
> 
> Dewey Decimal once grouped non-Christian religions as a single minor section.
> 
> Indigenous knowledge systems do not map neatly onto Western categorizations.
> 
> ________________________________
> 
> 4. Static Categories in a Dynamic Knowledge Landscape
> 
> Knowledge evolves, but classification schemes update slowly.
> 
> Limitations:
> 
> Emerging fields (e.g., AI ethics, quantum biology) lack appropriate categories.
> 
> Outdated terminology persists long after it becomes obsolete.
> 
> ________________________________
> 
> 5. Ambiguity and Boundary Problems
> 
> Subjects don’t always have sharp boundaries.
> 
> “Digital humanities,” “bioinformatics,” “neuroeconomics”—these hybrid fields strain rigid category structures.
> 
> Result: Misclassification or forced placement into inadequate categories.
> 
> ________________________________
> 
> 6. Language-Based Constraints
> 
> Classification systems often depend on the language in which they were originally created.
> 
> Concepts with no direct translation get misrepresented.
> 
> Polysemous words (one term, many meanings) complicate categorization.
> 
> ________________________________
> 
> 7. Ethical and Social Framework Limitations
> 
> Some subjects carry social or moral implications the system fails to handle gracefully.
> 
> Examples:
> 
> LGBTQ+ topics historically hidden or marginalized
> 
> Mental health categories shaped by outdated frameworks
> 
> Stigmatizing terminology baked into classification labels
> 
> ________________________________
> 
> 8. Practical Space Constraints
> 
> Especially in library systems:
> 
> Only a finite number of codes or shelf spaces exist.
> 
> Broad areas get subdivided excessively; others receive disproportionately little granularity.
> 
> Outcome: Arbitrary compression or over-expansion.
> 
> ________________________________
> 
> 9. Authority and Gatekeeping
> 
> Classification presumes that experts can definitively decide how knowledge should be structured.
> 
> But:
> 
> Some knowledge systems (e.g., community knowledge or oral traditions) resist systematization.
> 
> Marginalized groups often have limited influence over classification design.
> 
> ________________________________
> 
> 10. Interoperability Problems
> 
> Different systems don’t align cleanly.
> 
> Translating between Dewey, LCC, MeSH, or scientific taxonomies can distort meaning.
> 
> Metadata loss occurs during crosswalks (mapping between classification systems).
> 
> 
> 
> 
> However, if it helps, a reminder of what is generally accepted:
> 
> 
> 1. maths is a type of KR
> 2. maths is not NL KR *which is what we use in LLMs
> 
> Subsumption
> Subsumption is a key concept in knowledge representation, ontology design, and logic-based AI. It describes an “is-a” hierarchical relationship where one concept is more general and another is more specific.
> Mathematics is a knowledge representation *although it may be understood or defined in other ways
> 
> because it provides:
> 
> Formal symbols (numbers, variables, operators)
> 
> Structured syntax (equations, functions, relations)
> 
> Precise semantics (well-defined meanings)
> 
> Inference rules (logical deduction, proof)
> 
> and much more not related to what we are discussing here
> 
> 
> Other views may also exist, in the vast universe of discourse, that may or may not contribute to the discussion at hand.
> 

Received on Wednesday, 3 December 2025 05:00:41 UTC