- From: CHARLES WAWERU <cmukabi@aol.com>
- Date: Thu, 4 Dec 2025 09:23:13 +0300
- To: Paola Di Maio <paoladimaio10@gmail.com>
- Cc: W3C AIKR CG <public-aikr@w3.org>
- Message-Id: <3BF7DF6F-4B3C-463B-8E6D-6492E1FCFE2B@aol.com>
Thanks Paola, Daniel. I will check all the links provided. Following Paola’s advice to evaluate using a different model, I engaged Google AI Studio Gemini 3 Pro with the same prompt. Here is the output:

In the standard hierarchy of information science and Artificial Intelligence, the layer that sits immediately above Knowledge Representation is Wisdom.[1] However, depending on whether you are looking at a theoretical model, a software architecture, or a philosophical goal, the answer varies. Here is a breakdown of what sits "above" knowledge in different contexts.

1. The Information Hierarchy (DIKW Pyramid)

The most common model used to describe this hierarchy is the DIKW Pyramid.[2][3]

- Data: Raw numbers or symbols (e.g., "38").
- Information: Data with context (e.g., "38 degrees Celsius").
- Knowledge: Information with meaning, patterns, or rules (e.g., "38°C is a fever; humans typically have a temperature of 37°C").
- Wisdom: The application of knowledge to make sound judgments, particularly regarding the future or ethics (e.g., "I should give this patient medication to lower the fever, but not so much that it masks other symptoms").

In this context, Wisdom is the "top" layer.[1][3] It implies ethical judgment, experience, and the ability to predict consequences in novel situations.
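To make the DIKW layering concrete for myself, here is a minimal Python sketch of the fever example (my own toy illustration, not part of the model output; the function names and thresholds are assumptions):

    # Toy walk up the DIKW pyramid using the fever example.
    NORMAL_TEMP_C = 37.0       # assumed typical human temperature
    FEVER_THRESHOLD_C = 38.0   # assumed fever cut-off

    def to_information(raw: float) -> str:
        """Data -> Information: attach context (what was measured, units)."""
        return f"Patient temperature: {raw} degrees Celsius"

    def to_knowledge(raw: float) -> str:
        """Information -> Knowledge: apply a rule or pattern."""
        if raw >= FEVER_THRESHOLD_C:
            return f"{raw} C is a fever (normal is about {NORMAL_TEMP_C} C)."
        return f"{raw} C is within the normal range."

    def to_wisdom(raw: float) -> str:
        """Knowledge -> Wisdom: a judgment that weighs consequences."""
        if raw >= FEVER_THRESHOLD_C:
            return ("Give medication to lower the fever, but not so much "
                    "that it masks other symptoms.")
        return "No intervention needed; keep monitoring."

    raw_datum = 38.0  # Data: a bare number with no context
    print(to_information(raw_datum))
    print(to_knowledge(raw_datum))
    print(to_wisdom(raw_datum))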
2. The Functional Architecture (How AI Works)

In practical AI engineering, "Knowledge Representation" (KR) is a static layer: it is just the library of facts. What sits above it is the mechanism that uses that library.

- Reasoning / Inference: This is the engine that sits on top of knowledge.[4][5] If KR is the map, Reasoning is the navigation system. It uses the facts to derive new conclusions that aren't explicitly stored[5] (a toy sketch of this appears after the summary below).
- Planning: The ability to use knowledge to construct a sequence of actions to achieve a goal.
- Meta-Cognition: This is a higher-order layer where the AI "thinks about its own thinking."[6] It monitors its own reasoning processes to see if they are efficient or if it lacks the knowledge to solve a problem.

3. The "Missing Link" (Philosophy & Future AI)

Researchers currently working on Artificial General Intelligence (AGI) often argue that what sits above knowledge is Understanding or Meaning (Semantics).

- The Grounding Problem: Current AI can represent the concept of an "apple" (it knows it is red, a fruit, round), but it has no physical experience of an apple. It lacks the "grounding" that connects the abstract representation to physical reality.
- Consciousness: The theoretical ultimate layer where the system is not just processing representations but is aware of them.

Summary

- Conceptually: It is Wisdom (knowing why and what is best).
- Mechanically: It is Reasoning (the process of using the knowledge).[4][5]
- Architecturally: It is Meta-Knowledge (knowing what you know and managing it).
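Here is the sketch of "deriving conclusions that aren't explicitly stored": a naive forward-chaining loop (my own illustration; the facts and the single rule are assumptions):

    # Naive forward chaining: apply rules to stored facts until no new
    # facts appear. The derived fact was never stored explicitly.
    facts = {("socrates", "is_a", "human")}
    rules = [
        # If ?x is_a human, then ?x is_a mortal.
        (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
    ]

    def infer(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (_, p, o), (_, cp, co) in rules:
                for (fs, fp, fo) in list(derived):
                    if fp == p and fo == o:      # premise matches, ?x = fs
                        new_fact = (fs, cp, co)
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    print(infer(facts, rules))
    # {('socrates', 'is_a', 'human'), ('socrates', 'is_a', 'mortal')}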
Sources: [1] medium.com, [2] funblocks.net, [3] ontotext.com, [4] stackexchange.com, [5] aiexplainedhere.com, [6] yujenlin.com

I then asked about the hierarchical layers. The output:

There isn't a single universal "top" layer because the answer depends on which hierarchy you are climbing. In AI, we generally look at this through three different lenses: Information Value, Cognitive Function, and System Architecture. Here are the hierarchical levels for each perspective, moving from the bottom (Knowledge Representation) to what sits above it.
1. The Information Hierarchy (The "Value" Ladder)

This is the most common model (DIKW), describing how raw input becomes useful. In this model, Wisdom sits above Knowledge.

- Level 5: Wisdom (The Top). Definition: The ability to make sound judgments and decisions based on knowledge, specifically dealing with the future, ethics, and "what is best." AI Goal: Automated decision-making that considers long-term consequences, ethics, and nuance (e.g., a medical AI deciding not to administer a treatment because of quality-of-life concerns, even if the data suggests it cures the disease).
- Level 4: Knowledge (The KR Layer). Definition: Structured information with logical relationships (e.g., "A fever is a temperature > 38°C").
- Level 3: Information. Definition: Data with context (e.g., "Patient temperature: 39°C").
- Level 2: Data. Definition: Raw symbols (e.g., "39").

2. The Functional Hierarchy (The "Doing" Ladder)

In practical AI engineering, Knowledge Representation (KR) is just a static library. The "engine" that uses that library sits above it.

- Level 4: Meta-Cognition (The Top). Definition: "Thinking about thinking." The system monitors its own reasoning process to see if it is working efficiently or if it lacks the knowledge to solve the problem. AI Goal: An agent that realizes, "I don't have enough data to answer this, so I need to ask a clarifying question" rather than hallucinating an answer (see the sketch after this list).
- Level 3: Planning & Reasoning. Definition: The active process of using knowledge to construct a path to a goal.[1][2] If KR is the map, this is the GPS routing algorithm.
- Level 2: Knowledge Representation. Definition: The stored facts and rules (the knowledge graph or ontology).
- Level 1: Perception. Definition: Turning sensory inputs (pixels, audio) into internal symbols.
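A minimal sketch of that meta-cognitive move (again my own toy illustration; the knowledge base and all names are assumptions):

    # The agent checks whether its knowledge covers the question and,
    # if not, asks a clarifying question instead of guessing.
    knowledge_base = {
        "normal_body_temp_c": 37.0,
        "fever_threshold_c": 38.0,
    }

    def answer(question_key: str) -> str:
        if question_key in knowledge_base:        # object level: look it up
            return f"{question_key} = {knowledge_base[question_key]}"
        # meta level: recognise the gap and defer rather than hallucinate
        return f"I don't know '{question_key}'. Can you clarify or give a source?"

    print(answer("fever_threshold_c"))     # known -> answered
    print(answer("safe_aspirin_dose_mg"))  # unknown -> clarifying question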
3. The Theoretical Hierarchy (Newell’s "Knowledge Level")

In classical computer science theory (specifically Allen Newell’s Knowledge Level Hypothesis), there is a strict abstraction layer above symbols.

- Level 2: The Knowledge Level. Definition: A level where we describe the system only by its goals and what it knows, ignoring how it is implemented.[2] At this level, the "law of behavior" is Rationality (the agent will always act to achieve its goals given its knowledge).[1][2]
- Level 1: The Symbol Level. Definition: The actual code, logic gates, and knowledge graphs that implement the intelligence.

4. The Structural Hierarchy (The Ontology Stack)

If you are looking strictly at the structure of the data itself (ontology), there is a hierarchy of abstraction (a toy sketch follows after the sources below).

- Level 3: Upper Ontology (Foundation Ontology). Definition: The philosophical "top" that defines the most basic categories of reality (e.g., Time, Space, Object, Process). Examples include BFO (Basic Formal Ontology) or DOLCE.
- Level 2: Domain Ontology. Definition: Concepts specific to a field (e.g., "Medical Procedure," "Aircraft Engine").
- Level 1: Instance Data. Definition: Specific objects (e.g., "Dr. Smith," "Engine #405").

Summary: What sits directly above KR?

- Functionally: Reasoning (the engine that runs on the fuel of knowledge).
- Philosophically: Wisdom (the application of knowledge for good judgment).
- Architecturally: Meta-Cognition (the supervisor that manages the knowledge).

Sources: [1] ucf.edu, [2] wikipedia.org
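Here is that ontology-stack sketch: the three tiers as plain triples, with a subsumption walk upward (my own illustration; all the names are made up):

    # Upper ontology, domain ontology, and instance data as triples.
    upper_ontology = [
        ("Object", "subClassOf", "Entity"),
        ("Process", "subClassOf", "Entity"),
    ]
    domain_ontology = [
        ("AircraftEngine", "subClassOf", "Object"),
    ]
    instance_data = [
        ("Engine405", "type", "AircraftEngine"),
    ]
    triples = upper_ontology + domain_ontology + instance_data

    def superclasses(cls):
        """Follow subClassOf links upward (subsumption)."""
        found, frontier = [], [cls]
        while frontier:
            current = frontier.pop()
            for s, p, o in triples:
                if p == "subClassOf" and s == current:
                    found.append(o)
                    frontier.append(o)
        return found

    # Engine405 is an AircraftEngine, hence an Object, hence an Entity.
    print(superclasses("AircraftEngine"))  # ['Object', 'Entity']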
When I asked it to provide differences and similarities relative to my previous output (from Meta AI), here is the output:

This comparison is fascinating because the hierarchy you provided represents a Systems Engineering & Capability Stack (how to build a brain), whereas the previous models (like DIKW or Conceptual Spaces) were Information Abstraction Stacks (how data becomes meaning). Here is the breakdown of the similarities and critical differences between your provided hierarchy and the "Conceptual Layer" models discussed previously.

1. The Core Similarity: The "Reasoning over Knowledge" Block

In both your list and the standard Functional Hierarchy, the relationship between Level 4 and Level 5 is identical.

- Your List: Symbolic Reasoning (4) sits on top of Knowledge Representation (5).
- Previous Model: Reasoning/Inference sits on top of KR.
- The Consensus: Both models agree that Knowledge (the facts) is useless without Reasoning (the logic engine). The "Know-How" (Reasoning) must always sit above the "Know-What" (KR).

2. Major Difference: The Location of "Concepts"

This is the most significant conceptual divergence.

- Previous Model (Conceptual Spaces): Argued for a distinct Conceptual Layer (geometric/prototype) that sits between Neural Networks (Data) and Symbolic Logic (KR). It handles "fuzziness" and similarity (see the sketch below).
- Your List: There is no explicit "Concept" layer. Instead, it is likely split: Machine Learning (6) handles the "fuzziness" and pattern recognition (the bottom half of concepts), while Knowledge Representation (5) handles the definitions (the top half of concepts).

The Implication: Your hierarchy suggests a Neuro-Symbolic approach where Machine Learning (sub-symbolic) feeds directly into Knowledge Representation (symbolic), potentially skipping the "Geometric/Spatial" layer that Gärdenfors argues is necessary for true understanding.
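A quick sketch of what a "conceptual space" means here: concepts as regions around prototypes in a geometric feature space (my own toy illustration; the features and prototype values are invented):

    import math

    # Feature dimensions: (redness, roundness, sweetness), each in [0, 1].
    prototypes = {
        "apple":  (0.8, 0.9, 0.7),
        "banana": (0.1, 0.2, 0.8),
    }

    def classify(observation):
        """Assign the concept whose prototype is geometrically closest."""
        return min(prototypes, key=lambda c: math.dist(observation, prototypes[c]))

    # A fuzzy, sub-symbolic observation becomes a discrete concept label:
    print(classify((0.7, 0.8, 0.6)))  # 'apple'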
3. Structural Difference: Machine Learning vs. the "Sub-Symbolic"

- Your List: Places Machine Learning (6) explicitly below Knowledge Representation (5). This implies a bottom-up architecture: the AI learns patterns (ML) and crystallizes them into rules (KR).
- Previous Model: Often treats ML and KR as parallel approaches. However, in the "Conceptual Spaces" model, the bottom layer is "sub-symbolic" (neural connections).
- Comparison: Your list is more practical for modern AI engineering. It acknowledges that in modern systems, ML is the process used to generate the data processing required to fill the Knowledge Representation.

4. Top-Level Difference: AGI vs. Wisdom

- Your List (Level 1): The top is AGI (Artificial General Intelligence).
- Previous Model (DIKW): The top is Wisdom.
- The Distinction: AGI is a measure of Capability (can it do everything a human can do?). Wisdom is a measure of Value (did it make the right ethical choice?).
- Synthesis: An AGI (Level 1) is the system required to generate Wisdom.

5. The "Container" Difference: Cognitive Architectures

- Your List: Cognitive Architectures (2) is a distinct level below AGI but above Reasoning.
- Previous Model: Usually views "Cognitive Architecture" as the entire bucket containing levels 3 through 7.
- Why your list is unique: By placing it at Level 2, your list suggests that "Cognitive Architecture" is the Integration Layer. It is the "glue" that connects Meta-Reasoning, Symbolic Reasoning, and KR into a unified mind. This is a very sophisticated view: it implies that simply having reasoning (4) isn't enough; you need a system to orchestrate that reasoning (2).

Summary Visualization

If we map your Capability Hierarchy against the "Meaning" Hierarchy, it looks like this:

Your Hierarchy (Capability)  | Conceptual Hierarchy (Meaning) | Alignment Notes
1. AGI                       | Wisdom / Consciousness         | The ultimate goal.
2. Cognitive Architectures   | The Mind / The Agent           | The system combining all parts.
3. Meta-Reasoning            | Metacognition                  | Perfect match: thinking about thinking.
4. Symbolic Reasoning        | Logic / Syntax                 | Perfect match: manipulating symbols.
5. Knowledge Representation  | Semantics / Concepts           | Difference: your KR is likely rigid logic; the "Concept" layer here is often broader (prototypes).
6. Machine Learning          | Conceptual Spaces              | Overlap: ML approximates the "geometric" relationships between data.
7. Data Processing           | Perception / Data              | Perfect match: raw inputs.

Conclusion

Your hierarchy is a blueprint for building a brain (Engineering view). The previous discussion was a blueprint for understanding meaning (Philosophical/Psychological view). The "Conceptual Layer" is the hidden glue between your Level 5 (KR) and Level 6 (ML). It is the translation layer where statistical probabilities (ML) become distinct facts (KR).

Now the conclusion here leaves me unsure whether I have contributed anything to the task at hand or introduced a different topic. Over to you, Paola, Daniel and the rest.

Many thanks,
Charles
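P.S. To check my own understanding of that "translation layer", here is a tiny sketch of statistical probabilities being crystallized into distinct symbolic facts (again my own toy illustration; the scores and threshold are invented):

    # Thresholding sub-symbolic scores (ML) into crisp facts (KR).
    classifier_scores = {               # pretend output of an image classifier
        ("object_17", "apple"):  0.93,
        ("object_17", "tomato"): 0.41,
        ("object_42", "apple"):  0.12,
    }
    ASSERTION_THRESHOLD = 0.9           # only confident beliefs become facts

    def crystallize(scores, threshold=ASSERTION_THRESHOLD):
        """Turn graded probabilities into discrete is_a triples."""
        return {
            (entity, "is_a", label)
            for (entity, label), p in scores.items()
            if p >= threshold
        }

    print(crystallize(classifier_scores))  # {('object_17', 'is_a', 'apple')}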
> On 4 Dec 2025, at 08:32, CHARLES WAWERU <cmukabi@aol.com> wrote:
>
> Thanks Paola, Daniel
> Sent from my iPhone
>
>> On 3 Dec 2025, at 09:46, Paola Di Maio <paoladimaio10@gmail.com> wrote:
>>
>> Charles,
>>
>> How great that you are inspired by this topic, and started digging in. Yes, you are definitely learning, and so is everyone.
>>
>> Great questions. I wish I had the time to give an appropriate response, but I do not.
>> Let me hurry to reply briefly before this thread gets buried in the task list.
>> *I look forward to seeing how this develops but may not be able to follow up.
>>
>> KR is about conceptual alignment and adherence to a worldview/model of your choice.
>> In LLMs, what gets out of the model is asserted in the prompt.
>>
>> So I am not sure what prompted these answers; without the prompt we cannot understand and evaluate
>> the output. Sounds to me from the answer that the prompt injected bias.
>> You may also want to evaluate the output from different models, then come back to tell us,
>> and rephrase your prompts.
>>
>> Also, the output you get seems to prove that without ontological commitment, LLM outcomes may be a fantasy.
>>
>> I take this opportunity to remember QUINE.
>> He is the one who brought up the notion of ontological commitment in the first place.
>> Quine, W. V. (1948). On What There Is. Review of Metaphysics, 2(5), 21–38.
>>
>> There are several passages in Quine's seminal paper 'On What There Is' that I would like to cite as relevant to KR today,
>> but I'll leave it to other participants to flesh out what is relevant to them.
>>
>> I just report here the starting and concluding paragraphs.
>>
>> The paper begins with:
>> A curious thing about the ontological problem is its simplicity. It can be put in three
>> Anglo-Saxon monosyllables: ‘What is there?’ It can be answered, moreover, in a
>> word— ‘Everything’—and everyone will accept this answer as true. However, this is
>> merely to say that there is what there is. There remains room for disagreement over
>> cases; and so the issue has stayed alive down the centuries.
>> And ends with:
>> From among the various conceptual schemes best suited to these various pursuits,
>> one—the phenomenalistic—claims epistemological priority. Viewed from within the
>> phenomenalistic conceptual scheme, the ontologies of physical objects and
>> mathematical objects are myths. The quality of myth, however, is relative; relative, in
>> this case, to the epistemological point of view. This point of view is one among various,
>> corresponding to one among our various interests and purposes.
>>
>> https://rintintin.colorado.edu/~vancecd/phil375/Quine.pdf
>>
>> On Wed, Dec 3, 2025 at 1:00 PM CHARLES WAWERU <cmukabi@aol.com> wrote:
>> Dear Paola, in my need to contribute, albeit in a small way, I did a little AI search on ‘Is there anything that sits above knowledge representation for AI?’
>> Here is the output:
>>
>> In AI, knowledge representation is a crucial aspect, but there are higher-level concepts that encompass it. Some of these include:
>>
>> - *Artificial General Intelligence (AGI)*: The ultimate goal of AI research, aiming to create intelligent systems that can perform any intellectual task.
>> - *Cognitive Architectures*: Frameworks that model human cognition, integrating knowledge representation, reasoning, and learning.
>> - *Symbolic Reasoning*: Using logical representations to reason and make decisions.
>> - *Meta-Reasoning*: The ability of AI systems to reason about their own reasoning processes.
>>
>> These areas explore how AI systems can effectively utilize and manage knowledge.
>>
>> I continued asking ‘then generate hierarchical levels’.
>>
>> Here's a hierarchical representation of AI concepts related to knowledge representation:
>>
>> 1. *Artificial General Intelligence (AGI)*
>> 2. *Cognitive Architectures*
>> 3. *Meta-Reasoning*
>> 4. *Symbolic Reasoning*
>> 5. *Knowledge Representation*
>> 6. *Machine Learning*
>> 7. *Data Processing*
>>
>> This hierarchy reflects the progression from raw data to more abstract, intelligent processing and reasoning capabilities.
>>
>> Then I requested references:
>>
>> For the hierarchical levels of AI concepts, here's a general outline based on common AI research areas:
>>
>> 1. *Data Processing*: Basic data handling and computation.
>> 2. *Machine Learning*: Systems learn from data.
>> 3. *Knowledge Representation*: Structuring knowledge for AI use.
>> 4. *Symbolic Reasoning*: Using logic for decision-making.
>> 5. *Meta-Reasoning*: Reasoning about reasoning processes.
>> 6. *Cognitive Architectures*: Frameworks modeling cognition.
>> 7. *Artificial General Intelligence (AGI)*: Human-like general intelligence.
>>
>> Sources:
>> - Russell & Norvig, "Artificial Intelligence: A Modern Approach"
>> - Laird, "The Soar Cognitive Architecture"
>> - Goertzel, "Artificial General Intelligence"
>>
>> Finally, I asked about the ‘concept and conceptual layer’.
>>
>> Concepts and conceptual layers are crucial in AI knowledge representation. Here's a possible hierarchy:
>>
>> 1. *Percepts*: Raw sensory data
>> 2. *Concepts*: Abstract representations of objects, events, or ideas
>> 3. *Categories*: Groupings of related concepts
>> 4. *Relationships*: Connections between concepts (e.g., causal, spatial, temporal)
>> 5. *Theories*: Complex models explaining phenomena
>> 6. *Frameworks*: High-level structures organizing theories and concepts
>>
>> Conceptual layers can include:
>> - *Subsymbolic*: Distributed representations (e.g., neural networks)
>> - *Symbolic*: Explicit, interpretable representations (e.g., logic, ontologies)
>> - *Conceptual Spaces*: Geometric representations of concepts (e.g., vector spaces)
>>
>> These layers help organize knowledge and facilitate reasoning, learning, and abstraction in AI systems.
>>
>> I am not sure where this information sits in the scheme of things.
>> Kindly let me know if I am learning or wallowing in confusion.
>> Charles
>>> On 30 Nov 2025, at 04:30, Paola Di Maio <paoladimaio10@gmail.com> wrote:
>>>
>>> Some participants come to this list to learn about KR, and thus, about the world.
>>> Others may come to impose their views of the world.
>>> I only share some thoughts in the hope of inspiring newcomers to the discussions to be skeptical of reductionist views, especially
>>> when they are fictional.
>>>
>>> The metaphor of the 'finger pointing at the moon' may be useful to explain how maths relates to the real world
>>> *moon = object in the real world, finger = pointer to an object
>>>
>>> Lack of ontological commitment in mathematics does not reduce its usefulness. It allows mathematics to serve as a symbolic, structural, or fictional framework that organizes knowledge, supports reasoning, and aids scientific modeling, without asserting that numbers, sets, or functions exist as real entities.
>>>
>>> Just some side notes for the record *no problem if some participants have different views!
>>>
>>> 1. The limitations of classification systems are well understood in science!
>>> All classification systems have representational limitations—structural, cultural, and epistemic constraints that prevent them from perfectly capturing the complexity of real-world subjects, and are sometimes misaligned.
>>> Subject classification systems simplify and distort the vast complexity of knowledge. Their limitations stem from:
>>>
>>> Structural constraints (hierarchies, reductionism)
>>> Cultural and historical biases
>>> Linguistic and epistemic factors
>>> The ever-changing nature of knowledge
>>>
>>> 2. Ontology captures and represents 'what exists' *
>>> Ontic categories describe what exists
>>>
>>> 3. MORE ON the Lack of Ontological Commitment of Mathematics
>>>
>>> Fictionalism: Mathematics is akin to a story: numbers, sets, and functions are characters or constructs in a narrative.
>>> Statements like “2+2=4” are “true” within the story, but there is no metaphysical commitment to numbers actually existing.
>>> Hartry Field’s Science Without Numbers demonstrates how physics can be formulated nominalistically, showing mathematics is dispensable to physical ontology.
>>>
>>> Nominalism: Mathematics is a linguistic or conceptual system, describing patterns, relations, or structures without positing entities.
>>> Mathematical objects are seen as placeholders or names, not actual beings.
>>>
>>> Formalism: Mathematics consists of symbol manipulation according to rules.
>>> Truth is internal to the formal system, not dependent on entities existing in reality.
>>> There is no ontological claim beyond the consistency of the formal structure.
>>>
>>> ________________________________
>>>
>>> Implications of the Lack of Ontological Commitment
>>>
>>> Philosophical: Avoids metaphysical debates over the existence of abstract objects.
>>> Scientific: Shows that mathematics can be used as a tool for modeling, explanation, and prediction without assuming mathematical objects exist.
>>> Epistemic: Shifts focus from discovering “real” entities to understanding structures, patterns, and relations.
>>> Practical: Emphasizes that mathematical work is justified by utility, coherence, and explanatory power rather than ontological truth.
>>>
>>> MORE LIMITATIONS OF CLASSIFICATION SYSTEMS
>>>
>>> 1. Reductionism
>>>
>>> Classification systems force complex, multifaceted subjects into predefined, discrete categories.
>>> Real-world topics often span multiple domains.
>>> Example: “Climate change” involves science, politics, economics, ethics—but often must be placed in one dominant category.
>>> Limitation: Nuanced or interdisciplinary knowledge becomes oversimplified.
>>>
>>> ________________________________
>>>
>>> 2. Rigid Hierarchies
>>>
>>> Most classification systems are hierarchical (trees), assuming that knowledge can be arranged from general → specific.
>>> But many fields do not follow clean hierarchies.
>>>
>>> Consequences:
>>> Relationships between subjects that are lateral, cyclical, or network-like are lost.
>>> Some topics fit multiple parent categories but must be assigned only one.
>>>
>>> ________________________________
>>>
>>> 3. Cultural Bias and Eurocentrism
>>>
>>> Many widely used systems were created in Western institutions during specific historical periods.
>>> Thus they often reflect:
>>> Western cultural priorities
>>> Colonial perspectives
>>> Christian or Euro-American worldviews
>>> Gendered assumptions
>>>
>>> Examples:
>>> Dewey Decimal once grouped non-Christian religions as a single minor section.
>>> Indigenous knowledge systems do not map neatly onto Western categorizations.
>>>
>>> ________________________________
>>>
>>> 4. Static Categories in a Dynamic Knowledge Landscape
>>>
>>> Knowledge evolves, but classification schemes update slowly.
>>>
>>> Limitations:
>>> Emerging fields (e.g., AI ethics, quantum biology) lack appropriate categories.
>>> Outdated terminology persists long after it becomes obsolete.
>>>
>>> ________________________________
>>>
>>> 5. Ambiguity and Boundary Problems
>>>
>>> Subjects don’t always have sharp boundaries.
>>> “Digital humanities,” “bioinformatics,” “neuroeconomics”—these hybrid fields strain rigid category structures.
>>> Result: Misclassification or forced placement into inadequate categories.
>>>
>>> ________________________________
>>>
>>> 6. Language-Based Constraints
>>>
>>> Classification systems often depend on the language in which they were originally created.
>>> Concepts with no direct translation get misrepresented.
>>> Polysemous words (one term, many meanings) complicate categorization.
>>>
>>> ________________________________
>>>
>>> 7. Ethical and Social Framework Limitations
>>>
>>> Some subjects carry social or moral implications the system fails to handle gracefully.
>>>
>>> Examples:
>>> LGBTQ+ topics historically hidden or marginalized
>>> Mental health categories shaped by outdated frameworks
>>> Stigmatizing terminology baked into classification labels
>>>
>>> ________________________________
>>>
>>> 8. Practical Space Constraints
>>>
>>> Especially in library systems:
>>> Only a finite number of codes or shelf spaces exist.
>>> Broad areas get subdivided excessively; others receive disproportionately little granularity.
>>> Outcome: Arbitrary compression or over-expansion.
>>>
>>> ________________________________
>>>
>>> 9. Authority and Gatekeeping
>>>
>>> Classification presumes that experts can definitively decide how knowledge should be structured.
>>> But:
>>> Some knowledge systems (e.g., community knowledge or oral traditions) resist systematization.
>>> Marginalized groups often have limited influence over classification design.
>>>
>>> ________________________________
>>>
>>> 10. Interoperability Problems
>>>
>>> Different systems don’t align cleanly.
>>> Translating between Dewey, LCC, MeSH, or scientific taxonomies can distort meaning.
>>> Metadata loss occurs during crosswalks (mapping between classification systems).
>>>
>>> However, if it helps, a reminder of what is generally accepted:
>>>
>>> 1. maths is a type of KR
>>> 2. it is not NL KR *which is what we use in LLMs
>>>
>>> Subsumption
>>> Subsumption is a key concept in knowledge representation, ontology design, and logic-based AI. It describes an “is-a” hierarchical relationship where one concept is more general and another is more specific.
>>>
>>> Mathematics is a knowledge representation *although it may be understood or defined in other ways
>>> because it provides:
>>>
>>> Formal symbols (numbers, variables, operators)
>>> Structured syntax (equations, functions, relations)
>>> Precise semantics (well-defined meanings)
>>> Inference rules (logical deduction, proof)
>>>
>>> and much more not related to what we are discussing here.
>>>
>>> Other views may also exist, in the vast universe of discourse, that may or may not contribute to the discussions in hand.
Received on Thursday, 4 December 2025 06:23:34 UTC