Re: Misconceptions about what knowledge representation truly is

Milton, Dave, all,

I strongly agree that a central task for this CG is to pin down what we 
mean by “knowledge that is mathematically representable in 
machine‑readable format,” and to tie that explicitly to domains of 
discourse and adequacy rather than to an illusion of total coverage.

From an engineering perspective, I tend to say: engineering is the art 
of approximation.

In practice that means:

- We work with bounded domains of discourse, not “all knowledge”;
- We accept that any KR artifact is an approximate model of that 
domain, under specific hardware, energy and latency constraints;
- We judge it by adequacy for a purpose (in this domain, for these 
tasks), not by an unreachable notion of completeness.

In my own experiments with spatial KR and GPU‑native implementations, 
I’ve found that being honest about those boundaries changes the design: 
you naturally start to treat the domain of discourse as a first‑class 
object, you track the fidelity of your approximations explicitly, and 
you stop believing that merely scaling dimensions or parameters will 
somehow converge to “true” knowledge.
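
To make that concrete, here is a deliberately minimal Python sketch of 
what treating the domain of discourse as a first‑class object with 
explicit fidelity tracking can look like. Every name in it 
(DomainOfDiscourse, Approximation, adequate_for, all the fields) is 
invented for this email; it is not from any real library or from my 
actual GPU code:

    from dataclasses import dataclass

    # Hypothetical sketch: the domain is an explicit object, and
    # adequacy is a relation between an approximation and a task's
    # requirements, never an absolute property of the model.

    @dataclass(frozen=True)
    class DomainOfDiscourse:
        name: str                 # e.g. "single-floor indoor space"
        covers: frozenset[str]    # kinds of things the model talks about
        excludes: frozenset[str]  # kinds of things it deliberately ignores

    @dataclass(frozen=True)
    class Approximation:
        domain: DomainOfDiscourse
        resolution_m: float       # spatial fidelity actually represented
        latency_budget_ms: float  # constraint it was designed under

        def adequate_for(self, required_resolution_m: float) -> bool:
            # "Good enough" only relative to a task: the representation
            # must be at least as fine-grained as the task demands.
            return self.resolution_m <= required_resolution_m

    floor = DomainOfDiscourse(
        name="single-floor indoor space",
        covers=frozenset({"wall", "door", "free-space cell"}),
        excludes=frozenset({"furniture", "people"}),
    )
    grid = Approximation(domain=floor, resolution_m=0.05,
                         latency_budget_ms=2.0)
    print(grid.adequate_for(0.10))  # True: coarse path planning is fine
    print(grid.adequate_for(0.01))  # False: fine manipulation is out of scope

The specific fields do not matter; what matters is that the boundary 
and the budget live in the data structure itself rather than in a 
comment or in someone’s head.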

That seems very consistent with the Gödelian view Milton is bringing in 
and with Dave’s distinction between formal explainability and practical 
adequacy. I’d be very interested in seeing this CG adopt something like 
the following (a toy sketch of what such a declaration could look like 
comes after the list):

- A requirement that every KR artifact we discuss identifies its domain 
of discourse;
- An explicit statement of its adequacy assumptions (what it’s good 
enough for, and under which constraints);
- A clear separation between what we can model mathematically and what 
we have to leave to human judgment and language.
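
For the first two of those, the “requirement” could be as lightweight 
as a mandatory manifest shipped with each artifact. Again purely 
illustrative, in the same Python as above, with field names I am making 
up for this message rather than taking from any existing standard:

    # Hypothetical adequacy manifest; not an existing schema or vocabulary.
    manifest = {
        "domain_of_discourse": "urban road network, topology only",
        "adequate_for": ["route planning", "reachability queries"],
        "constraints": {"memory_gb": 4, "query_latency_ms": 50},
        "left_to_human_judgment": [
            "traffic law interpretation",
            "fairness of routing decisions",
        ],
    }

Anything not listed under "adequate_for" is, by construction, outside 
the artifact’s claims, which is exactly the separation the third point 
asks for.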

To me, that would go a long way toward “setting the record straight” on 
KR in the era of large models, without pretending that we can or should 
represent everything.

Best regards,
Daniel

Received on Tuesday, 18 November 2025 10:39:43 UTC