Spatial KR: how we encode domains, concepts and relations in K3D

Dear all,

In light of the recent discussion about what counts as “knowledge 
representation” (and how it sits above implementation details like 
PTX/CUDA), I’ve written up a concise specification of how K3D encodes KR 
concepts into a spatial form that both humans and machines can work with.

The new doc is here:

docs/SPATIAL_KR_VISUAL_ENCODING.md
https://github.com/danielcamposramos/Knowledge3D/blob/main/docs/SPATIAL_KR_VISUAL_ENCODING.md

This is not about kernels or GPU backends; it’s about the representation 
layer: how domains of discourse, vocabularies and relations appear in 3D 
space, and how time/adequacy are made visible.

Very briefly:

Domains of discourse

Houses and Rooms represent bounded domains (e.g., AI‑KR, neuroscience, 
reliability).
Garden zones and Museum rooms further partition those domains (current 
vs archival, core vs experimental).
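
Roughly, and with the particular rooms and zones below invented purely 
as examples, the nesting looks like this:

    # Hypothetical nesting of spatial containers onto domains of discourse.
    ai_kr_house = {
        "domain": "AI-KR",                         # bounded domain of discourse
        "rooms": ["vocabularies", "use cases"],    # sub-domains within the house
        "garden_zones": ["core", "experimental"],  # current, actively tended material
        "museum_rooms": ["archival"],              # deprecated / historical material
    }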

Concepts and vocabularies

Each concept is a Node/“star” with an identifier, metadata, and embeddings.
Shape encodes modality (text/image/audio/video); this mapping is 
consistent across the Galaxy and the Knowledge Garden.
Ontologies appear as trees in the Garden (roots/branches/leaves), guided 
by embeddings but with explicit parent→child edges.
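
To make that concrete, here is a minimal sketch of what a concept node 
might carry; the field names are illustrative, not the actual K3D schema:

    from dataclasses import dataclass

    @dataclass
    class ConceptNode:
        id: str                       # stable identifier, e.g. an IRI
        label: str                    # human-readable name
        modality: str                 # "text" | "image" | "audio" | "video" -> shape
        embedding: list[float]        # guides placement in the Galaxy / Garden
        parent_id: str | None = None  # explicit parent->child edge in an ontology tree
        created_at: str = ""          # ISO 8601 timestamps feeding the recency cues
        last_updated: str = ""
        last_accessed: str = ""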

Relations and features

Rays and edges encode relations:
direction → orientation toward neighbors/parents/prototypes;
length → strength / span of the relation;
thickness → weight (e.g., frequency or subtree mass);
style → type (straight = structural, slightly curly = associative, very 
curly = speculative);
color → modality + “temperature” (recency/activity).
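
Read off the data side, one possible (hypothetical) shape for a relation 
record, together with one way of deriving those visual channels from it:

    from dataclasses import dataclass

    @dataclass
    class Relation:
        source_id: str
        target_id: str
        kind: str           # "structural" | "associative" | "speculative"
        weight: float       # e.g. frequency or subtree mass
        strength: float     # strength / span of the relation
        modality: str       # carried over from the endpoints
        last_accessed: str  # feeds the color "temperature"

    def visual_channels(rel: Relation) -> dict:
        # thickness <- weight, length <- strength, style <- kind, color <- modality
        style = {"structural": "straight",
                 "associative": "slightly curly",
                 "speculative": "very curly"}[rel.kind]
        return {"thickness": rel.weight,
                "length": rel.strength,
                "style": style,
                "color": rel.modality}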

Time and adequacy cues

Each node/relation carries created_at, last_updated, last_accessed, etc.
Stars glow and fade with activity; rays change temperature over time.
The Garden/Museum separation makes it clear which parts are the “living” 
ontology and which are archived history.
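
One simple way to turn those timestamps into the “temperature” cue is an 
exponential decay over time since last access; the decay curve and the 
30-day half-life below are my own illustrative choices, not part of the 
doc:

    import math
    from datetime import datetime, timezone

    def temperature(last_accessed_iso: str, half_life_days: float = 30.0) -> float:
        # Map recency onto 0..1: 1.0 right after access, fading towards 0.
        # Assumes a timezone-aware ISO 8601 timestamp, e.g. "2025-11-14T09:00:00+00:00".
        last = datetime.fromisoformat(last_accessed_iso)
        age_days = (datetime.now(timezone.utc) - last).total_seconds() / 86400.0
        return math.exp(-math.log(2.0) * age_days / half_life_days)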

Garden and Museum

The Knowledge Garden is the “ontology greenhouse”: circular, zone‑based, 
with fractal trees guided by semantics.
The Museum holds deprecated or very large structures, including “portal 
cubes” that stand in for whole archived galaxies/houses (with metadata 
and on‑demand loading).
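
The portal-cube contract is essentially “metadata now, geometry later”; 
a hypothetical sketch of that behaviour (names are mine, not the doc’s):

    from dataclasses import dataclass
    from typing import Any, Callable, Optional

    @dataclass
    class PortalCube:
        target_id: str                 # which archived galaxy/house this cube stands for
        summary: dict                  # lightweight metadata shown on the cube itself
        _loaded: Optional[Any] = None

        def enter(self, loader: Callable[[str], Any]) -> Any:
            # On-demand loading: fetch the full structure only when it is visited.
            if self._loaded is None:
                self._loaded = loader(self.target_id)
            return self._loaded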

The idea is that:

a sighted human can “read” domains, concepts, relations, recency and 
status from the spatial encoding;
AI systems read the same underlying metadata, embeddings and logs;
all of this sits at the KR level: it’s about how we structure and 
present domains of discourse and their vocabularies, not about any 
particular GPU ISA.

In other words, this document is an attempt to make explicit the design 
choices behind phrases like “stars and galaxies” and “knowledge 
gardens,” and to show how they are deliberately tied back to domains of 
discourse, adequacy, and neurosymbolic integration – the themes that 
Dave and Milton have been raising.

If this kind of spatial encoding of vocabularies and relations is of 
interest, I’d be glad to iterate on the document with the group and, if 
appropriate, align it with any emerging AI‑KR vocabulary work 
(RDF/JSON‑LD terms for rays, domains, time, etc.).
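
Purely as a strawman, and with every term below invented for 
illustration rather than taken from any existing vocabulary, a ray and 
its timing data might eventually be expressed along these lines:

    import json

    example_ray = {
        "@context": {"k3d": "https://example.org/k3d#"},
        "@type": "k3d:Ray",
        "k3d:source": "k3d:concept/hippocampus",
        "k3d:target": "k3d:concept/episodic-memory",
        "k3d:relationKind": "associative",
        "k3d:weight": 0.42,
        "k3d:lastAccessed": "2025-11-14T09:00:00+00:00",
    }

    print(json.dumps(example_ray, indent=2))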

Best regards,
Daniel
