Reification UX and Spatial UI Architecture (K3D)

Adam,

Thank you for the thoughtful UX exploration around “article about 
whether an Action adheresTo a Rule”.

That’s exactly the kind of interaction pattern the UI side of K3D is 
meant to host.

I’ve been working on a draft Spatial UI Architecture Specification 
(SUAS) for K3D, which describes how we treat the interface as a 
navigable 3D “house” rather than a set of 2D forms.

The spec lives primarily here:

https://github.com/danielcamposramos/Knowledge3D/blob/main/docs/vocabulary/SPATIAL_UI_ARCHITECTURE_SPECIFICATION.md

and also here:

https://www.w3.org/community/aikr/wiki/K3D

From a UI/UX perspective, the relevant pieces are:

Houses and Rooms:

Content and concepts live as objects in rooms (Library, Workshop, 
Knowledge Gardens, etc.), so Actions and Rules are tangible items you 
can select, inspect, and relate—not just URIs in a form.
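
A minimal TypeScript sketch of that idea, purely illustrative (none of 
these names come from the spec):

    // Illustrative sketch only: type and field names are assumptions,
    // not taken from the SUAS spec.
    type RoomKind = "Library" | "Workshop" | "KnowledgeGarden";

    interface KnowledgeObject {
      id: string;                         // stable identifier, e.g. a URI
      label: string;
      position: [number, number, number]; // placement within the room
    }

    interface Room {
      kind: RoomKind;
      objects: KnowledgeObject[];
    }

    // Selecting an Action or Rule is a lookup in spatial context,
    // not a bare URI typed into a form field.
    function findObject(room: Room, id: string): KnowledgeObject | undefined {
      return room.objects.find(o => o.id === id);
    }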

Memory Tablet:

A persistent tablet‑style interface the user always has in hand. It 
presents lists, search, forms, and relation builders in a conventional 
way, but anchored in spatial context (you’re “in” the house as you work).
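
Roughly, as a sketch (the panel and context names are my shorthand, not 
the spec's):

    // The Tablet hosts conventional widgets, but every panel receives
    // the user's spatial context. Names are illustrative assumptions.
    interface TabletContext {
      currentRoom: string;  // where the user is standing in the house
      selection: string[];  // ids of objects currently selected
    }

    interface TabletPanel {
      title: string;                    // "Search", "Relation Builder", ...
      render(ctx: TabletContext): void; // draws the panel for this context
    }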

Reified Statements as Objects:

The pattern you sketched—“form relation between things”, pick subject, 
pick object, then select a relation from configured ontologies—is 
modeled as creating a new object in the Workshop or Gardens. That object 
is the reified statement (“R(subj,obj)”) and can itself be the “thing 
your article is about”, indexed and searchable.
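
A minimal sketch of that flow (the URN scheme and field names below are 
assumptions for illustration only):

    // Pick subject, pick object, choose a relation from a configured
    // ontology, and materialize the result as a new first-class object.
    interface ReifiedStatement {
      id: string;       // the statement itself is addressable and indexable
      relation: string; // e.g. "adheresTo", from a configured ontology
      subject: string;  // e.g. the Action's URI
      object: string;   // e.g. the Rule's URI
      room: "Workshop" | "KnowledgeGarden"; // where the new object lands
    }

    function reify(relation: string, subject: string, object: string): ReifiedStatement {
      return {
        id: `urn:k3d:stmt:${crypto.randomUUID()}`, // hypothetical id scheme
        relation,
        subject,
        object,
        room: "Workshop",
      };
    }

    // An article can now be "about" the statement itself:
    const stmt = reify("adheresTo", "urn:example:action1", "urn:example:rule7");
    // e.g. article.about = stmt.id, indexed and searchable like any object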

AI‑assisted metadata:

As you suggest, the Tablet is meant to host AI‑assisted helpers: while 
the user edits or publishes, the system proposes candidate “R(subj,obj)” 
statements and other metadata based on what’s in view and what’s active 
in memory.
The user then confirms/refines, instead of manually reconstructing 
complex metadata from scratch.
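
A hedged sketch of that confirm/refine loop (the candidate source is a 
stand-in; the spec doesn't fix a model API):

    // Candidates proposed by the assistant from what's in view and
    // what's active in memory; the user keeps only what they confirm.
    interface CandidateStatement {
      relation: string;
      subject: string;
      object: string;
      confidence: number; // model's score, used to rank in the Tablet
    }

    // Stand-in for the actual suggestion service (an assumption).
    function proposeCandidates(visibleIds: string[]): CandidateStatement[] {
      return []; // a real implementation would consult a model here
    }

    function reviewCandidates(
      visibleIds: string[],
      confirm: (c: CandidateStatement) => boolean,
    ): CandidateStatement[] {
      return proposeCandidates(visibleIds)
        .sort((a, b) => b.confidence - a.confidence) // best guesses first
        .filter(confirm);                            // user confirms/refines
    }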

On the implementation side, the spec deliberately borrows from game 
engines for both performance and familiarity:

We use a house/room/door structure much like levels and zones in games:
rooms map to contexts (Library, Workshop, Bathtub, Living Room, 
Gardens), and doors act as scene boundaries/loading points.
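
In level-streaming terms, a door can be the point where the next room's 
contents load. A rough sketch (room ids and the loader signature are 
assumptions):

    // Load the target room's assets at the boundary, then mark the old
    // room as unloadable, keeping memory bounded as in level streaming.
    interface Door {
      from: string; // room the user is leaving
      to: string;   // room behind the door
    }

    const loadedRooms = new Set<string>();

    async function passThrough(
      door: Door,
      loadRoom: (roomId: string) => Promise<void>, // engine-specific loader
    ): Promise<void> {
      if (!loadedRooms.has(door.to)) {
        await loadRoom(door.to); // scene boundary doubles as loading point
        loadedRooms.add(door.to);
      }
      loadedRooms.delete(door.from); // old room eligible for unload
    }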

Techniques like LOD (Level of Detail) and frustum/occlusion culling are 
used to keep the UI responsive even when a lot of knowledge objects are 
present—similar to how open‑world games manage huge environments.
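
For example, assuming a Three.js-style renderer (my assumption; the spec 
doesn't mandate an engine), each knowledge object can carry coarser 
meshes for distant viewing, with off-screen objects frustum-culled by 
default:

    import * as THREE from 'three';

    // Detailed geometry up close, cheaper geometry further away.
    function makeKnowledgeObjectLOD(): THREE.LOD {
      const lod = new THREE.LOD();
      const material = new THREE.MeshStandardMaterial();
      lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 64, 32), material), 0);
      lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 16, 8), material), 25);
      lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 6, 4), material), 100);
      return lod;
    }
    // In the render loop, lod.update(camera) selects the level per frame;
    // Three.js frustum-culls meshes outside the camera's view by default.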

The goal is that the UX feels more like navigating a well‑designed game 
level than configuring a dense admin console: you “walk” to where 
knowledge lives, pick things up, place them, and create relations, 
instead of clicking through deep menu trees.

This also gives us a natural path to VR/AR:
the same SUAS model can be rendered as flat 3D on a monitor today or as 
immersive spaces via WebXR later, with the Memory Tablet acting as the 
consistent HUD/control surface.
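
A rough sketch, again assuming Three.js + WebXR rather than anything the 
spec commits to; the same scene graph serves both modes:

    import * as THREE from 'three';
    import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    renderer.xr.enabled = true;                     // same scene, XR optional
    document.body.appendChild(renderer.domElement); // flat 3D on a monitor
    document.body.appendChild(VRButton.createButton(renderer)); // opt-in VR

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(
      70, window.innerWidth / window.innerHeight, 0.1, 1000,
    );
    // setAnimationLoop is the XR-safe alternative to requestAnimationFrame.
    renderer.setAnimationLoop(() => renderer.render(scene, camera));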

So SUAS is trying to give a concrete UI language for exactly the kind of 
pattern you described:

The selection flows (choose Action, choose Rule, choose relation) live 
in the Tablet's GUI; the results become first-class spatial objects 
(reified statements that articles can be "about" and that search/Q&A can 
target directly); and AI can sit "next to" the user in that same house, 
proposing and refining metadata rather than asking the author to 
construct every statement by hand.

If you have time, I’d really value your feedback on SUAS from a UX and 
Semantic Web perspective—especially on:

The “reified statement as object” pattern, and how best to expose these 
relation‑building flows in authoring tools so that we get the benefits 
you describe (better indexing, Q&A, environmental impacts, etc.) without 
overwhelming end‑users.

Best regards,
Daniel
