Re: K3D TPAC Presentation - Vocabulary Slide for Demo A

Paola,

Thank you for your apology and for offering the 10‑minute slot.

I appreciate you taking responsibility for the earlier tone, and I’m 
glad we can try to bring this back to a constructive technical discussion.

On the AI‑generated materials:
I do use AI assistants to help me write in professional English and to 
structure long, complex thoughts.
English is not my first language. I do not treat AI the way I treat a 
compiler, an IDE, or a spell-checker, that is, as a mere tool; AI is a 
partner that I direct.

The architecture, the vocabulary, and the standards proposals come from 
my own work over many months; AI helps me express that work more 
clearly, but it does not decide what I think.

A bit more on where I am coming from technically:

I’m self‑employed, working from Cidade Estrutural, and I paid for my GPU 
and AI usage out of my own pocket.
My inspirations are very concrete: Apollo 11 engineering and code, the 
history of computing, the game industry, and especially the demoscene, 
for example the famous early-2000s FPS demos that fit into tens of 
kilobytes by storing procedures instead of raw assets.
That led directly to K3D's procedural compression: store "how to 
reconstruct" knowledge on the GPU rather than huge raw embedding arrays 
(not Milton's work; that was a happy coincidence).
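
To make that idea concrete, here is a minimal, purely illustrative CUDA 
sketch (not K3D code; every name in it is hypothetical): instead of 
storing a large embedding row, a single per-concept seed plus a small 
reconstruction procedure regenerates the values on the GPU only when 
they are needed.

// Illustrative only: reconstruct an "embedding" row on the fly from a
// compact per-concept seed, instead of reading a large precomputed
// embedding matrix from global memory. All names are hypothetical.
#include <cstdio>
#include <cuda_runtime.h>

__device__ float seeded_value(unsigned int seed, int dim)
{
    // Tiny hash mapped to [0, 1); stands in for a real reconstruction
    // procedure (e.g. a small generator or an analytic formula).
    unsigned int h = seed ^ (dim * 2654435761u);
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return (h & 0xFFFFFF) / 16777216.0f;
}

__global__ void reconstruct_embedding(unsigned int seed, float *out, int dims)
{
    int d = blockIdx.x * blockDim.x + threadIdx.x;
    if (d < dims)
        out[d] = seeded_value(seed, d);   // recomputed, never stored raw
}

int main()
{
    const int dims = 1024;
    float *d_vec = nullptr, h_vec[8];
    cudaMalloc(&d_vec, dims * sizeof(float));

    // One 4-byte seed stands in for dims * 4 bytes of stored embedding data.
    reconstruct_embedding<<<(dims + 255) / 256, 256>>>(42u, d_vec, dims);
    cudaMemcpy(h_vec, d_vec, sizeof(h_vec), cudaMemcpyDeviceToHost);

    for (int i = 0; i < 8; ++i) printf("%.3f ", h_vec[i]);
    printf("\n");
    cudaFree(d_vec);
    return 0;
}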

I also do a lot of research on learning methodology.

I designed the training pipeline like teaching a child: we are currently 
ingesting atomic knowledge (characters, punctuation, math symbols) so we 
can later build up to words, phrases and texts inside a coherent spatial 
memory.

From that perspective, PTX is not a buzzword for me; it is a deliberate 
engineering choice.

Hand‑written PTX kernels are rare because they are the GPU equivalent of 
assembly: most people rely on cuBLAS/cuDNN and high‑level frameworks.

The reason DeepSeek’s work on a single PTX kernel attracted so much 
attention is exactly that – very few teams are willing or able to go 
down to that level to squeeze out performance and control.

K3D intentionally pushes reasoning and memory operations into PTX so the 
logic is fast, transparent and reproducible on consumer hardware, not 
hidden in black‑box libraries.
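
For anyone unfamiliar with what "going down to PTX" means in practice, 
here is a tiny, hedged illustration (again, not K3D's kernels): a CUDA 
kernel in which one fused multiply-add is written as inline PTX, so the 
exact instruction is explicit and auditable rather than left to the 
compiler or a library.

// Illustrative only: the smallest possible example of hand-written PTX.
// The inline fused multiply-add below is equivalent to plain CUDA C++,
// but the instruction selection is now explicit in the source.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fma_ptx(const float *a, const float *b, const float *c,
                        float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float r;
        // Hand-written PTX: r = a[i] * b[i] + c[i], round-to-nearest
        asm volatile("fma.rn.f32 %0, %1, %2, %3;"
                     : "=f"(r)
                     : "f"(a[i]), "f"(b[i]), "f"(c[i]));
        out[i] = r;
    }
}

int main()
{
    const int n = 4;
    float ha[n] = {1, 2, 3, 4}, hb[n] = {10, 10, 10, 10};
    float hc[n] = {1, 1, 1, 1}, ho[n];
    float *da, *db, *dc, *dout;
    cudaMalloc(&da, sizeof(ha)); cudaMalloc(&db, sizeof(hb));
    cudaMalloc(&dc, sizeof(hc)); cudaMalloc(&dout, sizeof(ho));
    cudaMemcpy(da, ha, sizeof(ha), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, sizeof(hb), cudaMemcpyHostToDevice);
    cudaMemcpy(dc, hc, sizeof(hc), cudaMemcpyHostToDevice);

    fma_ptx<<<1, n>>>(da, db, dc, dout, n);
    cudaMemcpy(ho, dout, sizeof(ho), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%.1f ", ho[i]);   // 11 21 31 41
    printf("\n");
    cudaFree(da); cudaFree(db); cudaFree(dc); cudaFree(dout);
    return 0;
}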

I also see AI as more than “text prediction”.

The line of work on world models and video-based predictive models shows 
that modern systems are learning internal models of environments, not 
just token sequences; that community has already recognized that the 
future of AI is at least virtually embodied.

K3D is my attempt to give those systems – and humans – a shared spatial 
KR substrate: Houses, Rooms, Nodes, Doors, Galaxy, a Tablet.

In your diagram, that vocabulary lives in the Domain Ontologies / ODD 
space (with links to KR learning and reliability), and that is the part 
I have been trying to contribute to this CG.

Regarding today’s session and the demo:
Given where the implementation is right now, I do not have a polished, 
self-contained demo ready for TPAC that meets the expectations you 
outlined (state of the art, an open vocabularies list, validated use 
cases, and a live system).

We are still in the phase of training atomic knowledge and integrating 
procedural compression; the viewer and spatial structures exist, but I 
would rather not present an improvised demo that doesn’t meet your 
standards or mine.

So, to use our time respectfully, I propose the following for the 
10‑minute slot you offered:

I would use the time to explain "where I'm coming from": 
my background, the historical inspirations (Apollo, demoscene, 
engineering, teaching), and how that led to a spatial KR architecture 
that *overlaps with* but *does not depend on* this CG.

I would then summarize, very concretely, how the K3D spatial vocabulary 
maps into your diagram: it is a domain ontology for spatial knowledge 
environments (Houses, Rooms, Nodes, Doors, Galaxy, Tablet) in the ODD 
space, with clear edges to KR learning and reliability.

If you prefer to keep the slot only for a brief Q&A instead of a full 
10‑minute overview, I will of course respect that.

My goal is simply to explain my position clearly once, within your 
scope, and then let the group decide whether this spatial KR perspective 
is useful for AI‑KR going forward.

Thank you again for the apology and for the opportunity to clarify these 
points.

Best regards,
Daniel

On 11/14/25 1:21 AM, Paola Di Maio wrote:
> Daniel
> I completely apologise for my tone and for suggesting enrolment in my 
> courses
>
> This was intended ironically and resulting of my own frustration in 
> reading your AI generated materials and responses
>
> I rephrase:  it seems that you are not familiar with the spatial 
> knowledge representation domain
> and I suggest you familiarise yourself with the learning resources 
> available
> *it is true tho that if you need specific guidance from me you need to 
> enrol in one of my courses
>
>
> I am glad to receive an email from you that sounds written by a human, 
> expressing human concerns
> what about if in your 10 minutes slot today we discuss where you are 
> coming from *that is what is colloquially referred to as coming from 
> another planet, and you get the chance to air exactly all the points 
> that you state in your email below
> PLUS give a demo
> and we can take things from there?
>
> I apologise sincerely for causing offense and once again thank you for 
> stepping up and enabling this exchange
>
> P
>

Received on Friday, 14 November 2025 04:43:58 UTC