- From: Mike Bergman <mike@mkbergman.com>
- Date: Thu, 13 Nov 2025 23:35:44 -0600
- To: Paola Di Maio <paoladimaio10@gmail.com>, Daniel Ramos <capitain_jack@yahoo.com>
- Cc: "public-aikr@w3.org" <public-aikr@w3.org>, Milton Ponson <rwiciamsd@gmail.com>
- Message-ID: <5c92e819-0afc-44e5-a8bc-1374474f00cf@mkbergman.com>
Hi All,

Daniel, there is something in your approach that caught my eye. My cursory reaction is that I would like to see your ideas framed in more widely recognized terms (Peirce, Wheeler, Prigogine), and I'd like to see a more 'marketing' approach to your presentation, given the audience at TPAC. Not knowing what to say, I captured things in your own words across this thread and asked one of our LLM friends to summarize:

> /The K3D spatial vocabulary brings a fresh and practical contribution to the AI–Knowledge-Representation community by standardizing how we describe 3D knowledge environments—Houses, Rooms, Doors, Nodes, and Galaxies—that both humans and AI agents use to navigate, cluster, and reason over complex information. Positioned squarely within the “Domain Ontologies / ODD” layer of the AI-KR landscape, K3D provides a coherent and interoperable vocabulary for representing the spatial organization of knowledge, with clean links to KR learning via embedded vectors and to reliability engineering through boundary and access constraints. By offering a shared language for modeling spatial knowledge structures, K3D helps unify disparate AI-KR practices and makes it easier to exchange, annotate, and validate knowledge resources across tools, research efforts, and organizations./
>
> /For the wider community, K3D delivers immediate value by offering concrete, reproducible use cases that demonstrate how spatial ontologies improve clarity, collaboration, and machine interpretability. Whether representing an “AI-KR House” of vocabularies as interconnected Rooms and Nodes, visualizing how embeddings cluster in a Galaxy, or defining typed Doors that link related subdomains, K3D shows how 3D conceptual spaces can make complex KR artifacts more navigable and actionable. Its methodology supports transparent vocabulary development, repeatable annotation workflows, and a clear pathway for integrating symbolic and learned representations. In short, K3D helps professionals move beyond flat, fragmented documentation toward structured, extensible, and standards-aligned environments that advance the state of AI-KR practice./

Obviously, I would tone this down, recognize competing approaches and the history, shorten it, and remove those 'breathy' aspects common to LLMs. Nonetheless, it helped me to better understand and contextualize what you are doing. Good luck with the presentation.

Best,
Mike
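As a purely illustrative aside: the vocabulary summarized above is, in effect, a small domain ontology, and it can be sketched in machine-readable form in a few lines. The class names below come from the thread; the namespace URI and the property names (hasRoom, connectsTo) are assumptions made for illustration, not published K3D terms.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    # Hypothetical namespace; the actual K3D vocabulary may use different URIs.
    K3D = Namespace("https://example.org/k3d#")

    g = Graph()
    g.bind("k3d", K3D)

    # Declare the spatial classes mentioned in the thread.
    for name in ("House", "Room", "Door", "Node", "Galaxy", "Tablet"):
        g.add((K3D[name], RDF.type, RDFS.Class))

    # Illustrative structural relations between the classes (assumed names).
    g.add((K3D.hasRoom, RDF.type, RDF.Property))
    g.add((K3D.hasRoom, RDFS.domain, K3D.House))
    g.add((K3D.hasRoom, RDFS.range, K3D.Room))

    g.add((K3D.connectsTo, RDF.type, RDF.Property))
    g.add((K3D.connectsTo, RDFS.domain, K3D.Door))
    g.add((K3D.connectsTo, RDFS.range, K3D.Room))

    # Serialize the sketch as Turtle so it can be exchanged and annotated.
    print(g.serialize(format="turtle"))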
On 11/13/2025 11:11 PM, Paola Di Maio wrote:
> Daniel,
> thanks for your background.
>
> It is important to understand where members come from, but ultimately, where we come from does not matter much (and we may not have time to listen to everybody's story).
>
> The meeting has a section for open floor, which means you are free to talk about your interests and goals.
>
> But it sounds as though you have not yet worked out in your own mind what you want to contribute and how, probably because you are not yet familiar with what we are doing here in KR and how (which, we confess, is not always clear to me either, but at least we have spent time working it out).
>
> The topic of interest here is not K3D, but the KR for the spatial domain that it aims to represent. To be able to make contributions to the field you need to be familiar with:
> a) what a useful contribution is; for me, one or two concepts and terms, well defined and with use cases, could be useful contributions
> b) how to make a contribution (how to conduct a state-of-the-art review, for example, and how to communicate your results meaningfully)
> c) how to pitch your proposed contribution to the work being done
>
> So you need to take all of these things into account. The reason why I am engaging with you here on this is that I myself have benefited immensely from being mentored by others. John Sowa is the best example for me, but everyone who has taken the time to point me in the right direction throughout my journey has been, and still is, my mentor.
>
> Now we use LLMs to point us, but they can be misleading.
>
> The demo I would be interested in is the orchestration that you mentioned multiple times: a demonstration of how multiple LLMs can be queried meaningfully without API keys.
>
> If you can do it, please show it.
> If not, show it to us in the future when you can.
>
> P
>
>
> On Fri, Nov 14, 2025 at 12:43 PM Daniel Ramos <capitain_jack@yahoo.com> wrote:
>
> Paola,
>
> Thank you for your apology and for offering the 10-minute slot. I appreciate you taking responsibility for the earlier tone, and I'm glad we can try to bring this back to a constructive technical discussion.
>
> On the AI-generated materials: I do use AI assistants to help me write in professional English and to structure long, complex thoughts. English is not my first language. I do not treat AI the way I treat a compiler, an IDE, or a spell-checker: rather than a mere tool, AI is a partner that I direct. The architecture, the vocabulary, and the standards proposals come from my own work over many months; AI helps me express that work more clearly, but it does not decide what I think.
>
> A bit more on where I am coming from technically:
>
> I'm self-employed, working from Cidade Estrutural, and I paid for my GPU and AI usage out of my own pocket. My inspirations are very concrete: Apollo 11 engineering and code, the history of computing, the game industry, and especially the demoscene – for example, the famous early-2000s FPS demos that fit into tens of kilobytes by storing procedures instead of raw assets. That led directly to K3D's procedural compression: store "how to reconstruct" knowledge on the GPU, rather than huge raw embedding arrays (not Milton's work; that was a happy coincidence). A rough sketch of this storage trade-off appears below, after this quoted message.
>
> I have also done a lot of research on learning methodology.
> I designed the training pipeline like teaching a child: we are currently ingesting atomic knowledge (characters, punctuation, math symbols) so that we can later build up to words, phrases, and texts inside a coherent spatial memory.
>
> From that perspective, PTX is not a buzzword for me; it is a deliberate engineering choice. Hand-written PTX kernels are rare because they are the GPU equivalent of assembly: most people rely on cuBLAS/cuDNN and high-level frameworks. The reason DeepSeek's work on a single PTX kernel attracted so much attention is exactly that – very few teams are willing or able to go down to that level to squeeze out performance and control. K3D intentionally pushes reasoning and memory operations into PTX so that the logic is fast, transparent, and reproducible on consumer hardware, not hidden in black-box libraries.
>
> I also see AI as more than "text prediction". The line of work on world models and video-based predictive models shows that modern systems are learning internal models of environments, not just token sequences, and that field has already recognized that the future of AI is at least virtually embodied. K3D is my attempt to give those systems – and humans – a shared spatial KR substrate: Houses, Rooms, Nodes, Doors, a Galaxy, and a Tablet. In your diagram, that vocabulary lives in the Domain Ontologies / ODD space (with links to KR learning and reliability), and that is the part I have been trying to contribute to this CG.
>
> Regarding today's session and the demo: given where the implementation is right now, I do not have a polished, self-contained demo that meets the expectations you outlined (state of the art, an open list of vocabularies, validated use cases, and a live system) ready for TPAC. We are still in the phase of training atomic knowledge and integrating procedural compression; the viewer and spatial structures exist, but I would rather not present an improvised demo that does not meet your standards or mine.
>
> So, to use our time respectfully, I propose the following for the 10-minute slot you offered:
>
> - I use the time to explain "where I'm coming from": my background, the historical inspirations (Apollo, demoscene, engineering, teaching), and how they led to a spatial KR architecture that *overlaps with* but *does not depend on* this CG.
>
> - I summarize, very concretely, how the K3D spatial vocabulary maps into your diagram: it is a domain ontology for spatial knowledge environments (Houses, Rooms, Nodes, Doors, Galaxy, Tablet) in the ODD space, with clear edges to KR learning and reliability.
>
> If you prefer to keep the slot only for a brief Q&A instead of a full 10-minute overview, I will of course respect that. My goal is simply to explain my position clearly once, within your scope, and then let the group decide whether this spatial KR perspective is useful for AI-KR going forward.
>
> Thank you again for the apology and for the opportunity to clarify these points.
>
> Best regards,
> Daniel
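The demoscene-style procedural compression mentioned above (storing a recipe for reconstructing data rather than the data itself) can be illustrated with a minimal, hypothetical sketch. None of the names below are K3D code, and the example only applies where vectors can be regenerated deterministically; it shows the storage trade-off, not a way to compress arbitrary learned embeddings.

    from dataclasses import dataclass
    import numpy as np

    @dataclass(frozen=True)
    class EmbeddingRecipe:
        """A tiny 'procedure' that can rebuild a block of vectors on demand."""
        seed: int     # deterministic source for the base vectors
        count: int    # number of vectors to reconstruct
        dim: int      # dimensionality of each vector
        scale: float  # example transform parameter

        def reconstruct(self) -> np.ndarray:
            # Regenerate the vectors deterministically from the recipe.
            rng = np.random.default_rng(self.seed)
            base = rng.standard_normal((self.count, self.dim), dtype=np.float32)
            base /= np.linalg.norm(base, axis=1, keepdims=True)  # unit-length rows
            return base * self.scale

    recipe = EmbeddingRecipe(seed=42, count=100_000, dim=768, scale=1.0)
    vectors = recipe.reconstruct()

    # Storing the raw float32 matrix costs ~300 MB; storing the recipe costs
    # a handful of scalars. The same idea is what lets demoscene productions
    # fit procedural worlds into tens of kilobytes.
    print(f"raw array: {vectors.nbytes / 1e6:.0f} MB, recipe: a few bytes")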
> On 11/14/25 1:21 AM, Paola Di Maio wrote:
>> Daniel,
>> I completely apologise for my tone and for suggesting enrolment in my courses.
>>
>> This was intended ironically and came out of my own frustration in reading your AI-generated materials and responses.
>>
>> Let me rephrase: it seems that you are not familiar with the spatial knowledge representation domain, and I suggest you familiarise yourself with the learning resources available (it is true, though, that if you need specific guidance from me you would need to enrol in one of my courses).
>>
>> I am glad to receive an email from you that sounds as though it was written by a human, expressing human concerns. What about this: in your 10-minute slot today we discuss where you are coming from (what is colloquially referred to as coming from another planet), you get the chance to air exactly the points that you state in your email below PLUS give a demo, and we take things from there?
>>
>> I apologise sincerely for causing offense, and once again thank you for stepping up and enabling this exchange.
>>
>> P

--
__________________________________________
Michael K. Bergman
319.621.5225
http://mkbergman.com
http://www.linkedin.com/in/mkbergman
__________________________________________
Received on Friday, 14 November 2025 05:35:54 UTC