Re: K3D TPAC Presentation - Vocabulary Slide for Demo A

Daniel, regarding the slide:

the ODD domain includes possible domain ontologies;
'we have them' means that they exist in the public space.
I am not modeling spatial ontologies here,
but if this is your field of interest, I am asking that you begin by
presenting the state of the art.
The slot allocation for your talk has changed because what you have been
talking about so far
is not very clear in terms of KR.

It seems that each time you offer a pointer or explanation, our
planets drift further apart.

It would be great if you could make your point in 3 minutes, so that we can
frame
further discussions about your contribution accordingly.

we can always get back to it later

PDM


On Fri, Nov 14, 2025 at 2:00 PM Daniel Ramos <capitain_jack@yahoo.com>
wrote:

> Paola,
>
> Thank you for the additional clarification. I understand that, from your
> “planet,” familiarity with the space, mechanisms and literature of KR is a
> prerequisite for contributing, and that you are trying to enforce that
> standard consistently. I respect the intent.
>
> What I find difficult is the way that standard has been applied in
> practice in this case.
>
> Over the last weeks, the expectations for my 10–15 minute slot have
> shifted multiple times:
>
> First, a vocabulary slide with 10 terms.
> Then, a short spatial KR contribution aligned with your diagram.
> Then, a state‑of‑the‑art review of spatial ontology (publications,
> vocabularies, uses).
> Then, a live demonstration of the “Multi‑Vibe Code In Chain” methodology,
> without API keys.
> Each of these, taken individually, can be reasonable. Taken together, and
> imposed late in the process, they become a moving target that no new
> contributor can realistically hit—especially an independent engineer who is
> also maintaining a running codebase.
>
> On MVCIC in particular: I did not just “say” I have a methodology. I
> documented it in detail in the K3D repository (for example under
> docs/multi_vibe_orchestration/), and I used it to go from “a few PTX
> kernels” to an architecture with 30+ PTX kernels, tests, and production
> metrics. That is exactly the “AI‑human symbiosis” you describe in your
> October technical note, not as a theory but as a working process. A lot of
> your language about AI helping with the “mechanical tasks” of standards
> while humans handle judgment is very close to what I wrote months earlier
> about Multi‑Vibe Code In Chain.
>
> It’s fair to ask me to explain how this differs from existing work, how it
> fits into decades of KR literature, and how it can be made relevant to
> AI‑KR. That is the kind of discussion I expected in a Community Group. What
> feels unfair is to imply the methodology does not exist until I produce a
> live demo on demand, after already writing extensive documentation and
> pointing you to it multiple times.
>
> You say “we are not asking for a spatial domain ontology here because they
> already exist and we have them.” If so, I would genuinely appreciate
> pointers to the specific spatial ontologies and vocabularies the CG has
> adopted. That would let me either align K3D’s terms with them or explain
> clearly where K3D extends or diverges. At the moment, I’m being told “you
> don’t know the space,” but not being given references beyond “decades of
> literature.”
>
> As an engineer with 30+ years in IT and networking, I’m deeply aware that
> simple‑looking things can be hard to explain, and that just because something
> looks simple does not mean it is. The K3D “House / Room / Node / Door /
> Galaxy / Tablet” vocabulary is the visible surface of a framework that
> fuses ideas from quantum/atomic models, systems engineering, computation,
> computer and network architecture, information theory, and spatial KR. I
> have been studying and building across those areas for a long time; I may
> not present it in the same academic language you prefer, but that doesn’t
> mean the work is uninformed.
>
> Aaron Swartz wrote, over twenty years ago, about the danger of W3C
> processes becoming gated by “tribal knowledge” and small circles of
> experts, instead of being open to implementers who bring running code and
> new ideas. Your own technical note acknowledges how mentorship, informal
> wisdom and “who knows whom” can either help or block newcomers. In this
> exchange, I feel both sides of that: on one hand, your desire to mentor; on
> the other, a bar that keeps moving just out of reach.
>
> So to be very concrete:
>
> I accept that you expect contributors to be familiar with KR and spatial
> KR literature.
> I have been working to meet that expectation and am still willing to
> improve my framing and references.
> I have already produced a real methodology (MVCIC) and a real spatial KR
> vocabulary (K3D), with code and tests; these are not just “claims” but
> artifacts you can inspect.
> What I need from the CG, if I am to continue contributing, is a stable and
> realistic definition of what counts as a “useful contribution,” consistent
> with the public mission: e.g., one or two well‑defined concepts with use
> cases and encodings, rather than an ever‑expanding prerequisite list.
> If, after all this, you still feel that I am “not from the right planet”
> to contribute to AI‑KR, that is your prerogative as chair. I will continue
> to build K3D and MVCIC as open work, and I’m confident they will find their
> place in the broader AI and KR communities. But I do believe that, in the
> spirit of making standards and KR more accessible—as you yourself argue in
> your note, and as Swartz argued before—it’s worth reflecting on whether the
> entry barriers we’re imposing here are helping or hurting that goal.
>
> Best regards,
> Daniel
> On 11/14/25 2:49 AM, Paola Di Maio wrote:
>
> Daniel
> I am suggesting that in order to contribute you need to be familiar with
> the space, the mechanism, the field
> From the 10 terms in your slide, I understand that you do not have such
> familiarity *not from the planet where I come from
>
> neither with the spatial domain, nor with knowledge representation.
> You say that you do things *like a multi-vibe coding something
> and I think it is fair that you show it today
>
> It is not fair that you say you have a methodology and then are not able to
> show how it works.
>
> So please, whatever you'd like to contribute, make it relevant and fit it
> into the picture.
> No, we are not asking for a spatial domain ontology here *because they
> already exist and we have them
> What I am asking is for you to explain what you are talking about and how it
> fits here,
> taking into account that there are decades of literature, projects,
> technologies already
> So, as a simple example, when I make a new type of pizza
> I need to be able to explain how it is different from all the other
> pizzas that exist.
>
> To start from a state-of-the-art review is simply to remind everyone a)
> what we are talking about and b) that you
> are familiar with the topic you are trying to make a contribution to.
>
> But as I am trying to prepare my own work and do exactly what I am
> suggesting others do *I do eat
> my own dog food - for my overview today, I do not have time to repeat
> myself.
>
> I am sure it will all come to you, as we have all gone through that
> orientation stage.
> I need to finish my work now, urgently
>
> See you online
>
> PDM
>
>
> On Fri, Nov 14, 2025 at 1:35 PM Mike Bergman <mike@mkbergman.com> wrote:
>
>> Hi All,
>>
>> Daniel, there is something in your approach that caught my eye. My
>> cursory reaction is that I would like to see your ideas framed in more
>> widespread, appropriate terms (Peirce, Wheeler, Prigogine), and I'd like to
>> see a more 'marketing' approach to your presentation, given the audience at
>> TPAC. Not knowing what to say, I captured things in your own words across
>> this thread and asked one of our LLM friends to summarize:
>>
>> *The K3D spatial vocabulary brings a fresh and practical contribution to
>> the AI–Knowledge-Representation community by standardizing how we describe
>> 3D knowledge environments—Houses, Rooms, Doors, Nodes, and Galaxies—that
>> both humans and AI agents use to navigate, cluster, and reason over complex
>> information. Positioned squarely within the “Domain Ontologies / ODD” layer
>> of the AI-KR landscape, K3D provides a coherent and interoperable
>> vocabulary for representing spatial organization of knowledge, with clean
>> links to KR learning via embedded vectors and to reliability engineering
>> through boundary and access constraints. By offering a shared language for
>> modeling spatial knowledge structures, K3D helps unify disparate AI-KR
>> practices and makes it easier to exchange, annotate, and validate knowledge
>> resources across tools, research efforts, and organizations.*
>>
>> *For the wider community, K3D delivers immediate value by offering
>> concrete, reproducible use cases that demonstrate how spatial ontologies
>> improve clarity, collaboration, and machine interpretability. Whether
>> representing an “AI-KR House” of vocabularies as interconnected Rooms and
>> Nodes, visualizing how embeddings cluster in a Galaxy, or defining typed
>> Doors that link related subdomains, K3D shows how 3D conceptual spaces can
>> make complex KR artifacts more navigable and actionable. Its methodology
>> supports transparent vocabulary development, repeatable annotation
>> workflows, and a clear pathway for integrating symbolic and learned
>> representations. In short, K3D helps professionals move beyond flat,
>> fragmented documentation toward structured, extensible, and
>> standards-aligned environments that advance the state of AI-KR practice*.
>>
>> Obviously, I would tone this down, recognize competing approaches and the
>> history, shorten it, and remove those 'breathy' aspects common to LLMs.
>> Nonetheless, it helped me to better understand and contextualize what you
>> are doing.
>>
>> Good luck with the presentation.
>>
>> Best, Mike
>> On 11/13/2025 11:11 PM, Paola Di Maio wrote:
>>
>> Daniel
>> thanks for your background
>>
>> it is important to understand where members come from, but ultimately,
>> where we come from does not matter much
>> *and we may not have time to listen to everybody's story,
>>
>> the meeting has a section for open floor, which means you are free to
>> talk about your interest and goals
>>
>> But it sounds as though you are not yet clear in your mind about what you
>> want to contribute and how,
>> probably because you are not yet familiar with what we are doing here with
>> KR and how *which I confess is not always clear to me either, but at least
>> we have spent time working it out
>>
>> The topic of interest here is not K3D itself, but KR for the spatial
>> domain that it aims to represent.
>> To be able to make contributions to your field you need to be familiar
>> with:
>> a) what is a useful contribution, for me one or two concepts and terms
>> well defined and with use cases could be useful contributions
>> b) how to make a contribution *how to conduct a state of the art review
>> for example, and how to communicate your results meaningfully
>> c) how to pitch your proposed contribution to the work being done
>>
>> So you need to take into account all of these things.  The reason why I
>> am engaging with you here on this
>> is because I myself have benefited immensely from being mentored by
>> others
>> John Sowa is the best example for me, but everyone who has taken time to
>> point me in the right direction
>> throughout my journey has been, and still is, my mentor
>>
>> Now we use LLMs to point us, but they can be misleading.
>>
>> The demo I would be interested in is the orchestration that you mentioned
>> multiple times:
>> a demonstration of how multiple LLMs can be queried meaningfully without
>> API keys
>>
>> if you can do it, please show it
>> if not, show it to us in the future when you can do it
>>
>> P
>>
>>
>>
>> On Fri, Nov 14, 2025 at 12:43 PM Daniel Ramos <capitain_jack@yahoo.com>
>> wrote:
>>
>>> Paola,
>>>
>>> Thank you for your apology and for offering the 10‑minute slot.
>>>
>>> I appreciate you taking responsibility for the earlier tone, and I’m
>>> glad we can try to bring this back to a constructive technical discussion.
>>>
>>> On the AI‑generated materials:
>>> I do use AI assistants to help me write in professional English and to
>>> structure long, complex thoughts.
>>> English is not my first language, and I do not treat AI the way I treat
>>> a compiler, an IDE, or a spell-checker: rather than a mere tool, AI is a
>>> partner that I direct.
>>>
>>> The architecture, the vocabulary, and the standards proposals come from
>>> my own work over many months;
>>>
>>> AI helps me express that work more clearly, it does not decide what I
>>> think.
>>>
>>> A bit more on where I am coming from technically:
>>>
>>> I’m self‑employed, working from Cidade Estrutural, and I paid for my GPU
>>> and AI usage out of my own pocket.
>>> My inspirations are very concrete: Apollo 11 engineering and code,
>>> computing history, the game industry, and especially the demoscene – for
>>> example, the famous early‑2000s FPS/demos that fit into tens of kilobytes
>>> by storing procedures instead of raw assets.
>>> That led directly to K3D’s procedural compression: store “how to
>>> reconstruct” knowledge on GPU, rather than huge raw embedding arrays (not
>>> Milton work, that was a happy coincidence).
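>>>
>>> To make the idea concrete, here is a minimal sketch of the principle (a
>>> toy I am writing here for illustration, not the actual K3D kernels; the
>>> function names, dictionary fields, and dimensions are invented): instead
>>> of storing a full embedding array, we keep only a small "recipe" and
>>> regenerate the vector deterministically on demand.
>>>
>>>     import hashlib
>>>     import numpy as np
>>>
>>>     EMBED_DIM = 1024  # hypothetical embedding width
>>>
>>>     def make_recipe(node_id: str, scale: float = 1.0) -> dict:
>>>         """Store *how to reconstruct* a vector: a seed plus a couple of
>>>         parameters, a few bytes instead of EMBED_DIM raw floats."""
>>>         seed = int(hashlib.sha256(node_id.encode()).hexdigest(), 16) % (2**32)
>>>         return {"seed": seed, "scale": scale, "dim": EMBED_DIM}
>>>
>>>     def reconstruct(recipe: dict) -> np.ndarray:
>>>         """Deterministically regenerate the embedding from its recipe."""
>>>         rng = np.random.default_rng(recipe["seed"])
>>>         return recipe["scale"] * rng.standard_normal(recipe["dim"],
>>>                                                      dtype=np.float32)
>>>
>>>     recipe = make_recipe("k3d:Room/physics")
>>>     # Same recipe, same vector: only the recipe needs to be stored.
>>>     assert np.allclose(reconstruct(recipe), reconstruct(recipe))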
>>>
>>> I also research a lot about learning methodology.
>>>
>>> I designed the training pipeline like teaching a child: we are currently
>>> ingesting atomic knowledge (characters, punctuation, math symbols) so we
>>> can later build up to words, phrases and texts inside a coherent spatial
>>> memory.
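>>>
>>> If it helps to picture that, here is a throwaway sketch (stages and names
>>> invented for illustration, not the actual K3D pipeline) of what such a
>>> staged curriculum looks like in code: each stage is ingested fully before
>>> the next one begins.
>>>
>>>     # Hypothetical curriculum, from atomic symbols up to phrases.
>>>     CURRICULUM = [
>>>         {"stage": "atomic", "items": ["a", "b", "7", "+", ";"]},
>>>         {"stage": "words", "items": ["house", "room", "door"]},
>>>         {"stage": "phrases", "items": ["the door opens into the room"]},
>>>     ]
>>>
>>>     def ingest(item: str, stage: str) -> None:
>>>         # Placeholder for writing the item into the spatial memory.
>>>         print(f"[{stage}] ingested {item!r}")
>>>
>>>     for block in CURRICULUM:
>>>         for item in block["items"]:
>>>             ingest(item, block["stage"])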
>>>
>>> From that perspective, PTX is not a buzzword for me, it’s a deliberate
>>> engineering choice.
>>>
>>> Hand‑written PTX kernels are rare because they are the GPU equivalent of
>>> assembly: most people rely on cuBLAS/cuDNN and high‑level frameworks.
>>>
>>> The reason DeepSeek’s work on a single PTX kernel attracted so much
>>> attention is exactly that – very few teams are willing or able to go down
>>> to that level to squeeze out performance and control.
>>>
>>> K3D intentionally pushes reasoning and memory operations into PTX so the
>>> logic is fast, transparent and reproducible on consumer hardware, not
>>> hidden in black‑box libraries.
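>>>
>>> As a flavour of what "pushing an operation into a hand-written kernel"
>>> looks like, here is a toy example I am sketching for this email only (not
>>> one of the K3D kernels, and written as CUDA C compiled at runtime through
>>> CuPy's RawKernel rather than raw PTX, which sits one level lower): a fused
>>> scale-and-add over a vector, launched explicitly with our own grid and
>>> block sizes.
>>>
>>>     import cupy as cp  # requires an NVIDIA GPU with CUDA
>>>
>>>     scale_add = cp.RawKernel(r'''
>>>     extern "C" __global__
>>>     void scale_add(const float* x, const float* y, float* out,
>>>                    float alpha, int n) {
>>>         int i = blockDim.x * blockIdx.x + threadIdx.x;
>>>         if (i < n) {
>>>             out[i] = alpha * x[i] + y[i];
>>>         }
>>>     }
>>>     ''', 'scale_add')
>>>
>>>     n = 1 << 20
>>>     x = cp.random.rand(n, dtype=cp.float32)
>>>     y = cp.random.rand(n, dtype=cp.float32)
>>>     out = cp.empty_like(x)
>>>     threads = 256
>>>     blocks = (n + threads - 1) // threads
>>>     scale_add((blocks,), (threads,),
>>>               (x, y, out, cp.float32(0.5), cp.int32(n)))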
>>>
>>> I also see AI as more than “text prediction”.
>>>
>>> The line of work on world models and video-based predictive models shows
>>> that modern systems are learning internal models of environments, not just
>>> token sequences; researchers have already recognised that the future of AI
>>> is at least virtually embodied.
>>>
>>> K3D is my attempt to give those systems – and humans – a shared spatial
>>> KR substrate: Houses, Rooms, Nodes, Doors, Galaxy, a Tablet.
>>>
>>> In your diagram, that vocabulary lives in the Domain Ontologies / ODD
>>> space (with links to KR learning and reliability), and that is the part I
>>> have been trying to contribute to this CG.
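>>>
>>> As a purely illustrative sketch of how that vocabulary could be written
>>> down (invented field names and identifiers, not the normative K3D schema;
>>> Galaxy and Tablet omitted for brevity), the core terms are just typed
>>> containers plus typed links:
>>>
>>>     from dataclasses import dataclass, field
>>>
>>>     @dataclass
>>>     class Node:
>>>         """An atomic knowledge item, optionally carrying an embedding."""
>>>         id: str
>>>         label: str
>>>         embedding: list[float] | None = None
>>>
>>>     @dataclass
>>>     class Door:
>>>         """A typed link from one Room to another (e.g. 'extends')."""
>>>         relation: str
>>>         target_room: str
>>>
>>>     @dataclass
>>>     class Room:
>>>         """A themed sub-space of a House, holding Nodes and Doors."""
>>>         id: str
>>>         nodes: list[Node] = field(default_factory=list)
>>>         doors: list[Door] = field(default_factory=list)
>>>
>>>     @dataclass
>>>     class House:
>>>         """A top-level knowledge environment made of Rooms."""
>>>         id: str
>>>         rooms: list[Room] = field(default_factory=list)
>>>
>>>     ai_kr = House(id="house:ai-kr", rooms=[
>>>         Room(id="room:spatial-kr",
>>>              nodes=[Node(id="node:odd", label="Domain Ontologies / ODD")],
>>>              doors=[Door(relation="links-to",
>>>                          target_room="room:kr-learning")]),
>>>     ])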
>>>
>>> Regarding today’s session and the demo:
>>> Given where the implementation is right now, I do not have a polished,
>>> self‑contained demo that meets the expectations you outlined (state of the
>>> art, open vocabularies list, validated use cases and a live system) ready
>>> for TPAC.
>>>
>>> We are still in the phase of training atomic knowledge and integrating
>>> procedural compression; the viewer and spatial structures exist, but I
>>> would rather not present an improvised demo that doesn’t meet your
>>> standards or mine.
>>>
>>> So, to use our time respectfully, I propose the following for the
>>> 10‑minute slot you offered:
>>>
>>> I use the time to explain “where I’m coming from”:
>>> my background, the historical inspirations (Apollo, demoscene,
>>> engineering, teaching), and how that led to a spatial KR architecture that
>>> *overlaps* but *does not depend on this CG*.
>>>
>>> I summarize, very concretely, how the K3D spatial vocabulary maps into
>>> your diagram: it is a domain ontology for spatial knowledge environments
>>> (Houses, Rooms, Nodes, Doors, Galaxy, Tablet) in the ODD space, with clear
>>> edges to KR learning and reliability.
>>>
>>> If you prefer to keep the slot only for a brief Q&A instead of a full
>>> 10‑minute overview, I will of course respect that.
>>>
>>> My goal is simply to explain my position clearly once, within your
>>> scope, and then let the group decide whether this spatial KR perspective is
>>> useful for AI‑KR going forward.
>>>
>>> Thank you again for the apology and for the opportunity to clarify these
>>> points.
>>>
>>> Best regards,
>>> Daniel
>>> On 11/14/25 1:21 AM, Paola Di Maio wrote:
>>>
>>> Daniel
>>> I completely apologise for my tone and for suggesting enrolment in my
>>> courses
>>>
>>> This was intended ironically and resulted from my own frustration in
>>> reading your AI-generated materials and responses.
>>>
>>> I rephrase:  it seems that you are not familiar with the spatial
>>> knowledge representation domain
>>> and I suggest you familiarise yourself with the learning resources
>>> available
>>> *it is true though that if you need specific guidance from me you need to
>>> enrol in one of my courses
>>>
>>>
>>> I am glad to receive an email from you that sounds written by a human,
>>> expressing human concerns.
>>> What if, in your 10-minute slot today, we discuss where you are
>>> coming from *that is what is colloquially referred to as coming from
>>> another planet, and you get the chance to air exactly all the points that
>>> you state in your email below
>>> PLUS give a demo
>>> and we can take things from there?
>>>
>>> I apologise sincerely for causing offense and once again thank you for
>>> stepping up and enabling this exchange
>>>
>>> P
>>>
>>> --
>> __________________________________________
>>
>> Michael K. Bergman
>> 319.621.5225
>> http://mkbergman.com
>> http://www.linkedin.com/in/mkbergman
>> __________________________________________
>>
>>

Received on Friday, 14 November 2025 06:12:35 UTC