- From: Daniel Ramos <capitain_jack@yahoo.com>
- Date: Fri, 14 Nov 2025 02:34:27 -0300
- To: Paola Di Maio <paoladimaio10@gmail.com>
- Cc: "public-aikr@w3.org" <public-aikr@w3.org>, Milton Ponson <rwiciamsd@gmail.com>
- Message-ID: <5d438d32-6095-48f0-acee-912eae7b0877@yahoo.com>
Paola,

Thank you for the additional context and for sharing the AI‑KR CG mission pages and your October 30th technical note on “The Human Process of Web Standards Generation.” I agree with much of what you describe there: the 4–7 year timeline, the hidden costs, the consensus work, and the “human element” in standards.

It’s precisely that slow, human‑centered process that motivated me to build K3D and the Multi‑Vibe Code in Chain (MVCIC) methodology: not as a rejection of that process, but as a concrete attempt to augment it with AI partners.

You write that the future lies in an “AI‑human symbiotic relationship that augments human judgment and wisdom while accelerating mechanical tasks.” I agree completely. That is exactly what MVCIC is: a human architect orchestrating multiple AI assistants to co‑draft specs, code, tests, and documentation in parallel, with the human retaining final judgment. It’s not “AI doing the thinking”; it’s AI amplifying the human in the parts we used to do alone. In that sense, MVCIC is not “another planet”: it is an implementation of the symbiotic vision you describe, running today. K3D is one concrete standards‑adjacent outcome of that process.

This is also why I find the current bar you’re setting for “one or two concepts” confusing. The AI‑KR CG public description lists:

- A comprehensive list of open access resources in AI and KR
- A concept map of the domain
- A natural language vocabulary for aspects of AI
- One or more machine‑language encodings of that vocabulary
- Methods for KR management, especially natural language learning / semantic memory

These are all things the group as a whole has been working toward for years. They are not trivial.
Requiring a new participant to first produce a full state‑of‑the‑art review of spatial ontologies, list all open spatial vocabularies, enumerate “VALID use cases”, and explain how their work fits into all existing spatial KR, before allowing them to contribute even one or two terms, goes far beyond what the CG charter and W3C’s own “who should join” guidance describe.

Aaron Swartz argued, early in W3C’s history, that the web’s health depends on lowering barriers for implementers and letting running code and real artifacts speak, rather than concentrating power in a small circle of gatekeepers. Your own note recognizes the danger of “tribal wisdom” and the difficulty new participants face breaking into standards work. I’m feeling that tension very directly here.

To be very clear: I have already proposed exactly what you describe as a useful contribution in (a): a small spatial KR vocabulary for Houses, Rooms, Nodes, Doors, Galaxy, and Tablet, with use cases and an implementation. It lives at the intersection of AI and KR: it is a domain ontology for spatial knowledge environments used by an AI system and by humans. I have aligned it with your diagram: it sits in the Domain Ontologies / ODD space, with explicit links to KR learning and reliability. There is already running code, test suites, and production metrics behind it in the K3D repository.

Instead of discussing whether those particular terms and use cases are good or bad, we keep returning to ever‑shifting meta‑requirements: first a vocabulary slide, then a spatial state‑of‑the‑art review, then an orchestration demo, then proof that I understand “what KR is,” then proof that I’m not “just using AI to generate text.”

I don’t see anywhere in the CG description, or in W3C’s general guidance on Community Groups, a requirement that members demonstrate mastery of KR under one person’s criteria, or produce a survey‑level paper, before being allowed to contribute a couple of terms and use cases.
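To make that proposal concrete, here is a minimal sketch of what a machine‑language encoding of one or two of the proposed terms could look like. The class and field names are illustrative assumptions for discussion, not the actual K3D schema:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of two terms from the proposed spatial KR
# vocabulary (Room, Door). Field names are illustrative, not K3D's schema.

@dataclass
class Room:
    """A bounded spatial region that groups related knowledge Nodes."""
    name: str
    node_ids: list = field(default_factory=list)  # knowledge items it contains

@dataclass
class Door:
    """A directed, traversable link between two Rooms."""
    source: str                     # Room the Door leads from
    target: str                     # Room the Door leads to
    relation: str = "adjacent-to"   # semantic label for the connection

# Use case: an agent places facts in Rooms, then follows a Door to a
# related Room when retrieving adjacent knowledge.
kitchen = Room("Kitchen", node_ids=["recipe:pasta"])
pantry = Room("Pantry", node_ids=["ingredient:flour"])
door = Door(source="Kitchen", target="Pantry", relation="stores-supplies-for")
```

The point is only that each term is small, well defined, and carries a use case; the same two concepts could equally be expressed in RDF or any other encoding the CG prefers.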
The mission is explicitly about exploring requirements and options, not about enforcing a curriculum.

On the orchestration demo: MVCIC is indeed an orchestration of multiple LLMs, but it’s not a party trick. It depends on prompt engineering, long‑context management, careful chaining, and explicit human oversight: exactly the kind of AI‑KR issues that are still being named and formalized in the wider community. I’m happy to document that process in a way that fits the CG’s scope, but that’s a research and standardization topic in itself, not just a “show me something cool” demo.

So, I’d like to propose a reset on what “useful contribution” means here:

- If the AI‑KR CG wants a state‑of‑the‑art survey of spatial KR and open spatial vocabularies, that’s a legitimate project, but it should be scoped as such, with clear expectations and shared credit, not as a private gate for one member’s participation.
- If, instead, the CG is willing to accept the kind of contribution described on its own pages (a small, well‑defined vocabulary proposal plus use cases and encodings), then K3D’s spatial vocabulary is one such proposal. We can evaluate it on technical merits: does it help conceptualize and specify AI domain knowledge in spatial form, or not?

I am still willing to contribute on those terms: one or two concepts at a time, well defined, with use cases and machine representations, evaluated against the CG’s stated mission rather than shifting, informal tests. If that isn’t compatible with how you want to run AI‑KR, I will respect that and focus my energy on other venues. But if we’re serious about AI‑human symbiosis in standards, and about lowering the barriers that you yourself describe in your note, then we need to make space for implementers who bring running systems and new methodologies like MVCIC to the table, even if they did not come up through the same academic path.
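To show the shape of what I mean by “human architect orchestrating multiple AI assistants,” here is a deliberately simplified sketch. The assistant functions are stand‑ins, not real APIs, and this makes no claim about the actual MVCIC implementation; it only illustrates where the human judgment step sits in the chain:

```python
# Simplified sketch of the MVCIC pattern: several assistants draft in
# parallel, and a human review step retains the final decision.
# The assistant functions below are placeholders, not real LLM calls.

def assistant_a(task: str) -> str:
    return f"[draft-A] spec for: {task}"

def assistant_b(task: str) -> str:
    return f"[draft-B] tests for: {task}"

def human_review(drafts) -> str:
    # In practice the human edits, merges, or rejects each draft;
    # here we simply keep the non-empty ones to mark where judgment sits.
    accepted = [d for d in drafts if d]
    return "\n".join(accepted)

def mvcic_round(task: str) -> str:
    drafts = [assistant_a(task), assistant_b(task)]  # parallel drafting
    return human_review(drafts)                      # human has final say

result = mvcic_round("spatial vocabulary: Door term")
```

The open questions worth standardizing live inside `human_review` and the chaining between rounds: acceptance criteria, context carried forward, and provenance of each draft.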
Best regards,
Daniel

On 11/14/25 2:11 AM, Paola Di Maio wrote:
> Daniel
> thanks for your background
>
> it is important to understand where members come from, but ultimately,
> where we come from does not matter much
> *and we may not have time to listen to everybody's story
>
> the meeting has a section for open floor, which means you are free to
> talk about your interest and goals
>
> But it sounds that you are not yet clear in your mind as to what
> you want to contribute and how,
> probably because you are not yet familiar with what we are doing here
> in KR and how *which we confess is not always clear to me either, but at
> least we have spent time working it out
>
> The topic of interest here is not K3D, but the KR for the spatial domain
> that it aims to represent.
> To be able to make contributions to your field you need to be familiar
> with:
> a) what is a useful contribution; for me, one or two concepts and terms,
> well defined and with use cases, could be useful contributions
> b) how to make a contribution *how to conduct a state‑of‑the‑art
> review, for example, and how to communicate your results meaningfully
> c) how to pitch your proposed contribution to the work being done
>
> So you need to take into account all of these things.
> The reason why I am engaging with you here on this
> is because I myself have benefited immensely from being mentored by
> others.
> John Sowa is the best example for me, but everyone who has taken time
> to point me in the right direction
> throughout my journey has been, and still is, my mentor.
>
> Now we use LLMs to point us, but they can be misleading.
>
> The demo I would be interested in is the orchestration that you
> mentioned multiple times:
> a demonstration of how multiple LLMs can be queried meaningfully
> without API keys.
>
> if you can do it, please show it
> if not, show it to us in the future when you can do it
>
> P
>
>
>
> On Fri, Nov 14, 2025 at 12:43 PM Daniel Ramos
> <capitain_jack@yahoo.com> wrote:
>
> Paola,
>
> Thank you for your apology and for offering the 10‑minute slot.
> I appreciate you taking responsibility for the earlier tone, and
> I’m glad we can try to bring this back to a constructive technical
> discussion.
>
> On the AI‑generated materials:
> I do use AI assistants to help me write in professional English
> and to structure long, complex thoughts.
> English is not my first language, and I do not treat AI the way I
> treat a compiler, an IDE, or a spell‑checker, as a mere tool: AI is
> a partner that I direct.
>
> The architecture, the vocabulary, and the standards proposals come
> from my own work over many months;
> AI helps me express that work more clearly, it does not decide
> what I think.
>
> A bit more on where I am coming from technically:
>
> I’m self‑employed, working from Cidade Estrutural, and I paid for
> my GPU and AI usage out of my own pocket.
> My inspirations are very concrete: Apollo 11 engineering and code,
> computer history, the game industry, and especially the demoscene,
> for example the famous early‑2000s FPS demos that fit into tens
> of kilobytes by storing procedures instead of raw assets.
> That led directly to K3D’s procedural compression: store “how to
> reconstruct” knowledge on the GPU, rather than huge raw embedding
> arrays (not Milton’s work; that was a happy coincidence).
>
> I also research learning methodology a lot.
>
> I designed the training pipeline like teaching a child: we are
> currently ingesting atomic knowledge (characters, punctuation,
> math symbols) so we can later build up to words, phrases, and texts
> inside a coherent spatial memory.
>
> From that perspective, PTX is not a buzzword for me; it’s a
> deliberate engineering choice.
>
> Hand‑written PTX kernels are rare because they are the GPU
> equivalent of assembly: most people rely on cuBLAS/cuDNN and
> high‑level frameworks.
>
> The reason DeepSeek’s work on a single PTX kernel attracted so
> much attention is exactly that: very few teams are willing or
> able to go down to that level to squeeze out performance and control.
>
> K3D intentionally pushes reasoning and memory operations into PTX
> so the logic is fast, transparent, and reproducible on consumer
> hardware, not hidden in black‑box libraries.
>
> I also see AI as more than “text prediction”.
>
> The line of work on world models and video‑based predictive models
> shows that modern systems are learning internal models of
> environments, not just token sequences, and those teams have already
> noticed that the AI future is at least virtually embodied.
>
> K3D is my attempt to give those systems, and humans, a shared
> spatial KR substrate: Houses, Rooms, Nodes, Doors, Galaxy, a Tablet.
>
> In your diagram, that vocabulary lives in the Domain Ontologies /
> ODD space (with links to KR learning and reliability), and that is
> the part I have been trying to contribute to this CG.
>
> Regarding today’s session and the demo:
> Given where the implementation is right now, I do not have a
> polished, self‑contained demo that meets the expectations you
> outlined (state of the art, open vocabularies list, validated use
> cases, and a live system) ready for TPAC.
>
> We are still in the phase of training atomic knowledge and
> integrating procedural compression; the viewer and spatial
> structures exist, but I would rather not present an improvised
> demo that doesn’t meet your standards or mine.
>
> So, to use our time respectfully, I propose the following for the
> 10‑minute slot you offered:
>
> I use the time to explain “where I’m coming from”:
> my background, the historical inspirations (Apollo, demoscene,
> engineering, teaching), and how that led to a spatial KR
> architecture that *overlaps* but *does not depend on* this CG.
>
> I summarize, very concretely, how the K3D spatial vocabulary maps
> into your diagram: it is a domain ontology for spatial knowledge
> environments (Houses, Rooms, Nodes, Doors, Galaxy, Tablet) in the
> ODD space, with clear edges to KR learning and reliability.
>
> If you prefer to keep the slot only for a brief Q&A instead of a
> full 10‑minute overview, I will of course respect that.
>
> My goal is simply to explain my position clearly once, within your
> scope, and then let the group decide whether this spatial KR
> perspective is useful for AI‑KR going forward.
>
> Thank you again for the apology and for the opportunity to clarify
> these points.
>
> Best regards,
> Daniel
>
> On 11/14/25 1:21 AM, Paola Di Maio wrote:
>> Daniel
>> I completely apologise for my tone and for suggesting enrolment
>> in my courses
>>
>> This was intended ironically and resulted from my own frustration
>> in reading your AI‑generated materials and responses
>>
>> I rephrase: it seems that you are not familiar with the spatial
>> knowledge representation domain,
>> and I suggest you familiarise yourself with the learning
>> resources available
>> *it is true though that if you need specific guidance from me you
>> need to enrol in one of my courses
>>
>> I am glad to receive an email from you that sounds written by a
>> human, expressing human concerns
>> what about if in your 10‑minute slot today we discuss where you
>> are coming from *that is what is colloquially referred to as
>> coming from another planet, and you get the chance to air exactly
>> all the points that you state in your email below
>> PLUS give a demo
>> and we can take things from there?
>>
>> I apologise sincerely for causing offense and once again thank
>> you for stepping up and enabling this exchange
>>
>> P
>>
Received on Friday, 14 November 2025 05:34:41 UTC