- From: Mahee Kirindigoda <maheekirindigoda@gmail.com>
- Date: Sat, 26 Jul 2025 15:03:13 +0530
- To: Timothy Holborn <timothy.holborn@gmail.com>
- Cc: Human-centric AI <public-humancentricai@w3.org>
- Message-ID: <CADSPx91R=v1aMQro+ZYxef5ywmMyWORbWLdTa9odLHDmEiq-pA@mail.gmail.com>
Hi Tim, Thanks for raising this pertinent question about defining the ontology for 'vibe coding' and the attribution of work, especially when involving multiple AI agents. It's a fascinating area that I've also been exploring. From my experience with 'vibe coding' using various LLMs like Gemini Pro, ChatGPT, Grok, and DeepSeek, I've found it to be incredibly productive, often more so than guiding a human in certain tasks. The efficiency and speed with which these models can generate code snippets or iterate on ideas is truly remarkable. However, a common issue I've encountered arises right after a prompt understanding issue. It seems that once an LLM misinterprets a prompt, it tends to retain a 'memory' of that failed prompt. This can lead to a persistent looping of the initial error, often compounded by hallucination, rather than the model course-correcting. It's not quite an echo chamber effect, but rather a tendency to re-introduce the same mistake or a variation of it, even when subsequent prompts aim to guide it back on track. This makes debugging and refinement a more complex process than it might initially appear, as you're not just dealing with the current prompt but also the lingering influence of past misinterpretations. I believe this observation might be relevant to your query about defining 'best practice' for AI agent involvement and how to delineate 'who did what'. Understanding how these models "remember" and potentially "loop" errors could inform how we structure interactions and define accountability in a multi-agent coding environment. I look forward to hearing other perspectives on this and how it might intersect with existing version control systems like Git. Best regards, Mahee On Sat, 26 Jul 2025, 14:55 Timothy Holborn, <timothy.holborn@gmail.com> wrote: > So, question is, > > How / what ontology should be defined, to better describe 'vibe > coding'... > > I've been experimenting - fwiw ; it's been great! generally... > > my experience has been, that no single model can 'do the job', mostly, > they do parts, and you've got to use different 'models' (llms, etc.) to get > the 'job' done. > > also depends on how complex the 'job' is... but, then you get to tools > like respec, where its important to honourably define - who did what, and > if users are 'vibe coding', then, they're not actually the primary > 'software developer'... rather, they're providing instructions... > > a bit like, defining the text in a microsoft word document; rather than, > defining the code of a microsoft word document.... (or indeed also, the > application?) > > So, > > whilst i look forward to providing an update about WIP (works in > progress); i thought, i'd prompt the list, with the query - what do you > think, how should the definitions of 'who did what' apply to AI agents, > particularly when there's alot of them involved; and, how could this > in-turn be defined as some sort of respec doc, to define the > characteristics of what might be defined as 'best practice'.. > > I suspect it may intersect with git, and other elements; but, I thought > I'd prompt others to provide their view, in case that's going to give > results! > > fwiw; responses on the list are preferred... at least, for me.. gives a > historical view of how things happened overtime. i've lost alot of the > records from skype or other platforms, some most involved in the most > important works, now deceased... anyhow. i also understand the > importance of being able to reach-out, people are welcome to do so... but, > as noted, crafting considerations into something that can be put onto the > list - totally preferred (where appropriate), for me 🙏 > > Tim.h. > > >
Received on Saturday, 26 July 2025 10:06:25 UTC