- From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
- Date: Mon, 13 Apr 2026 14:49:33 +0000
- To: Melvin Carvalho <melvincarvalho@gmail.com>
- CC: Manu Sporny <msporny@digitalbazaar.com>, Juan Casanova <juan.casanova.undeceiver@gmail.com>, Credentials Community Group <public-credentials@w3.org>
- Message-ID: <IA3PR13MB7541A4740210807753608526C3242@IA3PR13MB7541.namprd13.prod.outlook.com>
For kicks, in a new conversation, try uploading your diagram using the following prompt: "Using PPML techniques described here https://github.com/mwherman2000/SVRN7/blob/main/specs/draft-herman-parchment-programming-00.md, generate a working code base, draft a Whitepaper, README.md, etc." Then try it a second time with a formal legend similar to the one below on your diagram (this is the secret sauce):

[Image]

Cheers,

Michael Herman
Chief Digital Officer
Web 7.0 Foundation

Get Outlook for Android<https://aka.ms/AAb9ysg>

________________________________
From: Melvin Carvalho <melvincarvalho@gmail.com>
Sent: Monday, April 13, 2026 8:38:50 AM
To: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
Cc: Manu Sporny <msporny@digitalbazaar.com>; Juan Casanova <juan.casanova.undeceiver@gmail.com>; Credentials Community Group <public-credentials@w3.org>
Subject: Re: LLMs and Agents usage in the CCG

On Mon, 13 Apr 2026 at 16:33, Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net> wrote:

Here's the grand experiment: the formal Parchment Programming Modeling Language (PPML) diagram below (aka parchment), over a large number of revisions and iterations, was used to produce this repository: https://github.com/mwherman2000/SVRN7

[Image]

The repository (code, extensive test cases, IETF draft specifications, design docs, Whitepaper, README.md, LICENSE.md, etc.) was created from dozens of versions of the above parchment and hundreds of iterations over approximately 7-10 *days*. Yesterday was the first day I had a complete solution to upload to GitHub. Today will be the first day I (manually) compile the code (which symbolically amounts to compiling an Intermediate Representation (IR)). In effect, today I'm undertaking the last step in a process of compiling the above PPML parchment (almost) directly into extensively documented, spec-compliant, executable code. The implications are incalculable.

Nice job.
Another example. I have created a fully working Solid server with DID compatibility, single sign-on, and several other protocols, in a few months. It would have been many man-years (or man-decades) of work, but I was able to get it working in a few months. In the right hands, LLMs can accelerate standards work 100x.

https://github.com/JavaScriptSolidServer/JavaScriptSolidServer

[image.png]

"More news at 11...",

Michael Herman
Chief Digital Officer
Web 7.0 Foundation

________________________________
From: Manu Sporny <msporny@digitalbazaar.com>
Sent: Monday, April 13, 2026 7:54:38 AM
To: Juan Casanova <juan.casanova.undeceiver@gmail.com>
Cc: Credentials Community Group <public-credentials@w3.org>
Subject: Re: LLMs and Agents usage in the CCG

On Mon, Apr 13, 2026 at 9:18 AM Juan Casanova <juan.casanova.undeceiver@gmail.com> wrote:
> I have been reading this list for a couple of months, but this is the first time I participate.

Welcome to the list, Juan. :) We're very happy to have you and your input on the important work being done here.

> Thanks for your time. I find it funny because my tendency to long messages and being a first time participant might make others feel like I am the LLM here, arguing for disclosure of LLM use :P . No LLMs were used in making this email, for what it's worth.

FWIW, your response felt strongly human to me... and I found myself nodding along in agreement with much of it.

Amir wrote:
> On the mailing list, keeping it human-only makes sense to avoid noise and maintain meaningful discussion. I would lean toward Option 3, with one addition: light disclosure of LLM assistance (for example, noting if it was used for structuring or language), so the group can apply appropriate scrutiny.
> LLMs and agents are tools we’ve built over time, and it’s important we continue to use them in line with their intended role—as assistants, not decision-makers—while keeping accountability firmly human.

What Amir wrote above resonates strongly with me.

As I'm sure is true for a number of you, my LLM use has been climbing steadily over the last several years. It was cute at first, but there are some things I'm doing with the various models today (mostly code/theory analysis and refactoring) that I would be unable to do on my own (at the scale at which I am doing it). There are some work products where I feel like the output is mostly mine (ideas, architecture, theory, etc.)... but there was very heavy LLM usage (perhaps more than I'm giving the LLM credit for).

The lines are blurred for me... email is one place where I definitely do not use LLMs... but specs and code are certainly becoming increasingly blurred, and graphics/illustrations are an almost complete outsourcing to LLMs. I don't disclose that I use compilers, linters, macros, spell checkers, scripts, and code coverage tools... and I'm pretty sure we're going to eventually stop disclosing usage of LLMs as they increasingly exceed human capabilities.

I think we're all concerned about others that outsource their thinking to LLMs, but perhaps that could be better than what we have today, which is some people outsourcing their thinking to institutions that don't have their best interests at heart? Perhaps this is the dawn of personalized thinking? And yes, it is easy for that to slide into personalized coercion.

... but for now, I think I'm ok w/ mentioning when I use them when I think others need to be aware (mostly because I'm concerned about accidental slop, even though I've reviewed it multiple times, and I need other humans to help me keep what's produced in check).
I also don't want others to feel what I feel when I realize I'm reading LLM-generated content without being warned -- it feels like a lie by omission; a minor betrayal -- and I have to reset the context in which I'm reading the work, usually going all the way back to the beginning and reading it knowing that it's LLM-generated (which feels like it takes much more effort to catch the subtle errors that make the entire argument/architecture/theory fall apart).

From a concrete standpoint, this means disclosing medium-to-heavy LLM usage in pull requests for specs and code. I'm not sure if I care if LLM usage is disclosed when clearing up spelling, grammar, and flow, as long as the original content/concepts were written by a human.

Just thinking out loud, not suggesting any particular direction -- mostly trying to get feedback from others so the Chairs can establish a coherent policy for the community.

-- manu

--
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc. https://www.digitalbazaar.com/
Attachments
- image/png attachment: 1000024686.png
- image/png attachment: image.png
- image/png attachment: 1000024748.png
Received on Monday, 13 April 2026 14:49:46 UTC