Re: LLMs and Agents usage in the CCG

On Mon, Apr 13, 2026 at 4:33 PM Michael Herman (Trusted Digital Web) <
mwherman@parallelspace.net> wrote:

> Here's the grand experiment: the formal Parchment Programming Modeling
> Language (PPML) diagram below (aka parchment), over a large number of
> revisions and iterations, was used to produce this repository:
> https://github.com/mwherman2000/SVRN7
>
> [image: PPML parchment diagram]
>
> The repository (code, extensive test cases, IETF draft specifications,
> design docs, whitepaper, README.md, LICENSE.md, etc.) was created from
> dozens of versions of the above parchment and hundreds of iterations
> over approximately 7-10 *days*.
>
> Yesterday was the first day I had a complete solution to upload to
> GitHub. Today will be the first day I (manually) compile the code
> (which symbolically amounts to compiling an Intermediate
> Representation (IR)).
>
> In effect, today I'm undertaking the last step in a process of
> compiling the above PPML parchment (almost) directly into extensively
> documented, spec-compliant, executable code.
>
> The implications are incalculable.
>
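The "compile the parchment" framing above is concrete enough to sketch.
PPML and its tooling are not public, so everything below (types, prompts,
function names) is an assumption illustrating the diagram-to-IR-to-code
loop, not the actual pipeline:

    // Purely hypothetical sketch of the "parchment -> IR -> code" loop
    // described above. Every type and function name here is an assumed
    // placeholder used to illustrate the idea.
    type Parchment = { revision: number; diagramText: string };
    type IR = { modules: string[] };

    // One iteration: an LLM acts as the front end (parchment -> IR) and
    // the back end (IR -> source files); a human reviews and re-prompts.
    async function compileParchment(
      parchment: Parchment,
      llm: (prompt: string) => Promise<string>,
    ): Promise<Map<string, string>> {
      const irJson = await llm(
        `Translate this PPML parchment into IR as JSON {"modules": [...]}:\n` +
          parchment.diagramText,
      );
      const ir: IR = JSON.parse(irJson);
      const files = new Map<string, string>();
      for (const mod of ir.modules) {
        files.set(`${mod}.ts`, await llm(`Generate spec-compliant code for ${mod}`));
      }
      return files;
    }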

Nice job. Another example: I have created a fully working Solid server
with DID compatibility, single sign-on, and several other protocols. It
would have been many man-years (or man-decades) of work by hand, but I
was able to get it working in a few months. In the right hands, LLMs can
accelerate standards work 100x.

https://github.com/JavaScriptSolidServer/JavaScriptSolidServer
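For anyone curious about what the DID/single-sign-on piece involves:
Solid-OIDC login starts by dereferencing the user's WebID profile
document and reading its solid:oidcIssuer triple to find the OpenID
Provider. Below is a minimal sketch of that discovery step (the WebID
URL is a placeholder, and a real server would use a proper RDF parser
rather than a regex):

    // Sketch of Solid-OIDC issuer discovery: fetch the WebID profile
    // (Turtle) and pull out the solid:oidcIssuer value that names the
    // OpenID Provider handling single sign-on for this identity.
    // Illustrative only; production code should parse the RDF properly.
    async function discoverOidcIssuer(webId: string): Promise<string | null> {
      const res = await fetch(webId, { headers: { Accept: "text/turtle" } });
      if (!res.ok) return null;
      const turtle = await res.text();
      // Matches e.g.: <#me> solid:oidcIssuer <https://issuer.example/> .
      const match = turtle.match(/solid:oidcIssuer\s+<([^>]+)>/);
      return match ? match[1] : null;
    }

    // Usage (placeholder WebID):
    discoverOidcIssuer("https://alice.example/profile/card#me")
      .then((issuer) => console.log("OIDC issuer:", issuer));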

>
> "More news at 11...",
> Michael Herman
> Chief Digital Officer
> Web 7.0 Foundation
>
> ------------------------------
> *From:* Manu Sporny <msporny@digitalbazaar.com>
> *Sent:* Monday, April 13, 2026 7:54:38 AM
> *To:* Juan Casanova <juan.casanova.undeceiver@gmail.com>
> *Cc:* Credentials Community Group <public-credentials@w3.org>
> *Subject:* Re: LLMs and Agents usage in the CCG
>
> On Mon, Apr 13, 2026 at 9:18 AM Juan Casanova
> <juan.casanova.undeceiver@gmail.com> wrote:
> > I have been reading this list for a couple of months, but this is the
> first time I have participated.
>
> Welcome to the list, Juan. :) We're very happy to have you and your
> input on the important work being done here.
>
> > Thanks for your time. I find it funny because my tendency toward long
> messages and being a first-time participant might make others feel like I
> am the LLM here, arguing for disclosure of LLM use :P . No LLMs were used
> in making this email, for what it's worth.
>
> FWIW, your response felt strongly human to me... and I found myself
> nodding along in agreement with much of it.
>
> Amir wrote:
> > On the mailing list, keeping it human-only makes sense to avoid noise
> and maintain meaningful discussion. I would lean toward Option 3, with one
> addition: light disclosure of LLM assistance (for example, noting if it was
> used for structuring or language), so the group can apply appropriate
> scrutiny. LLMs and agents are tools we’ve built over time, and it’s
> important we continue to use them in line with their intended role—as
> assistants, not decision-makers—while keeping accountability firmly human.
>
> What Amir wrote above resonates strongly with me. As I'm sure is true
> for a number of you, my LLM use has been climbing steadily over the
> last several years. It was cute at first, but there are some things I'm
> doing with the various models today (mostly code / theory analysis and
> refactoring) that I would be unable to do on my own (at the scale at
> which I am doing it).
>
> There are some work products where I feel like the output is mostly
> mine (ideas, architecture, theory, etc.)... but there was very heavy
> LLM usage (perhaps more than I'm giving the LLM credit for). The lines
> are blurred for me... email is one place where I definitely do not use
> LLMs... but specs and code are certainly becoming increasingly
> blurred, and graphics/illustrations are almost completely outsourced
> to LLMs.
>
> I don't disclose that I use compilers, linters, macros, spell
> checkers, scripts, and code coverage tools... and I'm pretty sure
> we're going to eventually stop disclosing usage of LLMs as they
> increasingly exceed human capabilities. I think we're all concerned
> about others who outsource their thinking to LLMs, but perhaps that
> could be better than what we have today, where some outsource their
> thinking to institutions that don't have their best interests at
> heart? Perhaps this is the dawn of personalized thinking? And yes, it
> is easy for that to slide into personalized coercion.
>
> ... but for now, I think I'm ok w/ mentioning when I use them when I
> think others need to be aware (mostly because I'm concerned about
> accidental slop, even though I've reviewed it multiple times, and I
> need other humans to help me keep what's produced in check). I also
> don't want others to feel what I feel when I realize I'm reading
> LLM-generated content without being warned -- it feels like a lie by
> omission; a minor betrayal -- and I have to reset the context in which
> I'm reading the work, usually going all the way back to the beginning
> and re-reading it knowing that it's LLM-generated (which takes much
> more effort, since I have to catch the subtle errors that can make the
> entire argument/architecture/theory fall apart).
>
> From a concrete standpoint, this means disclosing medium-to-heavy LLM
> usage in pull requests for specs and code. I'm not sure if I care if
> LLM usage is disclosed when clearing up spelling, grammar, and flow as
> long as the original content/concepts were written by a human.
>
> Just thinking out loud, not suggesting any particular direction --
> mostly trying to get feedback from others so the Chairs can
> establish a coherent policy for the community.
>
> -- manu
>
> --
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> https://www.digitalbazaar.com/
>
>

Received on Monday, 13 April 2026 14:39:08 UTC