- From: Manu Sporny <msporny@digitalbazaar.com>
- Date: Mon, 13 Apr 2026 09:54:38 -0400
- To: Juan Casanova <juan.casanova.undeceiver@gmail.com>
- Cc: Credentials Community Group <public-credentials@w3.org>
On Mon, Apr 13, 2026 at 9:18 AM Juan Casanova
<juan.casanova.undeceiver@gmail.com> wrote:
> I have been reading this list for a couple of months, but this is the
> first time I participate.

Welcome to the list, Juan. :) We're very happy to have you and your
input on the important work being done here.

> Thanks for your time. I find it funny because my tendency toward long
> messages, and being a first-time participant, might make others feel
> like I am the LLM here, arguing for disclosure of LLM use :P . No LLMs
> were used in making this email, for what it's worth.

FWIW, your response felt strongly human to me... and I found myself
nodding along in agreement with much of it.

Amir wrote:
> On the mailing list, keeping it human-only makes sense to avoid noise
> and maintain meaningful discussion. I would lean toward Option 3, with
> one addition: light disclosure of LLM assistance (for example, noting
> if it was used for structuring or language), so the group can apply
> appropriate scrutiny. LLMs and agents are tools we've built over time,
> and it's important we continue to use them in line with their intended
> role -- as assistants, not decision-makers -- while keeping
> accountability firmly human.

What Amir wrote above resonates strongly with me.

As I'm sure is true for a number of you, my LLM use has been climbing
steadily over the last several years. It was cute at first, but there
are some things I'm doing with the various models today (mostly code /
theory analysis and refactoring) that I would be unable to do on my own
(at the scale at which I'm doing it). There are some work products where
I feel like the output is mostly mine (ideas, architecture, theory,
etc.)... but there was very heavy LLM usage (perhaps more than I'm
giving the LLM credit for).

The lines are blurred for me... email is one place where I definitely do
not use LLMs... but specs and code are certainly becoming increasingly
blurred, and graphics/illustrations are an almost complete outsourcing
to LLMs. I don't disclose that I use compilers, linters, macros, spell
checkers, scripts, and code coverage tools... and I'm pretty sure we're
going to eventually stop disclosing usage of LLMs as they increasingly
exceed human capabilities.

I think we're all concerned about others who outsource their thinking to
LLMs, but perhaps that could be better than what we have today, which is
some people outsourcing their thinking to institutions that don't have
their best interests at heart? Perhaps this is the dawn of personalized
thinking? And yes, it is easy for that to slide into personalized
coercion.

... but for now, I think I'm ok w/ mentioning when I use them when I
think others need to be aware (mostly because I'm concerned about
accidental slop, even though I've reviewed it multiple times, and I need
other humans to help me keep what's produced in check). I also don't
want others to feel what I feel when I realize I'm reading LLM-generated
content without being warned -- it feels like a lie by omission, a minor
betrayal -- and I have to reset the context in which I'm reading the
work, usually going all the way back to the beginning and re-reading it
knowing that it's LLM-generated (which takes much more effort, to catch
the subtle errors that can make the entire argument/architecture/theory
fall apart).

From a concrete standpoint, this means disclosing medium-to-heavy LLM
usage in pull requests for specs and code.
I'm not sure that I care whether LLM usage is disclosed when it's used
to clean up spelling, grammar, and flow, as long as the original
content/concepts were written by a human.

Just thinking out loud, not suggesting any particular direction -- just
mostly trying to get feedback from others so the Chairs can establish a
coherent policy for the community.

-- manu

--
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc. https://www.digitalbazaar.com/
Received on Monday, 13 April 2026 13:55:19 UTC