RE: The Slopification of the CCG

RE: @Michael Herman (Trusted Digital Web)<mailto:mwherman@parallelspace.net> if you get the use case for the sns method, I saved a couple of domains for you a couple of weeks ago (.sol); ping me if you want them. .sol domains are permanent (they never expire).

Thank you, I think I’m the only person who loves domains, copyrights, trademarks, and AI. 😉

With respect to your Mesh project, we’re taking a different route: Web 7.0 DIDNET, supporting Identity DID and Locator DID addresses directly on top of IP. We’ve started with an update to BSD Sockets that enables DIDs as a native address format; the magic really begins when we can create global, virtual, DID-native, DID-exclusive DID/IP networks.

Michael Herman
Chief Digital Officer
Web 7.0 Foundation

From: Eduardo C. <e.chongkan@gmail.com>
Sent: Saturday, April 18, 2026 1:25 AM
To: Amir Hameed <amsaalegal@gmail.com>; Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
Cc: Marcus Engvall <marcus@engvall.email>; W3C Credentials Community Group <public-credentials@w3.org>
Subject: Re: The Slopification of the CCG

Hi Amir,

100% agree with you as well. It’s also worth noting that the people on this list are probably the only humans on earth who understand what we are talking about. In my case, I keep reviewing my ideas, architecture, solutions, etc. with Gemini + Claude, and I included LLM.txt files in the repos so others could also have an easier time using a model to explain or work with them.

Speaking of ideas shaped and built with LLMs: one of the main challenges is putting deterministic gates and checks in place, because the models are not good at that. So: a lot of CI scripts to check that alignment, plus manual supervision, linting, testing, etc.
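To make the "deterministic gates" point concrete, here is a minimal TypeScript sketch of the kind of CI check I mean: a lint pass over a DID document that passes or fails the same way on every run, with no model in the loop. All names here are illustrative, not taken from the actual repos.

```typescript
// A deterministic gate: three fixed rules over a DID document.
// An LLM may draft the document, but merge is decided by this check alone.

interface DidDocument {
  "@context"?: unknown;
  id?: string;
  verificationMethod?: unknown[];
}

// Returns the list of rule violations (empty list = pass).
function lintDidDocument(doc: DidDocument): string[] {
  const errors: string[] = [];
  if (!doc["@context"]) errors.push("missing @context");
  if (!doc.id || !/^did:[a-z0-9]+:.+$/.test(doc.id)) {
    errors.push("id is not a syntactically valid DID");
  }
  if (!Array.isArray(doc.verificationMethod) || doc.verificationMethod.length === 0) {
    errors.push("verificationMethod must be a non-empty array");
  }
  return errors;
}

// A well-formed document passes; a malformed one is rejected every time.
const ok = lintDidDocument({
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:sns:example",
  verificationMethod: [{ id: "#key-1" }],
});
const bad = lintDidDocument({ id: "not-a-did" });
```

In CI this runs as a required step, so a hallucinated or malformed document can never land regardless of how plausible the surrounding prose looks.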

I have really been wanting to get feedback from the community on some ideas that interface with each other and together resolve certain use cases. I am not trying to showcase my project here; I genuinely want some questioning from the community:

  1.  ID-Wallet-Adapter: A discovery and verification layer for credential wallet browser extensions — like EIP-6963, but for W3C identity wallets (UX and interop). ID-Wallet-Adapter GitHub<https://github.com/Attestto-com/id-wallet-adapter>
  2.  A Browser Extension that leverages the id-wallet-adapter protocol — Login (DID-based, no passwords), Signing (digital signatures), and Portable User Preferences (UX settings and other values that sites can read from the user's extension, scoped to the DID they log in with or universal for all). Browser Extension GitHub<https://github.com/Attestto-com/attestto-creds-extension> (currently fixing a bug)
  3.  did:sns: A DID method that maps human-readable aliases to W3C DID Documents. Not a "blockchain identity" — it's a resolution layer where the alias is the identity and the key can rotate underneath. Alias ≠ key, identity ≠ public key, alias ≠ single issuer. W3C DID Extensions PR #674. did:sns Spec GitHub<https://github.com/Attestto-com/did-sns-spec>
  4.  did:pki: A read-only DID method that bridges national PKI hierarchies (BCCR, ICP-Brasil, FNMT, etc.) to the W3C DID ecosystem via deterministic X.509-to-DID derivation. No registration: if the CA cert exists, it already has a DID. The architecture uses did:sns to anchor trust roots on-chain (permanently; .sol domains are owned for life) rather than depending on domain infrastructure, avoiding the fragility of did:web. Live resolver for Costa Rica:
      https://resolver.attestto.com/1.0/identifiers/did:pki:cr:raiz-nacional (W3C DID Extensions PR #697). did:pki Spec GitHub<https://github.com/Attestto-com/did-pki-spec>
  5.  <attestto-verify>: W3C Web Components for PDF signature verification — one `<script>` tag, drop a PDF, see results. No framework, no backend, no upload. Uses did:pki for cross-border trust resolution. attestto-verify GitHub<https://github.com/Attestto-com/attestto-verify> Live Demo<https://verify.attestto.com>
  6.  DID Login: Passwordless authentication via the extension — site requests identity proof, wallet signs challenge, done. Interfaces with existing SSO/OAuth flows so sites don't need to rip out their auth stack. Built on id-wallet-adapter discovery.
  7.  Attestto Mesh: P2P encrypted data layer for sovereign identity — identity credentials survive infrastructure failure (offline-first, gossip protocol, Solana anchoring). Shares resilience goals with Frank Sanborn's Social Fabric (CCG Atlantic, Apr 14) — our focus is specifically the identity state layer. https://github.com/Attestto-com/attestto-mesh
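For items 1 and 2, the announce/request handshake can be sketched roughly as below. The event names and provider shape are hypothetical illustrations, not the actual id-wallet-adapter protocol, and a plain EventTarget stands in for `window` so the sketch runs outside a browser:

```typescript
// EIP-6963-style discovery, adapted to identity wallets: the page broadcasts
// a request event, and every installed wallet answers with an announce event.

interface WalletProviderDetail {
  name: string;
  didMethods: string[]; // e.g. ["did:sns", "did:pki"]
  signChallenge: (challenge: string) => string; // stub signer for the sketch
}

class AnnounceEvent extends Event {
  constructor(public detail: WalletProviderDetail) {
    super("idwallet:announce");
  }
}

const bus = new EventTarget(); // stands in for `window`

// Wallet side: announce itself whenever a request is broadcast.
function registerWallet(detail: WalletProviderDetail): void {
  bus.addEventListener("idwallet:request", () => {
    bus.dispatchEvent(new AnnounceEvent(detail));
  });
}

// App side: collect every wallet that answers the request.
function discoverWallets(): WalletProviderDetail[] {
  const found: WalletProviderDetail[] = [];
  bus.addEventListener("idwallet:announce", (e) => {
    found.push((e as AnnounceEvent).detail);
  });
  bus.dispatchEvent(new Event("idwallet:request"));
  return found; // synchronous here; real extensions answer asynchronously
}

registerWallet({
  name: "demo-wallet",
  didMethods: ["did:sns"],
  signChallenge: (c) => `signed(${c})`, // placeholder for a real signature
});
const wallets = discoverWallets();
```

In a browser, each installed extension would answer the request asynchronously, and the site would then ask the chosen provider to sign a login challenge, which is the DID Login flow in item 6.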

Online Verifier & Signer: https://verify.attestto.com/ (vs. Adobe Reader, which shows these signatures as invalid)
[screenshot]
Desktop Mesh Node (this is also a Citizens App, work in progress)
[screenshots]
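For item 4, the idea behind deterministic X.509-to-DID derivation can be illustrated as follows. This is a hypothetical sketch, not the derivation the did:pki spec actually defines: it simply hashes the CA certificate's DER bytes, so the same certificate always yields the same DID and no registration step is needed.

```typescript
// Hypothetical derivation: DID = did:pki:<country>:<sha256(cert DER)>.
// The real did:pki spec may derive identifiers differently; this only
// demonstrates why a deterministic mapping removes the need for a registry.
import { createHash } from "node:crypto";

function deriveDid(countryCode: string, certDer: Uint8Array): string {
  const fingerprint = createHash("sha256").update(certDer).digest("hex");
  return `did:pki:${countryCode}:${fingerprint}`;
}

// The same certificate bytes always yield the same DID, with no registry write.
const cert = new TextEncoder().encode("fake-der-bytes-for-demo");
const did1 = deriveDid("cr", cert);
const did2 = deriveDid("cr", cert);
```

Because the mapping is a pure function of the certificate, a resolver can answer for any CA certificate it can fetch, which is what makes the method read-only.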


@Michael Herman (Trusted Digital Web)<mailto:mwherman@parallelspace.net> if you get the use case for the sns method, I saved a couple of domains for you a couple of weeks ago (.sol); ping me if you want them. .sol domains are permanent (they never expire).


Regards,
--
Eduardo Chongkan


On Fri, Apr 17, 2026 at 11:48 PM Amir Hameed <amsaalegal@gmail.com<mailto:amsaalegal@gmail.com>> wrote:
Hi Eduardo,


I believe a thread on LLMs and agents in the CCG was already initiated about a week ago, and many of us shared thoughtful perspectives on their use. If we draw a clear distinction between incubation and proposal stages of ideas, there’s no real issue in leveraging these tools as long as intent and end goals remain sound.

Ideas are inherently raw in the beginning. They’re rarely acceptable at first, regardless of whether they’re shaped with or without intelligent tools. What ultimately matters is whether an idea solves the intended problem with a well-defined and provable approach.

There are two dimensions here:
First, how the idea is communicated, which can always be refined or made more persuasive.
Second, whether it actually works in practice. That’s where real building begins. No matter how many LLMs you use, you can only build what you truly understand.

Participating in a meeting doesn’t restrict how you get there: you can walk, drive, or take any route that works. Similarly, using tools to reach better outcomes shouldn’t be discouraged. What matters is that we collectively own our shared goals and move toward them using methods that improve productivity and efficiency.

At this point, resisting tools that have already demonstrated capabilities comparable to passing the Turing Test isn’t particularly constructive. The focus should instead be on responsible and effective use.

Regards
Amir Hameed Mir


On Sat, 18 Apr 2026 at 9:09 AM, Eduardo C. <e.chongkan@gmail.com<mailto:e.chongkan@gmail.com>> wrote:
"I find it difficult to trust a contribution in this group if it has been generated by an LLM"

A- I wonder how everyone can tell if something was written by an LLM? Aside from the now-infamous "--" it uses here and there, how can you tell? (How do you know it is not a Grammarly plugin?)
B- Also wondering whether the embedded Gemini would detect that an email or text was generated by an LLM, and, more importantly, detect slop in that email or content. E.g., I normally use two different LLMs, Gemini + Claude, to run manual adversarial checks on each other's outputs and analysis; they usually find improvements or catches, and I also find deviations and correct the alignment.
C- Most slop happens when one is researching or asking for things that are not in the model itself. E.g., you ask for something uncommon and the models, all of them, keep gravitating towards whatever probability says they should answer. One needs to be aware of that.

BTW, I agree with Michael Herman 100%.
--
Eduardo Chongkan


On Fri, Apr 17, 2026 at 4:43 PM Marcus Engvall <marcus@engvall.email<mailto:marcus@engvall.email>> wrote:
Hi all,

I have been a passive observer of the CCG and have found the discussions in this group to have been remarkably considered, professional, and above all else clear in both intent and direction. I hesitate to comment on the current state of the mailing list as my tenure is minuscule compared to some of my brilliant co-participants, but the quality of recent contributions has compelled me to share some thoughts.

Standards work is fundamentally a rigorous process of deriving a synthesis of human knowledge and judgement through healthy debate and, particularly in this group, decentralised knowledge discovery. It is precisely the provenance of consideration that establishes the trust basis necessary for the voluntary adoption of standards. Without trust, there is no standard. It follows then that preserving the integrity of the standardisation process is existential for any group working on standards.

AI has improved the accessibility of standardisation to a larger and more diverse group of participants which is incredibly valuable for standardisation and should be encouraged. However, it should not come at the cost of compromising the integrity of the process itself, something I fear is happening in this group.

Many recent contributions on this mailing list bear the hallmarks of LLM generation. To be clear, it is my view that there is nothing wrong with using AI agents to assist with research, proofreading, and other similar tasks. I use these tools every day professionally and their value is undeniable. That said, they are not replacements for human judgement, a view I believe is shared by most people in this group.

I find it difficult to trust a contribution in this group if it has been generated by an LLM, and it is becoming increasingly intractable to follow discussions as they seem to inevitably degenerate to chatbots arguing with each other. Inferring the direction of standardisation, which has a direct impact on commercial and technical planning, becomes impossible. I find it quite ironic that the recent thread discussing LLMs and agents in the CCG contains responses that suggest that they themselves have been generated by an AI. If anything, I think it is proof enough of how acute this problem is.

There is also the somewhat primal and adversarial aspect of evaluating human judgement and reaching consensus. A debate is a contest between two humans arguing for their position, which presupposes real agency and, well, humanity. An AI agent is not, and will never be, a real human - and nobody wants to credibly evaluate the arguments of a robot.

I am not sure what the solution is, but I feel that the effects of this are severe and will almost certainly discourage participants from contributing, the downstream consequences of which I think are clear to everyone.

I would like to close out this lengthy email with this: I think a serious discussion should be opened to consider migrating to a discussion channel that is more resistant to AI agents, or at least consensus be formed to institute and enforce a strict code of conduct with zero-tolerance for AI slop. Openness is important, and exclusionary dynamics must be avoided to the extent possible, but the integrity of the standardisation process and the important work done in this group depends on humanity and not artificiality.

Sincerely,

--
Marcus Engvall

Principal—M. Engvall & Co.
mengvall.com<http://mengvall.com>

Received on Saturday, 18 April 2026 10:24:52 UTC