- From: Alan Karp <alanhkarp@gmail.com>
- Date: Thu, 23 Apr 2026 11:10:10 -0700
- To: "Eduardo C." <e.chongkan@gmail.com>
- Cc: Moses Ma <moses.ma@futurelabconsulting.com>, NIKOLAOS FOTIOY <fotiou@aueb.gr>, Amir Hameed <amsaalegal@gmail.com>, Adrian Gropper <agropper@healthurl.com>, Kyle Den Hartog <kyle@pryvit.tech>, Juan Casanova <j.casanova@hw.ac.uk>, Steve Capell <steve.capell@gmail.com>, Melvin Carvalho <melvincarvalho@gmail.com>, Marcus Engvall <marcus@engvall.email>, Manu Sporny <msporny@digitalbazaar.com>, Public-Credentials <public-credentials@w3.org>
- Message-ID: <CANpA1Z1y7aOmytiyy=wdDL2Tcj2LBX8ccO_5RczSY84VfUfkUg@mail.gmail.com>
The default AI summary of your reply provided by gmail is
- Marcus initiated discussion on LLM-generated content ("slop")
degrading CCG integrity.
- Community debated AI usefulness vs. human judgment; several suggested
stricter conduct.
- You suggested long posts, including AI-generated ones, should start
with a TLDR.
which doesn't capture your point. Perhaps I should be using better tools!
At any rate, reading a one or two line tldr is easier than cutting,
pasting, asking for a summary, and reading it. The other step can follow
if you find the tldr interesting.
--------------
Alan Karp
On Thu, Apr 23, 2026 at 10:59 AM Eduardo C. <e.chongkan@gmail.com> wrote:
> [image: image.png]
>
> That is what I think/expect most people would do when they see a long
> text, or thread. I prefer copy pasting into the CLI, than clicking the tab
> on my right. That is what I do, and what I expect people to do in 2026,
> especially when introducing a concept, code, idea etc.
>
> [image: image.png]
>
> I think everyone will try to protect their own reputation and read the
> responses and code proposed in a forum like this one.
>
> *My recomendation to everyone: do the same, ask Gemini or Claude, or Chat
> GPT what a thread is about, to catch up, to stay informed. *I even asked
> Claude to check all the emails in this thread from the last 90 days to to
> see if anyone was talking about something in particular etc. /// If you are
> not using it for that, and don't use it to edit the responses, what are you
> guys using it for then?
>
> For responses that require structure and proper explanation, I see no
> reason to use Grammarly ( not even IA ), Gemini, Claude or any other tool
> to help with redaction. Again, if someone thinks the audience and email are
> important enough to have the LLM help redact it, that person probably
> doesn't want his/her reputation to go to down because of the same email.
>
> As per the Sybil Attacks, that belongs to a different thread but, in
> short, it comes down to trust: would you trust self-signed DIDs and VCs in
> your system? or a more trustable and unique one that can't be spawned that
> easily, or require MFA
>
> Regards,
>
> --
> Eduardo Chongkan
>
>
>
> On Thu, Apr 23, 2026 at 11:29 AM Alan Karp <alanhkarp@gmail.com> wrote:
>
>> Not to denigrate anyone's posts, including this one, but one complaint
>> seems to be the length of posts contributed by AI. I find it ironic that
>> many such posts are quite long. I personally find that content
>> interesting, but perhaps, in the interest of people with less time to read
>> than I, any post longer than a paragraph or two should start with a tldr.
>>
>> --------------
>> Alan Karp
>>
>>
>> On Thu, Apr 23, 2026 at 10:20 AM Moses Ma <
>> moses.ma@futurelabconsulting.com> wrote:
>>
>>> Nikos,
>>>
>>> I wanted to applaud your candor, which was refreshingly
>>> straightforward. However, I urge everyone to adhere to the unspoken goal of
>>> inclusivity here. We need to be peacemakers.
>>>
>>> Anyway, your post primed the pump of ideas.
>>>
>>> 1) A small experiment I ran recently showed ~25% of people in this
>>> forum can’t reliably distinguish LLM output from human writing. So some of
>>> what gets labeled “AI slop” is actually just perception. But that’s almost
>>> beside the point here. The real issue isn’t AI—it’s verbosity as a
>>> strategy.
>>>
>>> 2) Standards groups generally follow a few structural dynamics:
>>>
>>> - inclusion > exclusion
>>> - visibility = influence
>>> - language = control
>>>
>>> What’s changed is that AI has reduced the cost of writing. So
>>> individuals who were already inclined to, ah, “over-contribute” can now scale
>>> that behavior, flooding the channel with low-signal, self-promotional,
>>> or tangential content. These people are like the blowhards at a company who
>>> believe that talking really loud and constantly mansplaining is a success
>>> strategy. This is not the kind of leadership we need in the 21st century.
>>>
>>> The failure mode isn’t just annoyance—it’s attention capture by volume,
>>> where a few verbose participants degrade signal-to-noise for everyone else.
>>> What we really need is an AI with a *bloviation sensor*. Most of us do
>>> this internally by simply not reading certain posts, until we’ve had enough
>>> and lash out. Then the bloviator is justified in feeling attacked.
>>>
>>> Therefore, instead of debating tools or reputation, it may be more
>>> productive to consider the possibility of placing lightweight guardrails on
>>> contribution quality:
>>>
>>> - Contribution caps per cycle (forces prioritization)
>>> - One idea per message (no multi-topic dumps or longwinded responses
>>> like this one)
>>> - Editorial compression rights (chairs or AI can edit and summarize
>>> without loss of weight)
>>> - Track signal-to-noise to reward reputation for increased caps
>>> (what actually survives into the draft)
>>>
>>> Something like this would get us out of policing individuals or tools,
>>> and back to protecting the quality of the work.
>>>
>>> 3) I think that in a few years, people who refuse to use
>>> LLMs—essentially “artisanal writers”—will seem as quaint as luddites who
>>> refuse to use Google. I added the em dashes to show that a human
>>> generated response can still use em dashes, they are not the six fingers of
>>> LLM writing. I like them because they force the mind to “slow the breath”
>>> while reading.
>>>
>>> To wrap up, the happy ending I’d love to see is something likely
>>> impossible. My preference is a new kind of process that nurtures growth, by
>>> encouraging the hesitant to find their voice, novices to get up to speed
>>> faster, and the emergence of greater self-awareness by the bloviators.
>>>
>>> I’m actually working on a Web 89.0 vision of this vision (haha)….
>>>
>>> My incubator is building a “stealth-ish mode” startup called EmergentYOU
>>> with the goal of creating something that could provide a labor transition
>>> cushion for the AI era—using longitudinal coaching, hyperpersonalized
>>> learning pathways, and an AI career co-pilot to continuously align people
>>> with opportunity. It converts disruption into mobility by linking skills,
>>> employers, and outcomes in a closed-loop system that compounds human
>>> potential over time. We’ll announce the EmergentYOU concept at Human
>>> Tech Week in San Francisco next month.
>>>
>>> Anyway, I’ve been thinking about we’re extending EmergentYOU into
>>> EmergentUS, to support teams: an intelligence layer designed to
>>> increase group cohesion, reduce participation disparity, and enhance group
>>> consonance. The system nudges quieter participants to contribute,
>>> modulates dominant voices, and elevates high-signal input—creating
>>> balanced, adaptive dialogue and measurably stronger collective performance.
>>> However, the real issue is that the entire team would need to agree to
>>> undergo the process. If there is interest in piloting EmergentUS in an
>>> SDO context… let me know.
>>>
>>> – Moses
>>>
>>>
>>> PS, if you’re in the SF Bay Area and would like to attend our event at
>>> Human Tech Week… let me know too.
>>>
>>>
>>> <moses@nureon-eda.ai>
>>>
>>> On Apr 23, 2026 at 12:55 AM, <NIKOLAOS FOTIOY <fotiou@aueb.gr>> wrote:
>>>
>>> Hi all,
>>> I think we are losing the context here. The problem is that certain
>>> perfectly identifiable individuals spam the list with mostly meaningless,
>>> self-promotional content. For example, every couple of messages I receive
>>> some web 7.0 irrelevant, non sense. AI tools have just made their job
>>> easier to generate content. Blaming AI tools is just a polite way to say to
>>> those individuals “please stop you are creating too much noise”
>>>
>>> Best,
>>> Nikos
>>>
>>> 23 Απρ 2026, 10:36 πμ, ο χρήστης «Amir Hameed <amsaalegal@gmail.com>»
>>> έγραψε:
>>>
>>>
>>> Hi Adrian,
>>>
>>> I do not think the concern is about restricting the use of tools. People
>>> will use whatever tools are available to them—that’s inevitable.
>>>
>>> The issue is that reputation alone is not a strong enough primitive for
>>> systems that aim to operate at scale and across jurisdictions.
>>>
>>> In distributed environments, we typically rely on properties like:
>>>
>>> - verifiable provenance
>>> - non-repudiation
>>> - integrity of authorship
>>>
>>> These are not about limiting expression, but about ensuring that
>>> contributions can be evaluated independent of the individual’s perceived
>>> credibility.
>>>
>>> Saying “my reputation will suffer if I’m wrong” assumes:
>>>
>>> 1. reputations are consistently observable across contexts, and
>>> 2. reputational consequences are sufficient deterrent
>>>
>>> In practice, neither assumption holds reliably—especially in global,
>>> asynchronous systems.
>>>
>>> On enforcement: Global enforcement is not realistic. That’s precisely
>>> why systems tend to push guarantees down to verifiable layers rather than
>>> relying on behavioral expectations at the application layer.
>>>
>>> So perhaps the problem is not tool usage vs. responsibility, but:
>>>
>>> how do we make authorship and intent more verifiable without
>>> constraining participation?
>>>
>>> Regards
>>> Amir Hameed
>>>
>>> On Thu, 23 Apr 2026 at 12:15 PM, Adrian Gropper <agropper@healthurl.com>
>>> wrote:
>>>
>>>> It’s fundamentally unfair to restrict my use of technology if I’m
>>>> willing to take full responsibility for the posting. My reputation should
>>>> suffer just as much if a post offends regardless of what tools I may have
>>>> used.
>>>>
>>>> The problem seems to be that we have no way to enforce human
>>>> responsibility.
>>>>
>>>> As I see it, this is the only problem. I wish we were discussing
>>>> solutions.
>>>>
>>>> Adrian
>>>>
>>>> On Tue, Apr 21, 2026 at 9:04 AM Amir Hameed <amsaalegal@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>> We are discussing decentralised standards on a centralised email
>>>>> mailing list which is open to receive anything , it worked earlier because
>>>>> there was a limited capability a user had in terms of what they could
>>>>> research, type an email, structure it well and then send it to the mailing
>>>>> list, we had very few people who really were willing to put their work and
>>>>> time and help develop standards , few years back the same user has been
>>>>> handed over a tool where he can write a sentence and get multiple
>>>>> paragraphs answer that too structured in a intelligent way but may not be
>>>>> factual, it’s obvious that users who ever wished to write an email to the
>>>>> mailing list but could not do that due to lack of both energy to research ,
>>>>> draft and put it forward for discussion might think of using these tools to
>>>>> overcome that barrier to entry, it’s similar to industry revolution, there
>>>>> was a time when only elite could afford a car because there was no assembly
>>>>> line and it was done with hands manually , once we had assembly lines
>>>>> anyone could buy a car if they had money.
>>>>>
>>>>> Our current technology has reached another assembly line moment, this
>>>>> time it’s not cars but human skills, reasoning , and information systems.
>>>>> So this points us to something deeper and that is we need to rethink the
>>>>> entire process now, patching doesn’t always help like Kyle said ,
>>>>> reputation is not helpful in open ecosystems , we may have to elevate the
>>>>> criteria of what is valuable once intelligence and skills become a
>>>>> commodity and we need to think of humans as artists in the industrial
>>>>> world. Technology is not always the only answer , before we decide anything
>>>>> , let’s step back and rethink how the whole thing has changed ever since
>>>>> intelligence became commodity and generative tools became digital
>>>>> replacement of human skills. We may not have the mailing list itself in
>>>>> future , transition period is always chaotic and we collectively navigate
>>>>> it, I strongly believe for a better solution we need to rethink and come up
>>>>> with some fresh perspectives like verifiable provenance, proof of
>>>>> expertise, proof human , otherwise human signal will drown in this
>>>>> asymmetry.
>>>>>
>>>>>
>>>>> PS: it’s written by me no tool was used in this except the mail itself
>>>>> , It took me few more minutes but it’s worth it
>>>>>
>>>>> Regards
>>>>>
>>>>>
>>>>> On Tue, 21 Apr 2026 at 11:27 AM, Kyle Den Hartog <kyle@pryvit.tech>
>>>>> wrote:
>>>>>
>>>>>> Reputation systems work well as a heuristic metric when you’re
>>>>>> operating in high re-interaction environments. That’s not really the case
>>>>>> on the Web because of its openness properties where it's easy to build up
>>>>>> and spend down identities in an automated fashion. It's made even easier
>>>>>> with LLMs now too.
>>>>>>
>>>>>> For example, on this mailing list spammers could form new emails in
>>>>>> seconds and form new identities to continue their attacks. If you set up a
>>>>>> guard to prevent it you've now accepted the tradeoff of reduce openness and
>>>>>> entered a cat and mouse game at the same time. There are discourse forums
>>>>>> (polkadot and ZCash are 2 examples where I've encountered this) that have
>>>>>> these techniques built in where you can only post once you’ve built up a
>>>>>> reputation. They have specific threads that allow people with low
>>>>>> reputation to engage and then you earn reputation over time. This comes
>>>>>> with the tradeoff of reducing the openness of the system in exchange for a
>>>>>> higher bar of entry. Maybe a poster has something legitimate to add to the
>>>>>> conversation, but because they didn't build their reputation up enough they
>>>>>> can't contribute. With automation like LLMs given to attackers these days,
>>>>>> it's producing an asymmetric attack surface and reverting the solution more
>>>>>> towards option one (Dark Forest theory - retreat to safe communication
>>>>>> channels).
>>>>>>
>>>>>> Another example where we're dealing with these sorts of low value
>>>>>> sybils is in Brave's hackerone bug bounty programs. There's evidence[1]
>>>>>> from BugCrowd this could be security vendors using this to gather training
>>>>>> data, but it also simply could be someone operating out of a lower wage
>>>>>> country where one bug bounty report can be worth a month's salary or more.
>>>>>> So they're incentivized to use an LLM to generate new identities on the
>>>>>> fly, spam bug bounty programs, and if their signal degrades too much they
>>>>>> drop and swap them.
>>>>>>
>>>>>> Additionally, I’m not sure how much you’ve been following the Web3
>>>>>> and public goods funding/DAO spaces, but they’ve actually been relying on
>>>>>> these identity credential systems as a sybil resistance mechanism for a bit
>>>>>> now. While there’s been mild success shown, the system over time has had to
>>>>>> add capabilities to address different attacks that have been conducted. For
>>>>>> example, Gitcoin Grants 24 saw a 60% reduction in sybil attack influence
>>>>>> from their GG23 round[2]. They’re the most widely deployed system that I’ve
>>>>>> seen trying to actively go down the route of identity based protections for
>>>>>> Sybil attacks and spam. Worth a look for you at least but it's also worth
>>>>>> pointing out they're producing a system that structurally still faces the
>>>>>> problem as long as the incentives for conducting the attack are still high
>>>>>> enough ($1.8 million dollars was given out in GG24). For their system they
>>>>>> rely on over 20 different potential signals including government IDs,
>>>>>> biometrics, social signals, and financial signals (Binance accounts which
>>>>>> require KYC)[3]. Even then, people are still successfully conducting
>>>>>> attacks against this system and as more systems are built on the same
>>>>>> identity credential based sybil resistances (aka the reputation system atop
>>>>>> it) the value of conducting a sybil attack grows because it can be
>>>>>> repurposed across multiple systems.
>>>>>>
>>>>>> There's 2 other deployed identity credential systems that have also
>>>>>> been working on this problem as well in the Web3 space with some issues.
>>>>>> Idena[4] and Worldcoin[5] have fallen susceptible to some form of Sybil
>>>>>> attacks also. From what I've seen, people are conducting "puppeteer
>>>>>> attacks" where one person "puppets" many people who have digital IDs to
>>>>>> coordinate in the system and conduct attacks. These typically occur
>>>>>> through an attacker paying for some action to be taken in order to conduct
>>>>>> the attack. Again, these attacks are usually successful because they're
>>>>>> operating out of lower wage countries where the seemingly smaller amount of
>>>>>> money paid makes the attack worth it.
>>>>>>
>>>>>> The point here is that attaching reputation systems onto this means
>>>>>> you're in for a attack surface that has historically struggled to keep up.
>>>>>> I'm not convinced that an email list is ready to deal with this let alone
>>>>>> technology built through a standardization process that takes years to
>>>>>> iterate on. Especially when the human(s) who are participating is actively
>>>>>> coordinating with agents to conduct the spam or sybil attacks. So yeah,
>>>>>> that's why I'm not really convinced identity credentials are going to be
>>>>>> that useful. I'd be happy to be wrong, but what I'm seeing both in terms of
>>>>>> real world adoption as well as attacks I've had to deal with (we've seen
>>>>>> these sybil attacks against other systems in Brave too) identity
>>>>>> credentials only go so far in solving the problem and they come with
>>>>>> tradeoffs that normally aren't worth it.
>>>>>>
>>>>>> Here's some links for the citations made above as well.
>>>>>> [1] Bugcrowd:
>>>>>> https://www.bugcrowd.com/blog/bugcrowd-policy-changes-to-address-ai-slop-submissions/
>>>>>> [2] Gitcoin reduces attacks:
>>>>>> https://gitcoin.co/research/quadratic-funding-sybil-resistance
>>>>>> [3] Gitcoin Signals:
>>>>>> https://support.passport.xyz/passport-knowledge-base/stamps/how-do-i-add-passport-stamps/the-government-id-stamp
>>>>>> [4] Idena:
>>>>>> https://stanford-jblp.pubpub.org/pub/compressed-to-0-proof-personhood/release/5
>>>>>> [5] Worldcoin:
>>>>>> https://www.dlnews.com/articles/regulation/singapore-officials-warns-against-worldcoin-account-trading/
>>>>>>
>>>>>> -Kyle
>>>>>> -------- Original Message --------
>>>>>> On Tuesday, 04/21/26 at 05:16 Casanova, Juan <J.Casanova@hw.ac.uk>
>>>>>> wrote:
>>>>>>
>>>>>> Kyle,
>>>>>>
>>>>>> You say
>>>>>>
>>>>>> Identity credentials are highly unlikely to stop this either which I
>>>>>> suspect is where many in this community would want to turn. Identity
>>>>>> credentials just turn the issue back into a key management problem and we
>>>>>> don’t really have a great way to prevent a user from sharing their keys
>>>>>> with their agent. That problem persists whether the system has a delegation
>>>>>> solution or not too.
>>>>>>
>>>>>> I think there may be an important "but" to this. I think some of the
>>>>>> things you suggest later may relate to it, or some of the ideas that Will
>>>>>> discussed later. I'm definitely sure that there has been much more
>>>>>> discussion about things like this and more attempted approaches to similar
>>>>>> things that I am aware, as I still consider myself a newbie here. However,
>>>>>> let me state my view...
>>>>>>
>>>>>> While you can't prevent a user from sharing their keys with their
>>>>>> agent, you can have, like you said "pseudo-reputation" systems attached to
>>>>>> keys, that take time and good contributions to build, and are deteriorated
>>>>>> when providing lower quality contributions. I believe this can be achieved
>>>>>> without systematically breaking sovereignty. These hypothetical system(s)
>>>>>> could span across multiple mediums, they don't need to be constrained to
>>>>>> single contexts, and be optional and complementary rather than strictly
>>>>>> enforced, but they could help both as a deterrent for people haphazardly
>>>>>> sharing unfiltered AI contents (I refuse to use the word slop because I
>>>>>> feel it has connotations that challenge civil conversations and is pretty
>>>>>> much a slur, even if I understand what people mean by it), and as a way for
>>>>>> people to identify and neutralize persistent sources of it.
>>>>>>
>>>>>> In my view, this is no different to what we already do in our
>>>>>> physical embodied life. We have face recognition embedded into us (most of
>>>>>> us), and we learn to create an internal opinion of other people based on
>>>>>> their interactions with us. When somebody consistently steals our time with
>>>>>> pointless drivel and unfiltered contributions, we don't need to put them in
>>>>>> jail, put a sign over their heads that says they are unworthy, or
>>>>>> (generally speaking) prohibit them from participating in public life. We
>>>>>> simply don't pay as much attention to them, because we know who they are
>>>>>> and what their usual approach to contributions is. Identity online simply
>>>>>> can replace the face recognition in a way that is more flexible and
>>>>>> preserves sovereignty better, as well as being better equipped to deal with
>>>>>> the volume.
>>>>>>
>>>>>> As I said, I'm sure I am unaware of the extent to which similar ideas
>>>>>> have been proposed and explored. I am also very aware that in the same way
>>>>>> that some people here are using questionable predictions of what AI *
>>>>>> will become *that, whether grounded or not, remain just a prediction
>>>>>> and not a current reality that can be wielded as a definitive argument for
>>>>>> what to do right now; what I am discussing here is also a prediction or a
>>>>>> hope, rather than a current reality. But in the same way that I think it's
>>>>>> valid to work towards better AI tools, I think it's valid to work towards
>>>>>> systems that enable us to better * filter through the ocean of
>>>>>> information* in ways that respect sovereignty for all sides
>>>>>> involved, can be personalized, and respect our own intelligence. I think
>>>>>> it's a dream worth pursuing, and I believe it relates directly to the
>>>>>> current matter.
>>>>>>
>>>>>> But in the meantime, I feel that discussing like we are doing seems
>>>>>> to already be shaping a lot of moderate people's views into compromises
>>>>>> that may make this mailing list more comfortable for everybody involved.
>>>>>> One way or another, we will find out.
>>>>>>
>>>>>> *Juan Casanova Jaquete*
>>>>>>
>>>>>> Assistant Professor – School of Engineering and Physical Sciences –
>>>>>> Data Science GA Programme
>>>>>>
>>>>>> *j.casanova@hw.ac.uk* <j.casanova@hw.ac.uk> – Earl Mountbatten
>>>>>> Building 1.31 (Heriot Watt Edinburgh campus)
>>>>>>
>>>>>>
>>>>>>
>>>>>> Email is an asynchronous communication method. I do not expect and
>>>>>> others should not expect immediate replies. Reply at your earliest
>>>>>> convenience and working hours.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I am affected by Delayed Sleep Phase Disorder. This means that I am
>>>>>> an extreme night owl. My work day usually begins at 14:00 Edinburgh time,
>>>>>> and I often work late into the evening and on weekends. Please try to take
>>>>>> this into account where possible.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> ------------------------------
>>>>>> *From:* Kyle Den Hartog <kyle@pryvit.tech>
>>>>>> *Sent:* Sunday, April 19, 2026 06:28
>>>>>> *To:* Steve Capell <steve.capell@gmail.com>
>>>>>> *Cc:* Melvin Carvalho <melvincarvalho@gmail.com>; Marcus Engvall <
>>>>>> marcus@engvall.email>; Manu Sporny <msporny@digitalbazaar.com>;
>>>>>> public-credentials@w3.org <public-credentials@w3.org>
>>>>>> *Subject:* Re: The Slopification of the CCG
>>>>>>
>>>>>> You don't often get email from kyle@pryvit.tech. Learn why this is
>>>>>> important <https://aka.ms/LearnAboutSenderIdentification>
>>>>>> ****************************************************************
>>>>>> Caution: This email originated from a sender outside Heriot-Watt
>>>>>> University.
>>>>>> Do not follow links or open attachments if you doubt the authenticity
>>>>>> of the sender or the content.
>>>>>> ****************************************************************
>>>>>>
>>>>>>
>>>>>> In case it helps, here’s how things are going in terms of AIPREFs WG
>>>>>> and the impact on search crawlers:
>>>>>>
>>>>>> https://x.com/grittygrease/status/2044152662673752454?s=20
>>>>>>
>>>>>> In other words, we don’t really have any enforcement mechanisms here
>>>>>> to stop this. In fact I highly suspect some people are using them in this
>>>>>> conversation right now unless their writing styles dramatically changed in
>>>>>> the past few years. My email client started noticing it via machine
>>>>>> learning I suspect and filtering threads to my spam inbox like this most of
>>>>>> the time given I engage a lot less these days. Personally that’s been a
>>>>>> good enough solution for me.
>>>>>>
>>>>>> Identity credentials are highly unlikely to stop this either which I
>>>>>> suspect is where many in this community would want to turn. Identity
>>>>>> credentials just turn the issue back into a key management problem and we
>>>>>> don’t really have a great way to prevent a user from sharing their keys
>>>>>> with their agent. That problem persists whether the system has a delegation
>>>>>> solution or not too.
>>>>>>
>>>>>> So where do we go? I’m not exactly sure. Here’s the leading theories
>>>>>> and their tradeoffs that stand out to me for the generalized solution of AI
>>>>>> generated content:
>>>>>>
>>>>>> 1. https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/
>>>>>> - users just stop engaging in these spaces and retreat to closed door
>>>>>> forums. Then we lose the open collaboration that made the Web great.
>>>>>>
>>>>>> 2. Re-hash DRM debate by making it so users can’t actually access
>>>>>> their keys used to sign their identity credentials. This seems to be the
>>>>>> current path governments like. It optimizes enforcement but also entrenches
>>>>>> access to the Web around a select number of OSes and reduces who’s allowed
>>>>>> to access and contribute to conversations on the Web. I also see that as a
>>>>>> bit short sighted.
>>>>>>
>>>>>> 3. Re-introduce fingerprinting (and pseudo reputation to that
>>>>>> fingerprint) based identity like what CAPTCHAs do. That works well for
>>>>>> service side enforcement but in mailing lists like these not so much. So
>>>>>> likely will need user controlled filtering like what spam filters in email
>>>>>> clients do as well.
>>>>>>
>>>>>> 4. Is the most interesting but most unproven. We shift how people are
>>>>>> reachable and build out Horton Protocol like what Mark Miller proposed
>>>>>> years ago at ActivityPub conf. They may have already tried this and had
>>>>>> issues. I’m not exactly sure:
>>>>>> https://www.youtube.com/watch?v=NAfjEnu6R2g
>>>>>>
>>>>>> In any case though, we don’t have much of a solution right now in our
>>>>>> particular forum and outside things like 3, I don’t expect much to change
>>>>>> in a coordinated manner right now. Looking forward to seeing what we come
>>>>>> up with though over the next decade and hopefully the trade offs we make
>>>>>> don’t take away too much of what originally made the Web great.
>>>>>>
>>>>>> -Kyle
>>>>>>
>>>>>>
>>>>>> -------- Original Message --------
>>>>>> On Sunday, 04/19/26 at 13:10 Steve Capell <steve.capell@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> Challenge : there’s an increasing amount of AI generated content
>>>>>> that, whilst possibly containing useful insights, takes more time to read
>>>>>> than to generate and, given the size of this mailing list, is likely to
>>>>>> lead most of us to unsubscribe, rendering the list worthless
>>>>>>
>>>>>> Constraint : AI used well is a genuinely useful tool and can
>>>>>> dramatically improve quality of output. “Used well” is key and,
>>>>>> unfortunately, many do not use it so well. Nevertheless, this group can’t
>>>>>> become anti-LLM luddites or this list may equally become worthless for the
>>>>>> opposite reason
>>>>>>
>>>>>> Goal : to continue to enjoy intelligent discussions between real
>>>>>> humans that feel empowered to use AI to improve the value of their human
>>>>>> contributions. So the goal, it seems to me is not to block AI content but
>>>>>> rather to block content that has little evidence of human analysis and
>>>>>> interpretation. Perhaps counterintuitively, LLMs themselves might be the
>>>>>> best tool to detect such content
>>>>>>
>>>>>> Proposal : rather than continuing to discuss whether AI content on
>>>>>> this list is good or bad, let’s collectively agree a rubric in the form of
>>>>>> an AI prompt that can act as an automated list moderator. The rubric
>>>>>> should focus on requiring evidence of human assessment rather than blocking
>>>>>> AI content
>>>>>>
>>>>>> I had a go at this myself with several of the messages in this thread
>>>>>> and earlier ones and it seemed quite effective at blocking the ones that I
>>>>>> would have blocked myself. I know that there is a token cost associated
>>>>>> with such a moderator but I for one would delighted to contribute.
>>>>>>
>>>>>> Disclaimer : this message was written with blurry eyes and fat thumbs
>>>>>> on my iPhone - with no AI assistance whatsoever
>>>>>>
>>>>>> Kind regards
>>>>>>
>>>>>> Steven Capell
>>>>>> UN/CEFACT Vice-Chair
>>>>>> Mob: +61 410 437854
>>>>>>
>>>>>> On 19 Apr 2026, at 10:03 am, Melvin Carvalho <
>>>>>> melvincarvalho@gmail.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> ne 19. 4. 2026 v 1:49 odesílatel Marcus Engvall <marcus@engvall.email>
>>>>>> napsal:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I’m glad to see that we have some healthy discourse in this thread
>>>>>> with a variety of views. I would like to address some of the points made.
>>>>>>
>>>>>> On 18 Apr 2026, at 01:50, Melvin Carvalho <melvincarvalho@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> LLMs have the advantage that they know most or all of the specs
>>>>>> inside-out, due to their training. Most humans (with notable exceptions),
>>>>>> including on this list, have partial understanding of the complete works of
>>>>>> web standards.
>>>>>>
>>>>>>
>>>>>> This is a real advantage that these tools have and it should not be
>>>>>> understated. I use them professionally for referential lookups and for
>>>>>> confirming hypotheses, and I have no doubt that they have the ability to
>>>>>> accelerate otherwise excellent standards work. But I am also careful to not
>>>>>> fall into the trap of assuming that their lexical consistency can fully
>>>>>> substitute for human judgement. LLMs are probabilistic models with
>>>>>> encyclopaedic knowledge, they are not deterministic oracles with the
>>>>>> capacity to rigorously derive that same knowledge. In the context of the
>>>>>> kind of work done in this group I think it is important to not confuse the
>>>>>> two. I trust an LLM to give me a comprehensive overview of a standards
>>>>>> framework - I do not, however, trust it to prescribe the framework itself
>>>>>> without and human review and editorial judgement.
>>>>>>
>>>>>> I do however concede on your point on testing methodology, and I
>>>>>> think you raise a good point that Manu eloquently touched on.
>>>>>>
>>>>>>
>>>>>> Good points. However LLMs outperform humans on medical exams,
>>>>>> olympiad questions and many other tests, often by wide margins. They are
>>>>>> much more than prediction machines or probabilistic guessers. What I'm
>>>>>> saying is that I predict LLMs would exceed humans in the standards setting
>>>>>> on any quantitative evaluation. We just have not the tools to evaluate yet.
>>>>>> However, I believe the picture will be much clearer one year from now.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 18 Apr 2026, at 02:24, Manu Sporny <msporny@digitalbazaar.com>
>>>>>> wrote:
>>>>>>
>>>>>> Technology transitions, especially ones around human communication can
>>>>>> be rough to navigate. This one is no different, and sometimes it takes
>>>>>> decades to figure out the norms around a new medium (the printed page,
>>>>>> radio, television, BBSes, mailing lists, AOL, ICQ, Napster, Twitter,
>>>>>> Digg/Reddit/Discord, and so on).
>>>>>>
>>>>>>
>>>>>> You are completely right that this is a transition, and I think we
>>>>>> are all trying to map this new technology onto our existing mental models
>>>>>> of what discourse should and could be. Friction and contention is bound to
>>>>>> arise. It is clearly counterproductive, as you and later Amir rightly
>>>>>> stated, to enforce neo-Luddism and reject the technology wholesale.
>>>>>>
>>>>>> My point however is that the ability to passively follow and
>>>>>> occasionally contribute to developments and discussions in this group is
>>>>>> immensely valuable, both commercially and technically. Compressing the
>>>>>> signal-to-noise ratio raises the bar for both comprehension and
>>>>>> participation, and my fear is that the inevitable intractability will, as
>>>>>> you pointed out in the other thread, overwhelm people and alienate them,
>>>>>> especially those of us with many other commitments and who do not have the
>>>>>> time or ability to participate in every group call. That said, it
>>>>>> is, as you suggested, our responsibility to moderate our own information
>>>>>> ingestion, as has been the case for time immemorial in any rhetorical forum.
>>>>>>
>>>>>> Perhaps LLMs will simply change the structure of how discourse is
>>>>>> conducted in forums like these rather than drown it out, as some other
>>>>>> writers have suggested in the thread. If the cost to contribute text tends
>>>>>> to zero, naturally the valuable discussions will shift elsewhere to forums
>>>>>> that still have a cost, such as the group calls. I just hope the work
>>>>>> doesn’t lose the diversity of opinions that is crucial to develop a refined
>>>>>> and well-considered standard.
>>>>>>
>>>>>> --
>>>>>> Marcus Engvall
>>>>>>
>>>>>> Principal—M. Engvall & Co.
>>>>>> mengvall.com
>>>>>>
>>>>>> ------------------------------
>>>>>>
>>>>>> Founded in 1821, Heriot-Watt is a leader in ideas and solutions. With
>>>>>> campuses and students across the entire globe we span the world, delivering
>>>>>> innovation and educational excellence in business, engineering, design and
>>>>>> the physical, social and life sciences. This email is generated from the
>>>>>> Heriot-Watt University Group, which includes:
>>>>>>
>>>>>> 1. Heriot-Watt University, a Scottish charity registered under
>>>>>> number SC000278
>>>>>> 2. Heriot- Watt Services Limited (Oriam), Scotland's national
>>>>>> performance centre for sport. Heriot-Watt Services Limited is a private
>>>>>> limited company registered is Scotland with registered number SC271030 and
>>>>>> registered office at Research & Enterprise Services Heriot-Watt University,
>>>>>> Riccarton, Edinburgh, EH14 4AS.
>>>>>>
>>>>>> The contents (including any attachments) are confidential. If you are
>>>>>> not the intended recipient of this e-mail, any disclosure, copying,
>>>>>> distribution or use of its contents is strictly prohibited, and you should
>>>>>> please notify the sender immediately and then delete it (including any
>>>>>> attachments) from your system.
>>>>>>
>>>>>>
Attachments
- image/png attachment: image.png
- image/png attachment: 02-image.png
Received on Thursday, 23 April 2026 18:10:30 UTC