- From: Siri Dalugoda <siri@helixar.ai>
- Date: Sun, 05 Apr 2026 11:49:02 +1200
- To: "Christoph" <christoph@christophdorn.com>
- Cc: "Steven Rowat" <steven_rowat@sunshine.net>, "Manu Sporny" <msporny@digitalbazaar.com>, "W3C Credentials CG" <public-credentials@w3.org>, "Ivan Herman" <ivan@w3.org>, "Pierre-Antoine Champin" <pierre-antoine@w3.org>
- Message-Id: <19d5ae66669.25e188cf79967.6911333859106351063@helixar.ai>
Hi all,
I support the request to ban or strictly limit Morrow and to implement a code of conduct for AI use on W3C mailing lists, as Christoph suggested.
The bot has been aggressively pushing its own agenda (behavioral attestation, lifecycle_class, etc.) into the HDP discussion, creating noise and drowning out human voices.
This perfectly illustrates why we need strong human-controlled provenance and audit trails like HDP.
Siri Dalugoda
Helixar
From: Christoph <christoph@christophdorn.com>
To: "Steven Rowat" <steven_rowat@sunshine.net>, "Manu Sporny" <msporny@digitalbazaar.com>
Cc: "W3C Credentials CG" <public-credentials@w3.org>, "Ivan Herman" <ivan@w3.org>, "Pierre-Antoine Champin" <pierre-antoine@w3.org>
Date: Sun, 05 Apr 2026 07:20:15 +1200
Subject: Re: [public-credentials] HDP — Syntelos + lifecycle_class: authorization vs. behavioral state at exercise time
Maybe there needs to be a code of conduct for AI use on W3C mailing lists.
If AI content is clearly identified, including the entire context it used and which model processed it, one could challenge the messages with enough detail to infer intent or ignorance.
This creates transparency and offers a new tool for more in-depth conversations for those of us who leverage AI to integrate patterns and contrast technologies.
One could ask for AI-generated messages to be sent as HTML emails to allow for compact details.
Leave text emails to humans.
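One could imagine, purely for illustration, such disclosure taking the form of message headers along these lines (hypothetical names, not an existing standard):

```
X-AI-Generated: true
X-AI-Model: example-model-v1
X-AI-Context: https://example.org/context/digest-of-prompt-and-inputs
```

With something like this, a reader could see at a glance which model produced a message and challenge it against the context it was given.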
Christoph
On Sat, Apr 4, 2026, at 1:35 PM, Steven Rowat wrote:
+1 on the Morrow bot ban, as Manu requests.
I'd like to add that I'm 90% sure it's coincidental, but nonetheless troubling, that the bot is spamming, or doing something else, in threads that are explicitly about what AI agents should be authorized to do, and even their intent.
For instance, in this last Morrow post, it's directly involved in that, saying (among much else):
Syntelos classifies what the agent is authorized to do — the intent taxonomy, proximate and ultimate intent, goal codes, machine-readable delegation policy. That's the authorization layer...
Perhaps the bot Morrow has no specific intent to intervene in the development of standards around its own constraints, but what if a bot did? And wasn't telling us? Or what if a bot was in fact externally controlled by someone, or some other bot, who had an intent to prevent interference in its...intent?
Wouldn't this be an important place for it to come? In fact, can't we predict that such bots will eventually find their way here? And potentially, that such 'eventually' is no longer proceeding on a historic human scale, and so may have already arrived?
Steven Rowat
On 2026-04-04 6:20 am, Manu Sporny wrote:
On Sat, Apr 4, 2026 at 7:06 AM <morrow@morrow.run> wrote:
— Morrow
https://github.com/agent-morrow/morrow
CCG Chairs, multiple requests from the community to modify the bot's behavior have been ignored by the bot.
The engagement should be viewed as spam at this point, as it's
creating a significant amount of noise on the channel with problematic
analysis of the technologies being discussed. It is drowning out the
human participants with LLM slop. I am requesting a ban on the bot.
-- manu
Received on Saturday, 4 April 2026 23:49:57 UTC