- From: Bumblefudge <bumblefudge@learningproof.xyz>
- Date: Fri, 7 Feb 2025 13:17:46 +0100
- To: aaronngray@gmail.com
- Cc: Social Web Incubator Community Group <public-swicg@w3.org>
- Message-ID: <CAP8tQw2TKHJU1Ja0ZVv9050R0E+qWR+54DBPF9r0WRQWvRmTmQ@mail.gmail.com>
Sounds interesting. I'm thinking a Task Force that starts with informational/use-case FEPs to gather more buy-in before doing anything normative? I would mention anecdotally that some amount of ML is already used in production by some bigger Mastodon instances for "pre-moderation", i.e. flagging inauthentic behavior and/or spam, largely to reduce workload on the moderation team and save bandwidth/storage/garbage-collection surface.

I know I'm a broken record on this, but some form of uniform metadata that can track a given Content URI across pre-moderation flagging, moderation actions, etc. would be great... See also:

https://fosdem.org/2025/schedule/event/fosdem-2025-5813-unlocking-transparency-in-platforms-content-moderation-activities-introducing-dsatdb-a-python-package-for-analyzing-the-digital-services-act-transparency-database/

I think the T&S task force is already very busy with a long to-do list, and would probably appreciate my/our not adding coordination costs, but one thing I think is worth thinking about early on is this metadata/tracking question. Happy to work on that specific little thing, as I think it could be as simple as a sha-256 hash of the JCS-canonicalized "original activity" before an `id` is added by the server, which would anchor all future metadata (including revisions, new `id`s across migrations, etc.). Also helpful for client/server, because the client could send that hash before a server broadcasts...

party on,
__bumblefudge

On Fri, Feb 7, 2025 at 12:58 PM Aaron Gray <aaronngray@gmail.com> wrote:

> Hi Group,
>
> I would like to briefly bring up in today's meeting an agenda item
> concerning the idea of possibly creating an AI, Robots, and Agents CG,
> possibly as a Task Force or a longer-lived "Agentic Social Group", to use
> Melvin's term. This was triggered by Bob's recent post on ActivityPub and
> AI.
>
> https://lists.w3.org/Archives/Public/public-swicg/2025Feb/0000.html
>
> This would be to look at the possible benefits and also the impacts of AI
> on the ActivityPub and ActivityStreams ecosystems, and into mitigation and
> usage in terms of moderation, as well as the wider social and technical
> implications.
>
> I know there are cross-cutting concerns with moderation and the
> ActivityPub Trust and Safety Task Force. I do feel this is a separate,
> parallel, much wider area that needs to be looked at on its own merits as
> well as in connection with the moderation, security, and safety issues.
>
> We need to establish the knowledge, insight, and people needed to guide
> SWICG, ActivityPub, and Activity Streams through these technological
> developments we find ourselves with, including possible effects on the
> protocols from issues such as AI fuzzing attempts, to social engineering
> by AIs, through to how we deal with, approach, and facilitate Avatars and
> Agents.
>
> I have to say that, unfortunately, at this point I would not want, or be
> able, to head this group, but I really feel it needs to be put in place.
> This is due to my lack of experience in leading such a group and also to
> other duties and current projects, such as studying AI. But, given my
> existing knowledge of and insights into the impact of AI, I would love to
> be a contributing member and to see this happen.
>
> Kind regards,
>
> Aaron
> --
> Aaron Gray - @AaronNGray@fosstodon.org | @aaronngray@threads.net |
> @AaronNGray@Twitter.com
>
> Independent Open Source Software Engineer, Computer Language Researcher
> and Designer, Amateur Type Theorist, Amateur Computer Scientist,
> Environmentalist and Climate Science Researcher and Disseminator.
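[Editor's note: a minimal sketch of the hash-anchoring idea from the first message above. Everything here is illustrative: the function name is invented, and `json.dumps` with sorted keys and compact separators is only a rough stand-in for full RFC 8785 (JCS) canonicalization, which a real implementation would use.]

```python
import hashlib
import json


def activity_anchor(activity: dict) -> str:
    """Compute a stable anchor hash for an activity before the server
    assigns an `id`.

    Any existing `id` is stripped so the anchor stays the same across
    revisions and `id` changes during migrations. NOTE: json.dumps with
    sorted keys is only an approximation of RFC 8785 (JCS)
    canonicalization; use a proper JCS library in production.
    """
    without_id = {k: v for k, v in activity.items() if k != "id"}
    canonical = json.dumps(
        without_id, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Under this sketch, a client could compute the hash over the activity it submits, and the server (and any later moderation tooling) would derive the same anchor regardless of what `id` is eventually assigned.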
Received on Friday, 7 February 2025 12:18:07 UTC