- From: Matthew Terenzio <mterenzio@gmail.com>
- Date: Sat, 18 Jan 2025 05:43:43 -0500
- To: Greg Scallan <greg@flipboard.com>
- Cc: Bob Wyman <bob@wyman.us>, Social Web Incubator Community Group <public-swicg@w3.org>
- Message-ID: <CANBc_uq+uSCE04fieSnHWmAGGu_0TGUGhvKZ-V5w8zr_Q39Bgw@mail.gmail.com>
> However, forcing default choices that might defeat the utility of the
> platform

Bob wrote "No algorithm is chosen by default," but the wording used is
actually "No algorithm selection is chosen by default." I'm taking that to
mean the default is not "no algorithm" (and thereby everything/spam), but
that a user must make a choice to start using the system. That choice could
be the above, but likely wouldn't be; it would be a choice which may or may
not filter spam.

Now, while I've been somewhat of an advocate of user-selected algorithms
since 2006, there have always been, and always will be, challenges to such
a system. The open social web does provide us with more of an opportunity
here, but at some level it merely shifts the responsibility to the
algorithm instead of the instance. And in the same way that new users have
the challenge of choosing an instance, they will now have the challenge of
choosing an algorithm.

And when there are thousands of algorithms, how are they presented fairly?
Just a random order? Surely a user will pick one of the first on the list
rather than reading about 100 algorithms before using the service. So that
is a UI issue and will probably lead to the degradation you mention, but in
a different way. And if the order of algorithms isn't random, then it's the
algorithm search that becomes the gatekeeper, and we've just created
another level of control.

On Sat, Jan 18, 2025 at 5:15 AM Greg Scallan <greg@flipboard.com> wrote:

> Thank you for this summary. I'm a huge fan of giving the end user the
> choice over the current (typical) ActivityPub server approach of giving
> the administrator of your instance the choice. Although you can choose to
> leave, it is not always easy to do so, and not everything moves with you,
> as it can be at the whim of your administrator (but that is a separate
> issue, really).
>
> Missouri seems to care about giving the user a choice on what content to
> see or not. The notion of transparency seems very well aligned with the
> privacy-minded folk. However, forcing default choices that might defeat
> the utility of the platform is a hard sell. Spammers exist and will
> leverage any successful platform in a significant way, and legislating
> that users should by default be forced to see that information will
> degrade everyone's experience, rendering the service useless. We will
> need to see how exactly this law is written to see whether it holds water
> or not.
>
> Moderation is only one aspect of what the user sees in the Bluesky
> example; feed generators are the second, and each feed generator can make
> a choice on what, how, and when you see something regardless of your
> moderation choices, so I'm not sure how this legislation would apply
> there, as those are run by a variety of organisations.
>
> I'm not sure this really impacts ActivityPub as much as it impacts AP
> implementations. The Fediverse Auxiliary Service Provider Specification
> from Mastodon is a great first example of something that could enable
> these kinds of choices. If there were labeling services (whether on AP or
> not) available for an instance to choose from using that spec (
> https://github.com/mastodon/fediverse_auxiliary_service_provider_specifications),
> then theoretically the client could allow users to choose which ones to
> use and how that affects their timeline.
>
> Greg
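To make that last idea concrete, here is a minimal sketch of how a client
might apply user-chosen labelers to a timeline. To be clear, the FASP spec
linked above does not define any of this; Labeler, labels_for, and
filter_timeline are invented stand-ins for whatever interface such services
would actually expose.

    # A rough sketch, not the FASP protocol: the spec does not yet define
    # label queries, so Labeler.labels_for is a stand-in for however a
    # client would learn a service's verdict on an object.
    from dataclasses import dataclass


    @dataclass
    class Labeler:
        """A labeling service the user has opted into."""
        name: str

        def labels_for(self, object_uri: str) -> set[str]:
            # A real client would query the service, or a locally synced
            # copy of its label stream, for labels on this object URI.
            raise NotImplementedError


    def filter_timeline(object_uris: list[str],
                        labelers: list[Labeler],
                        hidden_labels: set[str]) -> list[str]:
        """Hide any object that a chosen labeler tags with a hidden label."""
        visible = []
        for uri in object_uris:
            labels: set[str] = set()
            for labeler in labelers:
                labels |= labeler.labels_for(uri)
            if not labels & hidden_labels:
                visible.append(uri)
        return visible

The design point is simply that the hide/show decision is computed from
services and labels the user selected, rather than from the instance's
defaults.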
> On Jan 17, 2025, at 10:52 pm, Bob Wyman <bob@wyman.us> wrote:
>
> Yesterday, Missouri's Attorney General announced plans to issue
> regulations
> <https://ago.mo.gov/attorney-general-bailey-promulgates-regulation-securing-algorithmic-freedom-for-social-media-users/>
> that would require social media platforms to "offer algorithmic choice"
> to users. Clearly, it will take some time for this plan to be published,
> studied, challenged in court, etc. It is also quite possible that the
> regulations will be targeted at only the largest services (i.e., Twitter,
> Facebook, etc.). Nonetheless, I think we should anticipate that the
> coming to power of Trump and MAGA Republicans is likely to spawn many
> such proposals in the coming years. Given this, I think it would be
> useful to at least consider what impact such regulations would have on
> social media systems that rely on ActivityPub and ActivityStreams.
>
> My guess is that the position of the "ActivityPub" community would be
> that, in a federated system composed of a multiplicity of independent
> interoperating servers -- each having a potentially different moderation
> approach -- it is not necessary for each individual server to offer
> algorithmic choice. Users are free to seek out and use a server whose
> default "algorithm" addresses their needs. However, this position might
> not be accepted as sufficient if the individual server, not the federated
> system as a whole, is considered to be the regulated "platform." The
> obvious question then becomes: what would need to be done to enable a
> federated service, even if very small on its own, to provide compliant
> algorithmic choice?
>
> Some will undoubtedly argue that the Bluesky support for a variety of
> "labeling" services, when combined with user-selected client algorithms
> capable of filtering, etc. based on labels, might be sufficient to
> provide the necessary algorithmic choice. If such an approach is
> sufficient, then one must ask whether supporting it would require
> modification to the ActivityPub protocols and schemas. (i.e., Would we
> need to add a "content label" item that allows the annotation or labeling
> of posts, replies, collections, etc.? One possible shape is sketched
> below, after the quoted text.) Would a labeling service be able to rely
> on the existing server-to-server protocol? Or would something tailored
> more to the specific requirements of labeling be necessary? Of course, it
> would be useful to ask if there is a less cumbersome or otherwise
> superior method for providing algorithmic choice. What do you think?
>
> While the text of the plan isn't yet available, the AG's press release
> does provide a sketch of what will eventually be published. See the list
> below or read the full release
> <https://ago.mo.gov/attorney-general-bailey-promulgates-regulation-securing-algorithmic-freedom-for-social-media-users/>:
>
> 1. "Users are provided with a choice screen upon account activation and
>    at least every 6 months thereafter that gives them the opportunity to
>    choose among competing content moderators;
> 2. No algorithm selection is chosen by default;
> 3. The choice screen does not favor the social media platform's content
>    moderator over those of third parties;
> 4. When a user chooses a content moderator other than that provided by
>    the social media platform, the social media platform permits that
>    content moderator interoperable access to data on the platform in
>    order to moderate what content is viewed by the user; and
> 5. Except as expressly authorized below, the social media company does
>    not moderate, censor, or suppress content on the social media platform
>    such that a user is unable to view that content if their chosen
>    content moderator would otherwise permit viewing that content."
>
> bob wyman
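On Bob's "content label" question, here is a hedged sketch of one shape
such an item could take, as a third-party JSON-LD extension on an
ActivityStreams object. Nothing like contentLabel exists in the
ActivityStreams vocabulary today; the property names and the example.org
context are invented purely for illustration.

    # Hypothetical only: ActivityStreams defines no label vocabulary today.
    # "cl:contentLabel" and the example.org context are invented here.
    labeled_note = {
        "@context": [
            "https://www.w3.org/ns/activitystreams",
            {"cl": "https://example.org/ns/content-labels#"},
        ],
        "type": "Note",
        "id": "https://social.example/notes/1",
        "content": "An example post",
        # Labels asserted by third-party services the user may trust.
        "cl:contentLabel": [
            {
                "cl:value": "spam",
                "cl:labeler": "https://labeler.example/actor",
                "cl:published": "2025-01-18T10:00:00Z",
            }
        ],
    }

A labeling service could, in principle, distribute annotations like this
over the existing server-to-server protocol as ordinary activities
addressed to its subscribers; whether that is adequate, or whether labeling
needs something more tailored, is exactly Bob's open question.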
Received on Saturday, 18 January 2025 11:30:32 UTC