Re: How would "algorithmic choice" laws/regulations impact ActivityPub?

Dan Smith,
All,

Thank you. Brainstorming, to your points: there could be a toggle setting that lets administrators choose whether to enable any of these features. In theory, algorithms and their accompanying content (e.g., resources for configuring the algorithms, documentation for the algorithms and their settings) could be added and upgraded by administrators in the form of modular software plugins or extensions.
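
As a very rough sketch (all of the names below are hypothetical, not from any existing ActivityPub server), an administrator-facing registry of algorithm plugins, each with an enable/disable toggle, might look something like this:

    // Hypothetical sketch only: algorithms as modular, toggleable plugins.
    // None of these interfaces exist in any current server software.
    interface Post {
      id: string;
      published: Date;
      content: string;
    }

    interface TimelineAlgorithm {
      id: string;            // e.g., "chronological", "topic-cluster"
      name: string;          // human-readable label shown to end-users
      description: string;   // documentation shown alongside the choice
      rank(posts: Post[], settings: Record<string, unknown>): Post[];
    }

    class AlgorithmRegistry {
      private plugins = new Map<string, { algorithm: TimelineAlgorithm; enabled: boolean }>();

      // Administrators add or upgrade algorithms as plugins...
      register(algorithm: TimelineAlgorithm): void {
        this.plugins.set(algorithm.id, { algorithm, enabled: false });
      }

      // ...and a per-plugin toggle controls whether the feature is active.
      setEnabled(id: string, enabled: boolean): void {
        const entry = this.plugins.get(id);
        if (entry) entry.enabled = enabled;
      }

      // Only enabled algorithms would be offered to end-users.
      listForUsers(): TimelineAlgorithm[] {
        return Array.from(this.plugins.values())
          .filter((entry) => entry.enabled)
          .map((entry) => entry.algorithm);
      }
    }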

Algorithmic choice can equip and empower end-users. However, it also raises concerns familiar from personalization: cognitive biases, filter bubbles, polarization, radicalization, and so forth.

Yes, I agree; this is an interesting thread. Thanks to Bob Wyman.


Best regards,
Adam Sobieski

P.S.: If you haven’t already read it, here is an excellent essay by John H. Cochrane in The Digitalist Papers expressing opposition to the regulation of AI: https://www.digitalistpapers.com/essays/ai-society-and-democracy-just-relax . Perhaps some of his arguments there are also applicable to the regulation of social media?

________________________________
From: Micajah Poesy <opened.to@gmail.com>
Sent: Saturday, January 18, 2025 6:08 PM
To: Adam Sobieski <adamsobieski@hotmail.com>
Cc: Bob Wyman <bob@wyman.us>; Matthew Terenzio <mterenzio@gmail.com>; Greg Scallan <greg@flipboard.com>; Social Web Incubator Community Group <public-swicg@w3.org>
Subject: Re: How would "algorithmic choice" laws/regulations impact ActivityPub?


I'm just an innocent bystander on this list; I hardly ever post. But I've been interested in basically starting my own social platform for a while now, ever since Twitter was bought out. One thing that should be kept in mind is that when Twitter was purchased, the buyer got not only the present operations of the platform but, I guess, all of the databases going back to the beginning. That's very concerning.

On the topic of the algorithmic choice you're speaking about: the one thing that is very much a pleasure to me about Bluesky is that it's open source. I had heard about it a while back, and I wish I had looked into it more back then, but the political tendencies that are so glaringly apparent now weren't so obvious at the time. I think a lot of people are looking for a respite from that. The thing of it is, with it being open source, not only can you participate on their main platform, but from my understanding you can start your own version of Bluesky that wouldn't necessarily be named Bluesky. And it doesn't necessarily have to include the public; it could be a private version. I understand these types of issues are often discussed in the federated space. But I'm just saying: here is a publicly used platform that can also be used by other people to make whatever version they want.

And that's my main response about whether or not this can be controlled by the government. From what I gather, what they're saying is that every single person who publishes a social platform of whatever type, even for their own personal use, has to offer potential users varieties of choices. I don't know if that's going to fly. That's all I have to say. Thank you very much; this is a very interesting thread.
Dan Smith

On Sat, Jan 18, 2025, 3:22 PM Adam Sobieski <adamsobieski@hotmail.com> wrote:
Bob Wyman,
All,

Hello. In addition to empowering end-users to select between algorithms, there is also empowering them with respect to configuration and settings. One might want both to list and describe algorithms for end-users and to provide them with clear and intuitive means of configuring those algorithms.
Some algorithms have subcomponents which could be configured separately. For example, an algorithm might have both a filtering subcomponent (e.g., "WHERE") and an ordering subcomponent (e.g., "ORDER BY"). An algorithm's configuration and settings pages could then be structured accordingly, providing multiple pages, or multiple sections of a page, per configurable part or subcomponent.
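For instance (a purely illustrative sketch; these interfaces are not from any existing implementation), the two subcomponents could be configured by separate settings objects:

    // Illustrative only: an algorithm whose filtering ("WHERE") and ordering
    // ("ORDER BY") subcomponents are configured independently, so a settings
    // page could present one section (or page) per subcomponent.
    interface Post {
      id: string;
      published: Date;
      language: string;
      containsMedia: boolean;
    }

    interface FilterSettings {   // the "WHERE" subcomponent
      languages: string[];       // only show posts in these languages
      mediaOnly: boolean;        // only show posts with attachments
    }

    interface OrderSettings {    // the "ORDER BY" subcomponent
      key: "published" | "id";
      descending: boolean;
    }

    function applyAlgorithm(posts: Post[], filter: FilterSettings, order: OrderSettings): Post[] {
      const filtered = posts.filter(
        (p) => filter.languages.includes(p.language) && (!filter.mediaOnly || p.containsMedia)
      );
      const sorted = filtered.slice().sort((a, b) =>
        order.key === "published"
          ? a.published.getTime() - b.published.getTime()
          : a.id.localeCompare(b.id)
      );
      return order.descending ? sorted.reverse() : sorted;
    }
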
For some algorithms, end-users might not be able to immediately see and explore any differences resulting from their selections or configurations; some algorithms might need time and/or data to warm up. In any event, in addition to platform help and documentation, some users could create and share videos showing, comparing, contrasting, and otherwise discussing their experiences with different selections and configurations of algorithms on platforms.



Best regards,
Adam Sobieski
________________________________
From: Bob Wyman <bob@wyman.us>
Sent: Saturday, January 18, 2025 1:13 PM
To: Matthew Terenzio <mterenzio@gmail.com>
Cc: Greg Scallan <greg@flipboard.com>; Social Web Incubator Community Group <public-swicg@w3.org>
Subject: Re: How would "algorithmic choice" laws/regulations impact ActivityPub?

Matthew Terenzio wrote: "Bob wrote 'No algorithm is chosen by default' but the AG's wording is actually 'No selection is chosen by default'."

Thanks for catching this transcription error. I was writing personal notes in another document in which I had added the "algorithm" word to show why it wasn't useful in this context. I then cut and pasted from the wrong source. In fact, the selection of no algorithm is a selection of the null-algorithm and thus the selection of an algorithm. However, it seems to me that in such a system the null-algorithm should be allowed as a default even if no other algorithm is. Of course, I recognize all the problems with the null-algorithm. I also recognize that few users would select it if they knew the consequences. However, while I can see some justification for the regulation of systems that censor or manipulate the content of public discourse, I find it harder to see a justification for the regulation of systems that don't. It seems to me that the null-algorithm should be considered a special case and that there is possibly a "legitimate government interest"<https://en.wikipedia.org/wiki/Government_interest> only when something other than the null-algorithm is in use.

I am also very concerned about the AG's requiring that "platforms" present "screens" to users. This requirement might be used by some platforms to argue that they are legally barred from supporting independently developed clients since such clients might interfere with their ability to present "screens" to users. Some platforms might even make a similar argument supporting their decision to avoid federation with other systems or the inclusion of their content in various aggregations.

Who is responsible for presenting these selection screens? Is it the "server" or the "client?" Also, if a client supports multiple platforms, must a screen be presented for each platform or is a single screen, presented by the client, sufficient? This leads to the question: In a federated system, is each independent server considered to be a "platform," or is the federation as a whole considered a single platform? Given that there is no central authority in a federated system, who can be held responsible for the behavior of the federation as a whole, or even in part? There are many technical and practical issues that don't seem to have been addressed by the Missouri AG.

Note: When copying the AG's text, I also fixed their numbered list so that requirements were sequentially numbered rather than all shown as item "1."

bob wyman


On Sat, Jan 18, 2025 at 5:43 AM Matthew Terenzio <mterenzio@gmail.com> wrote:
> However, forcing default choices that might defeat the utility of the platform

Bob wrote "No algorithm is chosen by default" but the  wording used is actually "No selection is chosen by default" . I'm taking that to mean that the default is not "no algorithm" (and thereby everything/spam) but that a user must make a choice to start using the system, which could be the above but likely wouldn't be. It would be a choice which may or may not filter spam.

Now, while I've been somewhat of an advocate of user-selected algorithms since 2006, there have always been, and always will be, challenges with such a system. The open social web does provide us with more of an opportunity here, but at some level it merely shifts the responsibility to the algorithm instead of the instance. And in the same way that new users have the challenge of choosing an instance, they will now have the challenge of choosing an algorithm.

And when there are thousands of algorithms, how are they presented fairly? Just a random order? Surely a user will pick one of the first on the list rather than reading about 100 algorithms before using the service. So that is a UI issue and will probably lead to the degradation you mention but in a different way.

And if the order of algorithms isn't random, then it's the algorithm search that becomes the gatekeeper and we've just created another level of control.


On Sat, Jan 18, 2025 at 5:15 AM Greg Scallan <greg@flipboard.com> wrote:
Thank you for this summary. I’m a huge fan of giving the end user the choice, over the current (typical) ActivityPub servers' approach of giving the administrator of your instance the choice. Although you can choose to leave, it is not always easy to do so, and not everything always moves with you, as it can be at the whim of your administrator (but that is a separate issue, really).

Missouri seems to care about giving the user a choice about what content to see or not see. The notion of transparency seems very well aligned with privacy-minded folk. However, forcing default choices that might defeat the utility of the platform is a hard sell. Spammers exist and will leverage any successful platform in a significant way, and legislating that users should by default be forced to see that information will degrade everyone’s experience, rendering the service useless. We will need to see how exactly this law is written to see whether it holds water or not.

Moderation is only one aspect of what the user sees in the Bluesky example; feed generators are the second, and each feed generator can make a choice about what, how, and when you see something regardless of your moderation choices. So I'm not sure how this legislation would apply there, as those are run by a variety of organisations.

I’m not sure this really impacts ActivityPub as much as it impacts AP implementations. The Fediverse Auxiliary Service Provider Specifications from Mastodon are a great first example of what they are doing that could enable these kinds of choices. If there were labeling services (whether on AP or not) available for an instance to choose from using that spec (https://github.com/mastodon/fediverse_auxiliary_service_provider_specifications), then theoretically the client could allow users to choose which ones to use and how that affects their timeline.
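
Very roughly, and with the caveat that none of the names below come from that spec (they are made up for illustration), a client might let the user pick labeling services and then filter the timeline against the labels they return:

    // Illustrative sketch only; not from the Auxiliary Service Provider spec.
    interface Labeler {
      id: string;                                    // e.g., "https://labeler.example"
      labelsFor(postId: string): Promise<string[]>;  // labels this service applies to a post
    }

    // chosenLabelers and hiddenLabels would come from the user's client settings.
    async function filterTimeline(
      postIds: string[],
      chosenLabelers: Labeler[],
      hiddenLabels: Set<string>
    ): Promise<string[]> {
      const visible: string[] = [];
      for (const postId of postIds) {
        const labelLists = await Promise.all(chosenLabelers.map((l) => l.labelsFor(postId)));
        const labels = labelLists.flat();
        if (!labels.some((label) => hiddenLabels.has(label))) {
          visible.push(postId);
        }
      }
      return visible;
    }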

Greg

On Jan 17, 2025, at 10:52 pm, Bob Wyman <bob@wyman.us> wrote:

Yesterday, Missouri's Attorney General announced plans to issue regulations<https://ago.mo.gov/attorney-general-bailey-promulgates-regulation-securing-algorithmic-freedom-for-social-media-users/> that would require social media platforms to “offer algorithmic choice” to users. Clearly, it will take some time for this plan to be published, studied, challenged in court, etc. It is also quite possible that the regulations will be targeted to only the largest services (e.g., Twitter, Facebook). Nonetheless, I think we should anticipate that the coming to power of Trump and MAGA Republicans is likely to spawn many such proposals in the coming years. Given this, I think it would be useful to at least consider what impact such regulations would have on social media systems that rely on ActivityPub and ActivityStreams.

My guess is that the position of the "ActivityPub" community would be that, in a federated system composed of a multiplicity of independent interoperating servers -- each having a potentially different moderation approach -- it is not necessary for each individual server to offer algorithmic choice. Users are free to seek out and use a server whose default "algorithm" addresses their needs. However, this position might not be accepted as sufficient if the individual server, not the federated system as a whole, is considered to be the regulated "platform." The obvious question then becomes, what would need to be done to enable a federated service, even if very small on its own, to provide compliant algorithmic choice?

Some will undoubtedly argue that the BlueSky support for a variety of "labeling" services, when combined with user-selected client algorithms capable of filtering, etc. based on labels, might be sufficient to provide the necessary algorithmic choice. If such an approach is sufficient, then one must ask whether supporting it would require modification to the ActivityPub protocols and schemas (i.e., would we need to add a "content label" item that allows the annotation or labeling of posts, replies, collections, etc.?). Would a labeling service be able to rely on the existing server-to-server protocol, or would something tailored more to the specific requirements of labeling be necessary? Of course, it would be useful to ask whether there is a less cumbersome or otherwise superior method for providing algorithmic choice. What do you think?
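
Purely as a hypothetical sketch (neither a "ContentLabel" type nor the extension context below exists today; the names are invented only to make the question concrete), such an annotation on a post might look something like:

    // Hypothetical only: what a "content label" annotation on an
    // ActivityStreams Note might look like if such an extension were defined.
    const labeledNote = {
      "@context": [
        "https://www.w3.org/ns/activitystreams",
        { "ContentLabel": "https://example.org/ns#ContentLabel" }  // invented extension term
      ],
      "type": "Note",
      "id": "https://social.example/notes/123",
      "content": "An example post",
      "tag": [
        {
          "type": "ContentLabel",                          // the hypothetical label item
          "name": "graphic-content",                        // the label itself
          "attributedTo": "https://labeler.example/actor"   // the labeling service that applied it
        }
      ]
    };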

While the text of the plan isn't yet available, the AG's press release does provide a sketch of what will eventually be published. See the list below or read the full release<https://ago.mo.gov/attorney-general-bailey-promulgates-regulation-securing-algorithmic-freedom-for-social-media-users/>:

  1.  "Users are provided with a choice screen upon account activation and at least every 6 months thereafter that gives them the opportunity to choose among competing content moderators;
  2.  No algorithm selection is chosen by default;
  3.  The choice screen does not favor the social media platform’s content moderator over those of third parties;
  4.  When a user chooses a content moderator other than that provided by the social media platform, the social media platform permits that content moderator interoperable access to data on the platform in order to moderate what content is viewed by the user; and
  5.  Except as expressly authorized below, the social media company does not moderate, censor, or suppress content on the social media platform such that a user is unable to view that content if their chosen content moderator would otherwise permit viewing that content."

bob wyman

Received on Sunday, 19 January 2025 03:40:10 UTC