Censorship vs Curation (Who controls what you see? Let's Empower the user!)

I am very concerned that the "Mastodon" model for the Fediverse relies too
much on third-party administrators' filtering and blocking of message flows
between post senders and potential receivers. While I recognize that this
has proved useful, and is even claimed by many to be a distinctly
beneficial feature of the Mastodon culture of social interaction, I believe
that we should be developing systems that rely less on instance level
filtering or blocking and more on empowering individuals to make informed,
effective, and personal choices for the curation of what they see. The
Mastodon model of enabling each of many instance operators to individually
determine what is blocked is certainly better than forcing everyone to
submit to a single operator's decisions and value judgements (i.e. Twitter,
etc.). However, I think we can do even better. We can, and should, empower
each individual user.

Empowering individual choice and curation will certainly become more and
more necessary as the Fediverse grows and as it includes a greater number
of instances serving large populations. This is true since, as the
population served by a single instance grows, it must inevitably become
more difficult, if not impossible, to ensure that any single set of
moderation decisions properly reflects the values and sensitivities of all
of an instance's users. -- A small group may be able to agree on
commonly acceptable moderation principles, but it is unlikely that a large
group will be able to negotiate a set of principles equally accepted by
all. As an instance's population grows, more and more of those served by
that instance will feel that their own personal perspectives are not being
properly respected or supported -- that moderation is either more or less
aggressive than it should be. But it isn't only a question of respecting
individual perspectives; there may also be a question of respecting human
rights.

In thinking about these issues, I am heavily influenced by the UN's Universal
Declaration of Human Rights
<https://www.un.org/en/about-us/universal-declaration-of-human-rights>.
Some of the most relevant rights are those found in Article 19
<https://www.un.org/en/about-us/universal-declaration-of-human-rights#:~:text=Article%2019,regardless%20of%20frontiers.>:

> Everyone has the right to freedom of opinion and expression; this right
> includes freedom to hold opinions without interference and to *seek,
> receive* and impart information and ideas through any media and
> regardless of frontiers.


Unlike the First Amendment of the US Constitution, the UDHR's Article 19
establishes not just a right of speech, but a right to seek and receive
information and ideas without interference. In essence, it establishes a
right to hear or read, in addition to a right to speak. It is this right,
unfamiliar to many, that may be compromised in some cases of third-party
moderation (particularly on large instances). (Note: Article 19 does not
compel anyone to listen. It only protects the right to think, to speak,
and to have one's speech found by those who wish to find it.)

On large instances, moderation which does not universally reflect the
perspectives or desires of the entire user population inevitably
constitutes "censorship" of what can be read by at least some of that
population, even if it is approved of by the majority.

Note: Here I use the word "censorship" as defined by the ACLU's page "What
is Censorship?"
<https://www.aclu.org/other/what-censorship#:~:text=Censorship%2C%20the%20suppression%20of%20words%2C%20images%2C%20or%20ideas%20that%20are%20%22offensive%2C%22%20happens%20whenever%20some%20people%20succeed%20in%20imposing%20their%20personal%20political%20or%20moral%20values%20on%20others.>:

> Censorship, the suppression of words, images, or ideas that are
> "offensive," happens *whenever some people succeed in imposing their
> personal political or moral values on others.*


When political and moral values *are* shared, the suppression of the
objectionable is not censorship; rather, I think it more appropriate to
view such suppression as shared-curation. But, when values aren't shared,
moderation almost inevitably constitutes censorship for at least some. (If
one accepts the ACLU definition above, this is not a pejorative statement,
but simply a statement of fact.) Given this, we should recognize that the
implementations of, or the tools used to perform, either censorship or
curation may be identical. What is censorship for one is curation for
another. The distinction between the two is not inherent to the action
taken, but rather rooted in whether that action reflects the values of the
potential reader or the values of one or more persons other than the
reader.

It seems inevitable that at least some censorship must be tolerated if only
because, in various jurisdictions, some speech is deemed illegal and may
create legal liability for instance operators. Given this, it is probably
both necessary and desirable to empower instance administrators to remove
(i.e. censor) at least illegal speech -- even without the express consent
of their users. (Note: The USA's "Section 230" liability exemptions do not
exist in all other jurisdictions and may not even continue to exist forever
unmodified in the USA. We should be developing systems that work
internationally and systems that are likely to be independent of changes to
local laws -- systems that can work well with, or without, Section 230.)

But, while increasing the size of instances will usually lead to some
censorship of legal speech, efforts to avoid that censorship will usually
force at least some users to endure ever-growing quantities of
objectionable or irrelevant content. An attempt to avoid censorship,
without replacing it with curation tools, might make the use of large
instances intolerable for many; that is clearly an outcome we should seek
to avoid.

It seems to me that the "solution" to the problem of necessarily restrained
instance-wide moderation on large instances is to narrow the scope of
moderation to populations that actually share common perspectives. In this
way, what would otherwise be censorship of legal speech becomes curation.
At the extreme, this would mean increasing the ability of individuals to
select or define their own personal methods and algorithms for curating
content.

Some indication of the value of personal control over filtering and sorting
of social media posts can be found in Matt Hodges' Mastodon Digest
(see the GitHub
project <https://github.com/hodgesmr/mastodon_digest>). This tool allows
one to prepare a filtered and sorted digest of Mastodon posts which
respects the user's normal Mastodon filtering options while also supporting
parameters such as the length of time included in the digest, the post
scoring algorithm, and the score-threshold for filtering. While Hodges
supports four basic scoring algorithms today, I assume that many more
interesting algorithms could be defined. Even with only four basic
algorithms, Hodges' work demonstrates that user-controlled filtering and
sorting is not only practical but can provide great value.
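
To make that idea concrete, here is a minimal, hypothetical sketch (in
Python) of the kind of user-controlled scoring and thresholding such a
digest performs. It is not Hodges' actual code; the post fields, the
scoring formula, and the default parameters are assumptions chosen only to
illustrate how a reader might tune the time window, the algorithm, and the
threshold.

    # A minimal, hypothetical sketch of user-controlled digest scoring.
    # NOT Hodges' code; fields, formula, and defaults are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        boosts: int
        favourites: int
        age_hours: float

    def engagement_score(post: Post) -> float:
        # One of many possible algorithms: raw engagement, decayed by age.
        return (post.boosts + post.favourites) / (1.0 + post.age_hours)

    def build_digest(posts, hours=24.0, threshold=2.0):
        # Keep only recent posts whose score clears the reader's chosen
        # threshold, then present the highest-scoring posts first.
        recent = [p for p in posts if p.age_hours <= hours]
        kept = [p for p in recent if engagement_score(p) >= threshold]
        return sorted(kept, key=engagement_score, reverse=True)

Swapping in a different engagement_score function, or different values for
hours and threshold, is exactly the kind of personal choice I would like to
see made easy.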

Today, Hodges' tool only considers attributes embedded in the individual
posts. But, I imagine that it could be extended to consider things such as
"credibility" signals defined by the W3C Credible Web Community Group
<https://www.w3.org/community/credibility/>, or credentials defined by the W3C
Credentials Community Group <https://www.w3.org/community/credentials/>.
(e.g. if I'm a chemist, I might like to boost the score of posts from
certified chemists. Or, I might like to suppress posts from authors whose
credibility is questioned because they appear on lists published by a
fact-checking group I trust.) If ActivityPub were to include support for
the kind of "labeling" discussed for BlueSky, such labels could also be
used to feed personal algorithms.
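
As a purely illustrative sketch (not an existing API), a reader-controlled
scoring step could consult lists of authors derived from credentials or
labels the reader chooses to trust. The list names, addresses, and
multipliers below are invented for illustration only.

    # Hypothetical reader-chosen signal lists; the identifiers and weights
    # are invented, not drawn from any real credential issuer or fact checker.
    TRUSTED_CHEMISTS = {"alice@chem.example"}      # holders of a credential I accept
    QUESTIONED_AUTHORS = {"mallory@spam.example"}  # a list I subscribe to and trust

    def personal_score(author: str, base_score: float) -> float:
        if author in QUESTIONED_AUTHORS:
            return 0.0                  # suppress posts from questioned authors
        if author in TRUSTED_CHEMISTS:
            return base_score * 2.0     # boost posts from certified chemists
        return base_score

The important point is not the particular weights, but that the reader, not
the instance, decides which lists and signals to honor.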

My feeling is that we've become overly focused on third-party filtering and
sorting, not because it is inherently good, but rather because that model
most easily facilitates monetization and because platforms like Facebook,
Twitter, etc. have seen no great value in providing users with fine-grained
control over their feeds. As a result, most of us simply haven't had the
opportunity to experience, research, or develop the potentially large and
interesting realm of user-controlled filtering and sorting methods,
algorithms, etc. Given the increased use of, and interest in,
non-commercial social media, we now have an opportunity to explore what
we've been prevented from doing in the past.

bob wyman
