On Censorship (3rd-party or consumer-choice?)

After today's meeting, I took a fresh look at the group's 2018 paper,
Technological Approaches to Improving Credibility Assessment on the Web
<https://www.w3.org/2018/10/credibility-tech/>, and would like to point out
something I think should be included in any future update to that document
as well as in the evaluation rubric currently under development.

The Cred-Tech paper's Section 3.1
<https://www.w3.org/2018/10/credibility-tech/#h.al8ri09fk7mb> provides a
discussion of censorship, but the closest it comes to providing a
definition of "censorship" is this:

> 'The regulation of content, called “censorship” in some contexts, is
> controversial.'


In other words, censorship is no more than the "regulation of content." I
think this definition is deficient in that it doesn't address who is doing
the regulation, or by what authority that regulation is being done. It is
one thing for some 3rd party to regulate what I read; it is a completely
different matter for me to regulate my own reading. The first of these is
often censorship; the latter might be more accurately called something like
curation, or simply selective reading.

The ACLU's definition of censorship addresses my concern to some degree. They
say: <https://www.aclu.org/other/what-censorship>

> "Censorship, the suppression of words, images, or ideas that are
> "offensive," happens whenever some people succeed in *imposing their
> personal political or moral values on others*."


Given this definition, a mere "regulation of content" that results in the
"suppression of words, images, or ideas" need not be censorship. Censorship
requires not only regulation but also the imposition of that regulation on
others. Suppression is censorship only when it is imposed, or forced, upon
consumers; if the consumer chooses the regulation, it isn't censorship.

This is, I think, an important point, especially since it should be clear
to all that the mechanisms of automated censorship could be used by
individuals for their own purposes as well as by 3rd parties to regulate
others. For instance, if a social media platform decided that it would
refuse to deliver any messages containing certain "offensive" words, it
could easily combine a text scanner and a list of forbidden words to impose
that censorship. However, if the same platform implemented an identical
mechanism but allowed each individual consumer to define their own list of
forbidden words (or not), then there wouldn't be "censorship." Rather, we'd
have "filtering" or "curation" or something else. The key determinant of
censorship is the 3rd-party imposition on others. The mechanisms used to
censor may be identical to those provided to empower consumers to filter,
curate, or select.
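
To make the point concrete, here is a minimal sketch (in Python, with
hypothetical names) of such a mechanism. The scanning code is identical in
both regimes; the only difference is who supplies the forbidden-word list:

    # One filtering mechanism, parameterized by a forbidden-word list.
    def should_suppress(message, forbidden_words):
        """Return True if the message contains any forbidden word."""
        tokens = {word.strip('.,!?;:"').lower() for word in message.split()}
        return bool(tokens & set(forbidden_words))

    # Regime 1: platform-imposed. One list, chosen by the platform, for everyone.
    platform_list = ["offensiveword"]                            # hypothetical

    # Regime 2: consumer-choice. Each consumer supplies, or declines, a list.
    consumer_lists = {"alice": ["offensiveword"], "bob": []}     # hypothetical

    def deliver_to(consumer, message):
        words = consumer_lists.get(consumer, [])   # swap in platform_list for Regime 1
        return not should_suppress(message, words)

Whether this is "censorship" or "filtering" turns entirely on the
provenance of the forbidden_words parameter, not on the code itself.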

This question of who decides what is suppressed, or promoted, is one that I
think should be carefully addressed in building the evaluation rubric that
is the current focus of the group. For each mechanism for suppression or
for credibility determination, we should ask providers to specify whether
the parameterization of that mechanism is controlled by some intermediary
between the source and the consumer (i.e., the platform), or whether it is
under the control of the consumer. A third option might be a
platform-controlled regime that allows only a binary choice of opting in or
out of some platform default. (I.e., if you use the mechanism, you are
stuck with the platform's choices, but you can choose to disable the
mechanism entirely.)
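
In rubric terms, this could be recorded as a single enumerated value per
mechanism. A rough sketch follows (the field names are hypothetical, not a
proposal for the rubric's actual schema):

    from enum import Enum

    class ParameterControl(Enum):
        PLATFORM = "platform"            # intermediary sets parameters for everyone
        CONSUMER = "consumer"            # each consumer sets their own parameters
        OPT_OUT_ONLY = "opt-out-only"    # platform defaults; consumer may only enable/disable

    # Example entry for one hypothetical mechanism:
    mechanism_entry = {
        "mechanism": "forbidden-word filter",
        "parameter_control": ParameterControl.CONSUMER,
    }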

My personal opinion is that we should all prefer methods and platforms that
maximize user control over any content suppression or promotion. A word
that is offensive to you may be completely acceptable to me. What signals
credibility to you may lead me to reject a source's credibility. The only
way to respect our rightfully held individual perspectives is to allow each
of us some degree of control over what we see.

Of course, I recognize that most people aren't going to be willing to spend
a great deal of time carefully tending and grooming forbidden word lists,
credibility metrics, etc. So, even if platforms empower consumer-choice,
we're likely to see most people simply accept or decline a platform's
proposed defaults. Nonetheless, there is value in giving consumers these
choices, if for no other reason than that they could then confirm their
understanding of a default's impact by temporarily disabling it.

We should also recognize that if platforms do begin to empower
consumer-choice, we may see the emergence of a new sub-industry of services
that provide innovative filtering, competing with platforms' default
offerings by leveraging the platform-provided mechanisms for
consumer-choice. Rather than building my own list of "credible"
journalists, I might subscribe to a trust.txt list provided by one of
several services, each of which has a different view of what it means to
be credible. Or, I might subscribe to some AI-based service that does
detailed analysis of messages I have read in order to determine what I'm
likely to want to see in the future...
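
For illustration, such a subscription could be as simple as fetching the
service's published list and feeding it into a consumer-controlled
mechanism like the one sketched above. (The URL and the one-entry-per-line
format here are hypothetical; the actual trust.txt format is defined by its
own specification.)

    import urllib.request

    def fetch_trusted_sources(list_url):
        """Fetch a subscribed list of trusted sources, one entry per line."""
        with urllib.request.urlopen(list_url) as response:
            text = response.read().decode("utf-8")
        return {line.strip() for line in text.splitlines()
                if line.strip() and not line.startswith("#")}

    # The consumer chooses which service's view of credibility to adopt.
    trusted = fetch_trusted_sources("https://example.com/credible-journalists.txt")

    def is_credible(source):
        return source in trusted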

I believe that this distinction between consumer-choice filtering and
platform-imposed censorship is one that can help improve the current
discussion and that should be reflected in the evaluation rubric. Does this
make sense? Your comments will be welcomed.

bob wyman
