Article 19 compliance of Credibility and Content Moderation systems

As discussed during yesterday's meeting, I believe that the rubric for
evaluating credibility and content moderation systems should include an
evaluation of compliance with the requirements of, at least, Article 19
of the Universal Declaration of Human Rights
<https://www.un.org/en/about-us/universal-declaration-of-human-rights>,
which reads:

> *Everyone has the right to freedom of opinion and expression; this right
> includes freedom to hold opinions without interference and to seek, receive
> and impart information and ideas through any media and regardless of
> frontiers.*

Clearly, credibility or content moderation systems may interfere with the
explicitly enumerated Article 19 rights of:

   - Freedom to "impart information and ideas" (i.e. Freedom of speech), and
   - Freedom to "seek [and] receive" information and ideas.

Ideally, a system would not abridge one's ability to exercise these rights,
or any other rights. Nonetheless, it must be recognized that applicable law
or regulation may require some abridgement. For instance, in various
jurisdictions, certain kinds of expression are illegal (e.g. child sexual
abuse material or "disrespect of the monarch"). Additionally, the ability
to freely seek and receive information is restricted by various laws or
principles, including those intended to preserve privacy (e.g. by the
UDHR's Article 12 or the EU's GDPR <https://gdpr-info.eu/>) or national
security (e.g. national defense secrets). Providers within any particular
market are generally unable to avoid the requirements of law; however,
they may vary dramatically in the degree to which their abridgement of
rights exceeds the minimum required by law. Thus, the rubric metric that
should apply to providers is a measure of the degree to which any
abridgement of Article 19 rights exceeds the minimum abridgement required
by applicable law, the UDHR itself, or applicable international law.

I think it important to understand that allowing users to restrict their
own receipt of, or exposure to, information is not limited by Article 19.
(For example, I may choose to filter out messages that contain words I
consider obscene, or that are provided by persons I consider not
credible.) Thus, it is important to distinguish between restrictions
imposed, non-optionally, by systems (i.e. censorship) and those which
result from users' choices (i.e. curation).
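
To make the censorship/curation distinction concrete, below is a minimal
sketch of user-chosen curation, in which every filter is opted into by the
user rather than imposed by the system. The function and field names are
hypothetical placeholders of my own:

```python
def curate(messages, blocked_words=None, blocked_senders=None):
    """Return only the messages the user has chosen to receive.

    blocked_words:   words the user considers obscene (optional).
    blocked_senders: senders the user considers not credible (optional).
    """
    blocked_words = set(blocked_words or [])
    blocked_senders = set(blocked_senders or [])
    return [
        m for m in messages
        if m["sender"] not in blocked_senders
        and not blocked_words & set(m["text"].lower().split())
    ]


messages = [
    {"sender": "alice", "text": "A thoughtful analysis"},
    {"sender": "spammer", "text": "Buy now"},
]
# The user, not the system, opts in to filtering a sender they distrust.
print(curate(messages, blocked_senders={"spammer"}))
```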

Given the considerations above, I suggest that the evaluation rubric
include questions which are at least similar to those below:

==============
1) Does the system's implementation, or its operational and management
policies:

   - Restrict users' ability to impart information and ideas: [Yes/No]
      - What, if any, restrictions are required by law?
      - What, if any, restrictions are greater than the minimum required by
      law?
   - Restrict users' ability to seek and receive information and ideas:
   [Yes/No]
      - What, if any, restrictions are required by law?
      - What, if any, restrictions are greater than the minimum required by
      law?

2) What, if any, tools, mechanisms, etc. are provided to allow users to
limit their own ability to seek or receive information or ideas? (including
mechanisms to filter, prioritize, etc.)
==============
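
For what it's worth, the answers to the questions above could be recorded
as a simple structured rubric. The following is a hypothetical sketch; all
type and field names are my own invention, not a settled schema:

```python
from dataclasses import dataclass, field


@dataclass
class RightEvaluation:
    """Answers for one Article 19 right (impart, or seek/receive)."""
    restricted: bool                                   # Q1: [Yes/No]
    required_by_law: list[str] = field(default_factory=list)
    beyond_legal_minimum: list[str] = field(default_factory=list)


@dataclass
class Article19Rubric:
    impart: RightEvaluation            # freedom to impart information/ideas
    seek_and_receive: RightEvaluation  # freedom to seek and receive them
    user_curation_tools: list[str] = field(default_factory=list)  # Q2


# Hypothetical evaluation of a provider:
evaluation = Article19Rubric(
    impart=RightEvaluation(
        restricted=True,
        required_by_law=["removal of illegal expression"],
        beyond_legal_minimum=[],
    ),
    seek_and_receive=RightEvaluation(restricted=False),
    user_curation_tools=["keyword filters", "sender blocklists"],
)
```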

I would appreciate your thoughts and comments.

bob wyman
