Re: Article 19 compliance of Credibility and Content Moderation systems

Adeel,
It seems to me that something like a decision to refuse carriage of RT
would qualify as an "operational or management policy," not necessarily
required by law, unless there were economic or other sanctions that
prevented dealing with Russian entities. Thus, one using the rubric
could identify the policy and, depending on the state of affairs, describe
it as either a restriction not required by law or one that was, in fact,
required by law.
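
A rubric entry capturing this case might be recorded roughly as follows.
This is only a sketch; the field names are my own illustrative assumptions,
not part of the proposed rubric text:

    # Hypothetical record of the RT-carriage decision under the rubric.
    # Field names are illustrative assumptions, not any agreed schema.
    rt_policy = {
        "policy": "refuse carriage of RT",
        "kind": "operational or management policy",
        "required_by_law": False,  # would be True if sanctions compelled it
    }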

In some cases, a service might provide RT carriage but allow its users to
set curation rules that would prevent them from seeing RT content in
response to searches or other content-viewing operations. One could also
imagine the creation of one or more "Credibility Services" that would rank
many speakers according to some service-specific procedure (for which a
rubric should also be developed). A user might then direct a platform to
either enhance or diminish content visibility based on one or more
Credibility Services' ratings. These would be cases of user-directed
curation rather than platform-imposed censorship, as sketched below.
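
To make the censorship/curation distinction concrete, here is a minimal
sketch of user-directed curation driven by a Credibility Service's ratings.
Every name here (Item, CredibilityService, the 0.0-1.0 rating scale,
user_weight) is an illustrative assumption, not a description of any
existing system:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Item:
        source: str       # e.g. "RT"
        headline: str
        base_rank: float  # the platform's default relevance score

    # A Credibility Service is modeled as a function from a source name
    # to a rating in [0.0, 1.0], per its service-specific procedure.
    CredibilityService = Callable[[str], float]

    def user_curated_rank(item: Item,
                          service: CredibilityService,
                          user_weight: float) -> float:
        # user_weight > 0 boosts highly rated sources and demotes
        # low-rated ones; user_weight == 0 reproduces the platform's
        # default ordering.
        rating = service(item.source)
        return item.base_rank * (1.0 + user_weight * (rating - 0.5))

    toy_service: CredibilityService = lambda source: {"RT": 0.2}.get(source, 0.6)
    item = Item(source="RT", headline="toy example", base_rank=1.0)
    print(user_curated_rank(item, toy_service, user_weight=2.0))  # 0.4: diminished
    print(user_curated_rank(item, toy_service, user_weight=0.0))  # 1.0: unchanged

Because the user, not the platform, chooses both the service and the
weight, the same mechanism can enhance, diminish, or leave untouched any
source's visibility.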

Given that this situation appears to be readily handled by the proposed
rubric extensions, I'm curious: were you objecting to such policies, or
were you simply trying to provide an example of the rubric's utility?

bob wyman

On Thu, Apr 7, 2022 at 6:45 PM Adeel <aahmad1811@gmail.com> wrote:

> Hello,
>
> What would you restrict access to if you are already working in a filtered
> echo chamber? There won't be much diversity of opinion left to seek
> credibility on.
>
> E.g., Ukraine-Russia coverage: the West bans media outlets that don't agree
> with its narrative, such as RT, so now you only have a filtered view of the
> conflict, as every other mainstream media outlet is covering and amplifying
> it the same way, with the same narrative. There is not much left to
> restrict if your choice is already restricted.
>
> This is a perfect example of a freedom of speech and opinion that is a
> facade, irrespective of frontiers.
>
> Thanks,
>
> Adeel
>
> On Thu, 7 Apr 2022 at 21:14, Bob Wyman <bob@wyman.us> wrote:
>
>> As discussed during yesterday's meeting, I believe that the rubric for
>> evaluating credibility and content moderation systems should include an
>> evaluation of compliance with the requirements of, at least, the Universal
>> Declaration of Human Rights
>> <https://www.un.org/en/about-us/universal-declaration-of-human-rights>'
>> Article 19, which reads:
>>
>>> *Everyone has the right to freedom of opinion and expression; this right
>>> includes freedom to hold opinions without interference and to seek, receive
>>> and impart information and ideas through any media and regardless of
>>> frontiers.*
>>
>>
>> Clearly, credibility or content moderation systems may interfere with the
>> explicitly enumerated Article 19 rights of:
>>
>>    - Freedom to "impart information and ideas" (i.e. Freedom of speech),
>>    and
>>    - Freedom to "seek [and] receive" information and ideas.
>>
>> Ideally, a system would not abridge one's ability to exercise these
>> rights, or any other rights. Nonetheless, it must be recognized that
>> applicable law or regulation may require some abridgement. For instance, in
>> various jurisdictions, certain kinds of expression are illegal (e.g.
>> child pornography or "disrespect of the monarch"). Additionally, the
>> ability to freely seek and receive information is restricted by various
>> laws or principles, including those intended to preserve privacy (e.g. the
>> UDHR's Article 12 or the EU's GDPR <https://gdpr-info.eu/>) or national
>> security (e.g. national defense secrets). Providers within any particular
>> market are generally unable to avoid the requirements of law; however, they
>> may vary dramatically in the degree to which their abridgement of rights
>> exceeds the minimum required by law. Thus, the rubric metric that should
>> apply to providers is a measure of the degree to which any abridgement of
>> Article 19 rights exceeds the minimum abridgement that may be required by
>> applicable law, the UDHR itself, or applicable international law.
>>
>> I think it important to understand that allowing users to restrict their
>> own receipt of, or exposure to, information is not limited by Article 19
>> (e.g. I may choose to filter out data or messages that contain what I
>> consider to be obscene words, or that are provided by persons I consider
>> not credible). Thus, it is important to distinguish between restrictions
>> imposed, non-optionally, by systems (aka: censorship) and those which are
>> the result of users' choices (aka: curation).
>>
>> Given the considerations above, I suggest that the evaluation rubric
>> include questions which are at least similar to those below:
>>
>> ==============
>> 1) Does the system's implementation, or the system's operational and
>> management policies:
>>
>>    - Restrict users' ability to impart information and ideas: [Yes/No]
>>       - What, if any, restrictions are required by law?
>>       - What, if any, restrictions are greater than the minimum required
>>       by law?
>>    - Restrict users' ability to seek and receive information and ideas:
>>    [Yes/No]
>>       - What, if any, restrictions are required by law?
>>       - What, if any, restrictions are greater than the minimum required
>>       by law?
>>
>> 2) What, if any, tools, mechanisms, etc. are provided to allow users to
>> limit their own ability to seek or receive information or ideas? (including
>> mechanisms to filter, prioritize, etc.)
>> ==============
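>>
>> To make evaluations comparable across providers, answers to the questions
>> above could be captured as structured data. The following sketch is only
>> illustrative; the field names are my own assumptions, not an agreed schema:
>>
>>     # Sketch of a machine-readable Article 19 rubric entry.
>>     # Field names are illustrative assumptions only.
>>     article19_rubric = {
>>         "imparting": {                   # question 1, first bullet
>>             "restricted": True,          # [Yes/No]
>>             "required_by_law": [],       # restrictions required by law
>>             "beyond_legal_minimum": [],  # restrictions exceeding that minimum
>>         },
>>         "seeking_and_receiving": {       # question 1, second bullet
>>             "restricted": True,
>>             "required_by_law": [],
>>             "beyond_legal_minimum": [],
>>         },
>>         "user_curation_tools": [],       # question 2: filters, prioritization, etc.
>>     }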
>>
>> I would appreciate your thoughts and comments.
>>
>> bob wyman
>>
>>

Received on Friday, 8 April 2022 00:04:29 UTC