Re: Article 19 compliance of Credibility and Content Moderation systems

I think Bob’s point is that the tool under evaluation should not be making matters worse in that respect. If RT is not banned by law in a country, the tool shouldn’t indiscriminately prevent access to it without the user’s approval.

Bob’s point that allowing users to restrict their own receipt of information is not in conflict with Article 19 is important. Much depends on whether the tool’s functions are clear to the user. We wouldn’t want to give high marks to a tool that offers to tag content as questionable but quietly filters out legal (but disinforming) content, even if we happen to like the result of filtering out RT’s disinformation. In the end it comes down to user control and freedom, which should be covered in the UI section.
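
To make that concrete, here is a minimal sketch (in Python; the names and threshold are invented for illustration and describe no particular tool) of the behavior I’d want the rubric to reward: low-credibility content is labeled by default and hidden only when the user has explicitly opted in to filtering.

    # Illustrative only: label questionable content by default; hide it only
    # when the user has explicitly opted in to filtering.
    def present(item, credibility_score, user_opted_into_filtering, threshold=0.3):
        if credibility_score < threshold:
            if user_opted_into_filtering:
                return None                    # hidden by the user's own choice
            return {"item": item, "label": "questionable"}  # visible, but tagged
        return {"item": item}                  # credible content passes through
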
-Annette

> On Apr 7, 2022, at 3:45 PM, Adeel <aahmad1811@gmail.com> wrote:
> 
> Hello,
> 
> What would you restrict access to if you are already working in a filtered echo chamber? There won't be much diversity of opinion left to assess credibility against.
> 
> e.g. Ukraine-Russia coverage: the West bans media outlets that don't agree with its narrative, like RT, so now you only have a filtered view of the conflict, as every other mainstream media outlet covers and amplifies it in the same way with the same narrative. There is not much left to restrict if your choice is already restricted.
> 
> This is a perfect example of freedom of speech and opinion that is a facade, irrespective of frontiers.
> 
> Thanks,
> 
> Adeel
> 
> On Thu, 7 Apr 2022 at 21:14, Bob Wyman <bob@wyman.us> wrote:
> As discussed during yesterday's meeting, I believe that the rubric for evaluating credibility and content moderation systems should include an evaluation of compliance with the requirements of, at least, Article 19 of the Universal Declaration of Human Rights <https://www.un.org/en/about-us/universal-declaration-of-human-rights>, which reads:
> Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
> 
> Clearly, credibility or content moderation systems may interfere with the explicitly enumerated Article 19 rights of:
> - Freedom to "impart information and ideas" (i.e. freedom of speech), and
> - Freedom to "seek [and] receive" information and ideas.
> Ideally, a system would not abridge one's ability to exercise these rights, or any other rights. Nonetheless, it must be recognized that applicable law or regulation may require some abridgement. For instance, in various jurisdictions, certain kinds of expression are illegal (e.g. child pornography, "disrespect of the monarch", etc.). Additionally, the ability to freely seek and receive information is restricted by various laws or principles, including those intended to preserve privacy (e.g. the UDHR's Article 12 or the EU's GDPR <https://gdpr-info.eu/>) or national security (e.g. national defense secrets). Providers within any particular market are generally unable to avoid the requirements of law; however, they may vary dramatically in the degree to which their abridgement of rights exceeds the minimum required by law. Thus, the rubric metric that should apply to providers is a measure of the degree to which any abridgement of Article 19 rights exceeds the minimum abridgement required by applicable law, the UDHR itself, or applicable international law.
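> 
> To illustrate the metric (a hypothetical sketch; the restriction categories and names below are invented, not drawn from any actual rubric or jurisdiction), the score could simply be the set of restrictions a provider imposes beyond the legal minimum:
> 
>     # Hypothetical illustration: "excess abridgement" as the set of
>     # restrictions a provider imposes beyond those required by law.
>     legally_required = {"csam", "defense_secrets"}           # assumed jurisdiction
>     provider_restricts = {"csam", "defense_secrets",
>                           "state_media", "profanity"}        # assumed provider policy
> 
>     excess = provider_restricts - legally_required
>     print(sorted(excess))                                    # ['profanity', 'state_media']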
> 
> I think it important to understand that allowing users to restrict their own receipt of, or exposure to, information is not limited by Article 19. (For example, I may choose to filter out data or messages that contain what I consider to be obscene words, or that are provided by persons I consider to be not credible.) Thus, it is important to distinguish between restrictions imposed, non-optionally, by systems (i.e. censorship) and those which are the result of users' choices (i.e. curation).
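> 
> A rough sketch of that distinction in code (illustrative only; the function and parameter names are invented):
> 
>     # Illustrative: censorship is imposed by the system and is not optional;
>     # curation reflects a user's own, reversible filtering choices.
>     def visible_to_user(item, system_blocked, user_filters):
>         if item in system_blocked:                 # censorship
>             return False
>         if any(f(item) for f in user_filters):     # curation
>             return False
>         return True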
> 
> Given the considerations above, I suggest that the evaluation rubric include questions which are at least similar to those below:
> 
> ==============
> 1) Does the system's implementation, or the system's operational and management policies:
>    a) Restrict users' ability to impart information and ideas? [Yes/No]
>       - What, if any, restrictions are required by law?
>       - What, if any, restrictions are greater than the minimum required by law?
>    b) Restrict users' ability to seek and receive information and ideas? [Yes/No]
>       - What, if any, restrictions are required by law?
>       - What, if any, restrictions are greater than the minimum required by law?
> 2) What, if any, tools, mechanisms, etc. are provided to allow users to limit their own ability to seek or receive information or ideas (including mechanisms to filter, prioritize, etc.)?
> ==============
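> 
> For concreteness, the answers could be captured in a simple structure like the following (purely illustrative; the field names are invented here and are not part of any existing rubric):
> 
>     # Purely illustrative encoding of the questions above; keys are invented.
>     rubric = {
>         "imparting": {
>             "restricted": None,            # Yes/No
>             "required_by_law": [],         # restrictions mandated by law
>             "beyond_legal_minimum": [],    # restrictions exceeding that minimum
>         },
>         "seeking_receiving": {
>             "restricted": None,
>             "required_by_law": [],
>             "beyond_legal_minimum": [],
>         },
>         "user_controls": [],               # filters, prioritization, etc. chosen by users
>     }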
> 
> I would appreciate your thoughts and comments. 
> 
> bob wyman
> 

Received on Thursday, 7 April 2022 23:56:48 UTC