Re: representational harm

The article says of "Harms of representation":

    It gets tricky when it comes to systems that represent society but
    don’t allocate resources. These are representational harms. When
    systems reinforce the subordination of certain groups along the
    lines of identity like race, class, gender etc.  It is a long-term
    process that affects attitudes and beliefs. It is harder to
    formalize and track. It is a diffused depiction of humans and
    society. It is at the root of all of the other forms of allocative harm.

Representational harm is impossible to track without formalizing and 
recording metrics for the impact on the relevant stakeholder groups.  
Moreover, to be humanly comprehensible at "Big Data" scale, such metrics 
must be gathered and shared in an open, standard, machine-readable 
format, like StratML Part 2.
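
To illustrate the point about machine-readability, here is a minimal 
sketch (not an official StratML tool) of how metrics published in an 
open XML format could be gathered programmatically. The element names 
and namespace in the sample are illustrative assumptions, not the 
authoritative StratML Part 2 structure; consult the schema linked 
below for that.

```python
# Hypothetical sketch: extract metric names from a StratML-like plan.
# Element names ("PerformanceIndicator", "SeriesName") and the
# namespace URI are assumed for illustration only.
import xml.etree.ElementTree as ET

SAMPLE = """\
<PerformancePlanOrReport xmlns="urn:example:stratml">
  <Goal>
    <Name>Fairness</Name>
    <Objective>
      <Name>Track representational harm</Name>
      <PerformanceIndicator>
        <SeriesName>Stakeholder impact score</SeriesName>
      </PerformanceIndicator>
    </Objective>
  </Goal>
</PerformancePlanOrReport>
"""

def localname(tag):
    # Strip the namespace ("{uri}tag" -> "tag") so the code works
    # regardless of the namespace URI a given plan declares.
    return tag.rsplit("}", 1)[-1]

def list_metrics(xml_text):
    # Walk the tree and collect the text of every SeriesName child
    # of a PerformanceIndicator element.
    root = ET.fromstring(xml_text)
    metrics = []
    for elem in root.iter():
        if localname(elem.tag) == "PerformanceIndicator":
            for child in elem:
                if localname(child.tag) == "SeriesName":
                    metrics.append(child.text.strip())
    return metrics

print(list_metrics(SAMPLE))  # ['Stakeholder impact score']
```

Because the format is standard and machine-readable, the same few 
lines could aggregate metrics across any number of published plans.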

In any event, the objectives of Microsoft's FATE group are now among 
seven of MS's plans available in StratML format at 
https://stratml.us/drybridge/index.htm#MS or, more specifically, 
https://stratml.us/carmel/iso/FATEwStyle.xml

It would be good if the FATE folks could demonstrate leadership by 
example in reporting their plans and results in machine-readable format.

Hopefully, they are already collaborating with MS's Project Cortex 
group, but it would be good if they were to use a data structure like 
the stratml:Relationship element 
<https://stratml.us/references/oxygen/PerformancePlanOrReport20160216_xsd.htm#Relationship> 
to make the logical connections salient and to report progress. 
https://stratml.us/carmel/iso/MSPCwStyle.xml

BTW, as far as government and politics are concerned, I believe we can 
do better than dictatorial majoritarian "representation" -- which, by 
definition, discriminates against minorities and also presents false 
choices for many, if not most, purposes. 
https://www.linkedin.com/pulse/transforming-governance-reducing-cost-gofpau-owen-ambur/

Owen

On 3/26/2020 10:45 PM, Paola Di Maio wrote:
> I share this interesting article
> https://hub.packtpub.com/20-lessons-bias-machine-learning-systems-nips-2017/ 
>
>
> In particular, emphasis on 'representational harm' which I think 
> should be imperative
> we address in our work
>
> I ll enter this in Zotero
>
> PDM

Received on Friday, 27 March 2020 03:55:19 UTC