- From: carl mattocks <carlmattocks@gmail.com>
- Date: Mon, 10 Aug 2020 17:11:17 -0400
- To: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAHtonumSsqOoNjtuWLbibJrTsCGig+nm_ByMdiy9+kXDwbi_bw@mail.gmail.com>
Below is an extract from a 136-page document focused on explaining processes, services and decisions delivered or assisted by AI. Unsurprisingly, it has a strong emphasis on contextual factors, which others have used to 'explain' bias, e.g. a measurement *bias* due to the influence of the study's *context* on the interpretation of the study's results.

A sample extract:

What are the contextual factors?

At a glance

Five contextual factors have an effect on the purpose an individual wishes to use an explanation for, and on how you should deliver your explanation:

- domain you work in;
- impact on the individual;
- data used;
- urgency of the decision; and
- audience it is being presented to.

In more detail

- Introduction
- Domain factor
- Impact factor
- Data factor
- Urgency factor
- Audience factor

Introduction

When constructing an explanation for an individual, there are several factors about the context in which an AI-assisted decision is made. These have an effect on the type of explanation which people will find useful and the purposes they wish to use it for.

This co-badged guidance by the ICO and The Alan Turing Institute aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.

This is the website:
https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence/

This is the whole document:
https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf

Enjoy,
carl

It was a pleasure to clarify
Received on Monday, 10 August 2020 21:12:06 UTC