- From: carl mattocks <carlmattocks@gmail.com>
- Date: Fri, 24 Jul 2020 12:45:55 -0400
- To: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAHtonu=S=h_Jpeep1nGcfa1eSrB6RcEut+k-f0r0GubnJXN0jw@mail.gmail.com>
Chris, et al

Added a new role 'Human-in-the-Loop Controls' .. the intent is to deploy it for risk mitigation, to evaluate algorithms, and to address bias, etc.

Added note: Established human-in-the-loop workflows in acknowledgment of "Checkbox guidelines must not be the only “instruments” of AI ethics. A transition is required from a more deontologically oriented, action-restricting ethic based on universal abidance of principles and rules, to a situation-sensitive ethical approach based on virtues and personality dispositions, knowledge expansions, responsible autonomy and freedom of action" https://link.springer.com/content/pdf/10.1007/s11023-020-09517-8.pdf

cheers
carl

It was a pleasure to clarify

On Fri, Jul 24, 2020 at 10:09 AM Chris Fox <chris@chriscfox.com> wrote:

> Paola wrote
>
>> Having taken a look at the plan so far, it looks like some of the goals
>> are not clearly stated, for example - risk - should the goal be risk
>> aversion?
>
> If you look at the document you will see the goal is fully defined as:
>
> Risks
> *Goal Statement:* Identify and mitigate risks and known threats
>
> Kind Regards,
> Chris
Received on Friday, 24 July 2020 16:46:44 UTC