- From: Harshvardhan Pandit <me@harshp.com>
- Date: Tue, 24 Oct 2023 17:21:04 +0100
- To: Delaram Golpayegani <delaram.golpayegani@adaptcentre.ie>
- Cc: "public-dpvcg@w3.org" <public-dpvcg@w3.org>
(repeating my reply)

I think Optionality is a different concept - it refers to what users can do about processing in terms of the control they have over it. In the description, the use of "active" refers to whether subjects have active control, rather than their role being that of an active participant within the process - which is what you want to model, I think, and is how I used it in the email.

I also don't like the word "optionality" - it leads to wrong conclusions about choices and the ability of subjects to have control, and is probably derived from "having options". That list doesn't even have opt-in as an example. It also conflates with necessity, which is about whether the activity is necessary to achieve an objective.

If we want to model what "control" subjects have, it can be called e.g. "Subject Controls".

- Harsh

On 24/10/2023 15:39, Delaram Golpayegani wrote:
> FYI, OECD's AI classification framework
> <https://www.oecd-library.org/science-and-technology/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en>
> has active/passive categories for AI users and impacted stakeholders:
>
> ''
> "Optionality" or "dependence" refers to the degree of choice that users
> or impacted stakeholders have on whether or not they are subject to the
> effects of an AI system, whether their involvement is active or passive.
> Optionality can be understood as the extent to which users can opt out
> of "the effects" or "the influence" of the AI system, e.g. by switching
> to another AI system, and the societal repercussions of doing so, e.g.
> for access to healthcare or financial services. This is also referred to
> as "switchability" (AI Ethics Impact Group, 2020[8]). It is important to
> consider the human aspect or the degree to which they are involved in
> developing AI systems and models, of the operation and outputs of the
> system, and if humans are "in", "on" and "out-of-the-loop".
>
> The following are generally considered to be distinct modes of
> optionality in a given AI system:
> - Users cannot opt out of the AI system's output.
> - Users can opt out of the AI system's output.
> - Users can challenge or correct the AI system's output.
> - Users can reverse the AI system's output ex-post.
> ''

--
---
Harshvardhan J. Pandit, Ph.D
Assistant Professor
ADAPT Centre, Dublin City University
https://harshp.com/
Received on Tuesday, 24 October 2023 16:21:12 UTC