- From: Mark Lizar <mark@openconsent.com>
- Date: Thu, 13 Feb 2025 14:15:17 +0000
- To: "public-dpvcg@w3.org" <public-dpvcg@w3.org>
- Message-ID: <71E5B140-291E-433E-90C5-BEC04784128D@openconsent.com>
Hi, here is the use case I put forward; there are some useful terms toward the bottom.

4.8 Data characteristics
4.8.1 General
This field describes the data characteristics that are defined in 4.8.2 to 4.8.7.

4.8.2 Source
Origin of the data processed by the AI system, e.g. customers, instruments, IoT, web, surveys, commercial activity, simulations, or other sources.

4.8.3 Variety
Types of data processed by the AI system, e.g. structured/unstructured text, images, voices, gene sequences, numbers, or composites such as time series and graph structures. This field can also briefly discuss formats, logical models, timescales, and semantics.

4.8.4 Velocity
The rate at which data in the AI system is created, stored, analysed, or visualized; this can be in real time.

4.8.5 Variability
Changes in data rate, format/structure, semantics, and/or quality.

4.8.6 Quality
Completeness and accuracy of the data with respect to both semantic content and syntax (such as the presence of missing fields or incorrect values).

4.8.7 Protected attributes
An attribute across which the groups it separates are required to be treated equally, for example gender, race, religion, or another legally regulated attribute.

4.9 Key performance indicators (KPIs)
This field describes the KPIs for evaluating the performance or usefulness of the AI system.

4.10 Features of use case
4.10.1 General
This field describes the features and AI characteristics of the use case.

4.10.2 Task(s)
The main task of the use case. A pull-down list includes recognition, natural language processing, knowledge processing and discovery, inference, planning, prediction, optimization, interactivity, recommendation, and others.

4.10.3 Level of automation
The level of automation of the AI systems used in this use case. AI systems can be compared based on their degree of automation and whether they are subject to external control. Autonomy is at one end of a spectrum and a fully human-controlled system at the other, with degrees of heteronomy in between.
The level of automation includes the following options:
— full automation: the system can modify its operating domain or its goals without external intervention, control, or oversight.
— high automation: the system can perform its entire mission without external intervention.
— conditional automation: the system performs parts of its mission without external intervention.
— partial automation: sustained and specific performance by a system, with an external agent ready to take over when necessary.
— assistance: some sub-functions of the system are fully automated while the system remains under the control of an external agent.
— no automation: the system assists an operator.
See ISO/IEC DIS 22989, 5.12 [6] for more details on the levels of automation.

4.10.4 Method(s)
AI method(s), model(s), or framework(s) used in development.

4.10.5 Platform
The platform (including the hardware system) used in development and deployment.

4.10.6 Topology
Topology of the deployment network architecture.

4.11 Threats and vulnerabilities
This field describes threats and vulnerabilities relevant to the use case, such as unwanted bias, incorrect use of the AI system, security threats, challenges to accountability, and privacy threats (hidden patterns).

4.12 Challenges and issues
Descriptions of the challenges and issues of the use case.

4.13 Trustworthiness considerations
4.13.1 General
AI system trustworthiness can be considered from several perspectives. This field is used to describe how the use case addresses trustworthiness elements including bias mitigation, ethical and societal concerns, explainability, controllability, predictability, transparency, verification, robustness, reliability, and resilience.

4.13.2 Bias mitigation
ISO/IEC TR 24027:2021 defines bias as a systematic difference in the treatment of certain objects, people, or groups in comparison to others.
In this part of the trustworthiness field, the use case can describe how biases such as human cognitive bias, confirmation bias, data bias, and statistical bias are detected and mitigated in the AI system. The use case can also discuss how the organization has approached its bias goals and challenges. See ISO/IEC TR 24027:2021 for further information.

4.13.3 Ethical and societal concerns
In this part of the trustworthiness field, the use case can describe how societal and ethical concerns related to the AI system are understood, identified, controlled, and mitigated. Current or future measures to address potential ethical and societal risks can also be described, along with protected attributes.

Societal concerns might be a factor when an organization is choosing or recommending an AI technology. Taking context, scope, nature, and risks into consideration can mitigate undesirable societal outcomes. In the absence of such considerations, the technology itself could perform flawlessly from a technical perspective but still have undesirable social or ethical impacts.

AI ethics is one important aspect of societal concerns that addresses the ethical issues arising from the use of AI systems. AI ethics are being considered in various countries and organizations in the form of principles, guidelines, or regulations that ethical AI can follow [8][9][10][11]. The treatment of AI ethics and ethical risks is based on the four ethical principles of trustworthy AI of the EU HLEG [11]:
— respect for human autonomy;

Mark Lizar
mark@openconsent.com

On 12 Feb 2025, at 03:18, Harshvardhan J.
Pandit (W3C Calendar) <noreply+calendar@w3.org> wrote:

DPVCG Meeting
13 February 2025, 13:30-14:30 Europe/Dublin
Recurring weekly on Thursday, from 16 January 2025 until 30 August 2025
Data Privacy Vocabularies and Controls Community Group <https://www.w3.org/groups/cg/dpvcg/calendar/>
This is the weekly meeting for the DPVCG.

Agenda
Previous minutes: https://w3id.org/dpv/meetings/meeting-2025-02-06
* v2.1 release candidate with open review until FEB-16; see https://lists.w3.org/Archives/Public/public-dpvcg/2025Feb/0002.html and https://github.com/w3c/dpv/issues/235 for hotfixes
* v2.2 roadmap; see https://github.com/w3c/dpv/milestone/7
* AI: add concepts representing AI training as purpose/processing + permitted/prohibited https://github.com/w3c/dpv/issues/82
* LOC extension: add subjective concepts (e.g. AtHome) https://github.com/w3c/dpv/issues/209
* LOC extension: add inversive concepts (e.g. NonEU) https://github.com/w3c/dpv/issues/208
* Modelling existing data taxonomies, e.g. IAB, Google, in the PD extension (scope discussion)
* (new issue) Add UnstructuredData and UncategorisedData concepts/properties https://github.com/w3c/dpv/issues/240
* AOB

Mailing list updates
* Workshop on ODRL and Beyond: Practical Applications and Challenges for Policy-Based Access and Usage Control (OPAL 2025) https://lists.w3.org/Archives/Public/public-dpvcg/2025Feb/0000.html
* Solid Symposium 2025 - Call for Contributions to the Privacy & Personal Data Management Session https://lists.w3.org/Archives/Public/public-dpvcg/2025Feb/0001.html
* User study: Automating Android privacy assessments using DPV https://lists.w3.org/Archives/Public/public-dpvcg/2025Feb/0006.html

<event.ics>
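For anyone who wants to capture the use-case template fields quoted above in code, here is a minimal sketch of the 4.8 data-characteristics record and the 4.10.3 automation levels. All class, field, and value names are my own illustrative choices, not taken from the ISO/IEC document.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class AutomationLevel(IntEnum):
    """4.10.3 levels of automation (cf. ISO/IEC 22989, 5.12); numeric order is illustrative."""
    NO_AUTOMATION = 0           # the system assists an operator
    ASSISTANCE = 1              # some sub-functions automated; external agent in control
    PARTIAL_AUTOMATION = 2      # sustained performance; agent ready to take over
    CONDITIONAL_AUTOMATION = 3  # performs parts of its mission without intervention
    HIGH_AUTOMATION = 4         # performs its entire mission without intervention
    FULL_AUTOMATION = 5         # can modify its operating domain or goals itself


@dataclass
class DataCharacteristics:
    """4.8 data characteristics of a use case (fields 4.8.2 to 4.8.7)."""
    source: str                 # 4.8.2 origin of the data, e.g. "IoT", "surveys"
    variety: list               # 4.8.3 data types, e.g. ["images", "time series"]
    velocity: str               # 4.8.4 rate of data flow, e.g. "real time"
    variability: str            # 4.8.5 changes in rate/format/semantics/quality
    quality: str                # 4.8.6 completeness and accuracy notes
    protected_attributes: list = field(default_factory=list)  # 4.8.7


# A hypothetical use-case record:
example = DataCharacteristics(
    source="IoT",
    variety=["time series"],
    velocity="real time",
    variability="stable schema, seasonal rate changes",
    quality="approx. 2% missing fields",
    protected_attributes=["gender", "race"],
)

# Systems can be compared by degree of automation, as 4.10.3 suggests:
assert AutomationLevel.HIGH_AUTOMATION > AutomationLevel.ASSISTANCE
```

Using an IntEnum makes the "spectrum" framing of 4.10.3 directly comparable in code, while the dataclass keeps the 4.8 fields in one record that can be serialized alongside the rest of a use-case description.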
Attachments
- application/vnd.openxmlformats-officedocument.wordprocessingml.document attachment: ISO-IEC JTC 1-SC 27-WG 5_ISO-IEC JTC 1-SC 42-WG 4_N1648_Generative AI use case from IN expert - Digital Twin Transparency-Draft-v0.9.docx
Received on Thursday, 13 February 2025 14:15:28 UTC