Re: Slides on technical implications for EU AI Act

Hi,

 

Slide 6 “Users should be able to determine what resources were used for training AI systems • Licensing IPR and copyright”

 

The CAWG taxonomy is a proposal designed to work in conjunction with C2PA and ODRL, e.g.:

 

{
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Offer",
    "uid": "http://example.com/ex_policy",
    "target": "http://example.com/my_asset",
    "permission": [{
        "assigner": "http://example.com/org:xyz",
        "description": "unrestricted actions",
        "action": ["cawg:ai_training", "cawg:ai_inference"]
    }, {
        "assigner": "http://example.com/org:xyz",
        "duty": "http://example.com/example_mining_duty",  // <-- can establish an obligation/duty/payment as a consequence of exercising the permission
        "description": "data mining requires compensation",
        "action": "cawg:data_mining",
        "constraint": [{                                   // <-- can also refine/constrain aspects of the permission
            "leftOperand": "spatial",
            "operator": "isPartOf",
            "rightOperand": { "@value": "EU", "@type": "iso3166" }
        }]
    }],
    "prohibition": [{
        "assigner": "http://example.com/org:xyz",
        "description": "not allowed",
        "action": "cawg:ai_generative_training"
    }]
}
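(The // annotations above are explanatory only; strict JSON has no comment syntax, so they would need to be stripped before parsing.) For illustration, here is a minimal sketch of how a consumer might check such a policy before acting on an asset. The file name ex_policy.json and the action_allowed helper are hypothetical, and a real ODRL evaluator would also have to honour the duty and constraint clauses rather than just matching actions:

import json

def action_allowed(policy: dict, action: str) -> bool:
    """Evaluate an ODRL/CAWG-style policy: prohibitions win over permissions."""
    def matches(rule: dict) -> bool:
        actions = rule["action"]
        return action in (actions if isinstance(actions, list) else [actions])

    # Explicit prohibitions take precedence over any permission
    if any(matches(r) for r in policy.get("prohibition", [])):
        return False
    # Otherwise look for a matching permission; default-deny when silent
    return any(matches(r) for r in policy.get("permission", []))

with open("ex_policy.json") as f:
    policy = json.load(f)

print(action_allowed(policy, "cawg:ai_training"))             # True
print(action_allowed(policy, "cawg:ai_generative_training"))  # False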

 

There are also two emerging gaps in protocols and understanding around AI:

 
Knowledge security: the typical example is:

- Alice is the sales manager.
- Bob, Charles and Dave report to Alice.
- Documents from all team members are fed into an AI system (some mixture of an LLM, RAG or another 'grounding' element, and chatbots).
- Alice can see everyone's documents and access everyone's "knowledge" (e.g. she can ask a chatbot to create a consolidated sales chart from all the documents).
- The subordinates can only see their own documents and access only their own knowledge.
 

Question: how do you narrow access so that each person's AI assistant only draws on the knowledge that person is entitled to see?
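One plausible pattern (a sketch, not an established protocol: the Chunk structure, its owner/readers fields and the visible_chunks helper are all hypothetical) is to attach an access list to every chunk at ingestion time and filter retrieval results against the requesting user's identity before anything reaches the model:

from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    owner: str                                       # author of the source document
    readers: set[str] = field(default_factory=set)   # identities allowed to read it

def visible_chunks(corpus: list[Chunk], user: str) -> list[Chunk]:
    # Filter BEFORE retrieval/ranking, so restricted text never reaches the LLM
    return [c for c in corpus if user == c.owner or user in c.readers]

corpus = [
    Chunk("Bob's Q1 pipeline ...", owner="bob", readers={"alice"}),
    Chunk("Charles's forecast ...", owner="charles", readers={"alice"}),
    Chunk("Dave's accounts ...", owner="dave", readers={"alice"}),
]

assert len(visible_chunks(corpus, "alice")) == 3   # the manager sees everything
assert len(visible_chunks(corpus, "bob")) == 1     # Bob sees only his own material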

 
Access Control:

- Your local agent shouldn't have access to all of your credentials.
- The agent should have its own non-human identity and a formal way to be an approved "delegate".
- Both your local agent and any remote agent involved should have "just-in-time" access to credentials, valid only for a predetermined period or number of execution attempts (a sketch follows below).
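A toy sketch of the "just-in-time" idea (the issue_delegation and check helpers are hypothetical; a real deployment would use a signed format such as a JWT or a verifiable credential, bound to the agent's non-human identity):

import secrets
import time

def issue_delegation(agent_id: str, scope: list[str],
                     ttl_seconds: int = 300, max_uses: int = 1) -> dict:
    """Mint a short-lived, narrowly scoped credential for a non-human agent."""
    return {
        "token": secrets.token_urlsafe(32),
        "subject": agent_id,                    # the agent's own identity, not yours
        "scope": scope,                         # only what this one task needs
        "expires_at": time.time() + ttl_seconds,
        "uses_left": max_uses,                  # limited attempts to execute
    }

def check(cred: dict, action: str) -> bool:
    """Reject expired, exhausted, or out-of-scope requests."""
    if time.time() > cred["expires_at"] or cred["uses_left"] <= 0:
        return False
    if action not in cred["scope"]:
        return False
    cred["uses_left"] -= 1
    return True

cred = issue_delegation("agent:alice-pa-01", scope=["calendar:book"])
assert check(cred, "calendar:book")        # allowed once...
assert not check(cred, "calendar:book")    # ...then exhausted
assert not check(cred, "payments:send")    # never in scope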
___________________________________

Joshua Cornejo

marketdata

smart authorisation management for the AI-era

 

From: Melvin Carvalho <melvincarvalho@gmail.com>
Date: Thursday, 13 March 2025 at 05:58
To: public-solid <public-solid@w3.org>
Subject: Fwd: Slides on technical implications for EU AI Act
Resent-From: <public-solid@w3.org>
Resent-Date: Thu, 13 Mar 2025 05:57:22 +0000

 

FYI: quite an interesting post on public-cogai which mentions "Personal Agents", and also some references to Solid.

 

---------- Forwarded message ---------
From: Dave Raggett <dsr@w3.org>
Date: Tue, 11 Mar 2025 at 10:40
Subject: Slides on technical implications for EU AI Act
To: public-cogai <public-cogai@w3.org>

 

I recently gave a talk commenting on technical implications for the EU AI Act.

 

https://www.w3.org/2025/eu-ai-act-raggett.pdf

 

I cover AI agents and ecosystems of services on slide 8, anticipating the arrival of personal agents that retain personal information across many sessions, so that agents can help you with services based upon what the agent knows about you. This could be implemented using a combination of retrieval-augmented generation and personal databases, e.g. as envisaged by Solid.

 

See: https://www.w3.org/community/solid/ and https://solidproject.org

 

Personal agents will interact with other agents to fulfil your requests, e.g. arranging a vacation or booking a doctor’s appointment.  This involves ecosystems of specialist services, along with the means for personal agents to discover such services, the role of APIs for accessing them, and even the means to make payments on your behalf.

 

There are lots of open questions such as:

 

- Where is the personal data held?
- How much is shared with 3rd parties?
- How to ensure open and fair ecosystems?
 

My talk doesn't summarise the AI Act, as a colleague covered that. In short, the AI Act frames AI applications as prohibited, high-risk or low-risk, setting out requirements for the latter two categories. See: https://artificialintelligenceact.eu/high-level-summary/

 

Your thoughts on this are welcomed!

 

Dave Raggett <dsr@w3.org>

 

 

 
