Re: Slides on technical implications for EU AI Act

Hi Davi,

Thanks for the feedback. 

I envision personal agents that are responsible for gathering and managing personal data, and for accessing fair and open ecosystems of services (a web of agents). The latter brings challenges such as discovery, trust, negotiation and payments, along with the need for lightweight vocabularies similar in spirit to schema.org.
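
To make the discovery piece concrete, here is a minimal sketch (all names, fields and URLs are hypothetical, loosely in the spirit of schema.org) of a service description a personal agent might fetch and filter:

# Hypothetical, schema.org-inspired service description; a personal
# agent could fetch records like this from a registry during discovery.
service_description = {
    "type": "TravelBookingService",
    "name": "Example Travel Agent",
    "endpoint": "https://travel.example.com/api",
    "acceptedPayment": ["card", "bank-transfer"],
    "trustCredentials": ["https://registry.example.org/attestations/123"],
}

def is_acceptable(desc: dict, required_type: str) -> bool:
    # Filter discovered services by type and by whether they present
    # at least one verifiable trust credential.
    return desc["type"] == required_type and bool(desc["trustCredentials"])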

In principle, personal data could be held at the edge and synchronised across the user’s devices using end-to-end encryption, along with support for running distilled LLMs at the edge in a trusted execution environment. Fine-tuning would be needed to match personal agents to their uses. Other (remote) agents would provide complementary skills, including managing data spaces.
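
As a rough illustration of the synchronisation idea, assuming the Python `cryptography` package (key management and transport are deliberately elided):

from cryptography.fernet import Fernet

# The symmetric key lives only on the user's devices; the sync
# service only ever sees ciphertext.
key = Fernet.generate_key()
device_a, device_b = Fernet(key), Fernet(key)

record = b'{"preference": "aisle seat"}'
ciphertext = device_a.encrypt(record)          # what the sync service relays
assert device_b.decrypt(ciphertext) == record  # readable only with the key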

Another point of interest is the relationship to advertising. If people get used to using personal agents to buy goods and services, what is the impact on advertising, given that your personal agent will search for products and services on your behalf? Consumers will still be exposed to influencers and marketing offers, and personal agents risk being steered to offer users products from particular brands.

> On 13 Mar 2025, at 10:06, Davi Ottenheimer <davi@inrupt.com> wrote:
> 
> Dave,
> 
> Great deck! I enjoyed your take on the EU AI Act. I've been rolling out a TEE (Trusted Execution Environment) architecture with Enterprise Solid Server (ESS) that addresses many of the privacy and transparency challenges you highlighted here.
> 
> Your point about AI agents retaining sensitive information particularly resonated with me, as something clearly scoped and within reach of a near-term solution. The TEE approach creates hardware-enforced isolation between the AI model and the host system, while independent attestation verifies the integrity of the execution environment. Pairing this with ESS for personal data management (directly aligning with your "combination of AI and SOLID" concept) shows promising results for maintaining confidentiality without sacrificing verifiability.
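> 
> As a rough illustration only (real TEE attestation relies on vendor-signed evidence such as SGX or SEV quotes; this simplified sketch uses a SHA-256 hash as a stand-in for the enclave measurement):
> 
> import hashlib, hmac
> 
> def measure(binary: bytes) -> str:
>     return hashlib.sha256(binary).hexdigest()
> 
> # Allow-list of known-good measurements (values illustrative).
> EXPECTED = {measure(b"model-v1-binary")}
> 
> def release_data(reported: str) -> bool:
>     # Release personal data to the enclave only if its reported
>     # measurement matches a known-good value (constant-time compare).
>     return any(hmac.compare_digest(reported, good) for good in EXPECTED)
> 
> assert release_data(measure(b"model-v1-binary"))
> assert not release_data(measure(b"tampered-binary"))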
> 
> The TEE is only one step and doesn't solve every challenge (integrity errors, model bias, etc.), so I am also developing technical controls around each concern raised in your presentation (defence in depth). I believe this represents where the security industry has been heading and must deliver now: enforceable technical measures that meet policy-based needs.
> 
> I'd be happy to provide more technical details on how the TEE architecture specifically addresses the GDPR/AI Act requirements if you're interested. This is precisely the kind of practical implementation that could support the W3C initiatives you mentioned.
> 
> Regards,
> Davi
> 
> ---
> 
> 
> On Thu, Mar 13, 2025, 09:49 Joshua Cornejo <josh@marketdata.md <mailto:josh@marketdata.md>> wrote:
>> Hi,
>> 
>>  
>> 
>> Slide 6 “Users should be able to determine what resources were used for training AI systems • Licensing IPR and copyright”
>> 
>>  
>> 
>> The cawg taxonomy <https://cawg.io/training-and-data-mining/1.1-draft/> is a proposal intended to work in conjunction with C2PA and ODRL, e.g.:
>> 
>>  
>> 
>> {
>>     "@context": "http://www.w3.org/ns/odrl.jsonld",
>>     "@type": "Offer",
>>     "uid": "http://example.com/ex_policy",
>>     "target": "http://example.com/my_asset",
>>     "permission": [{
>>         "assigner": "http://example.com/org:xyz",
>>         "description": "unrestricted actions",
>>         "action": ["cawg:ai_training", "cawg:ai_inference"]
>>     }, {
>>         "assigner": "http://example.com/org:xyz",
>>         "duty": "http://example.com/example_mining_duty",  // <-- can establish an obligation/duty/payment as a consequence of exercising the permission
>>         "description": "data mining requires compensation",
>>         "action": "cawg:data_mining",
>>         "constraint": [{                                   // <-- can also refine/constrain aspects of the permission
>>             "leftOperand": "spatial",
>>             "operator": "isPartOf",
>>             "rightOperand": { "@value": "EU", "@type": "iso3166" }
>>         }]
>>     }],
>>     "prohibition": [{
>>         "assigner": "http://example.com/org:xyz",
>>         "description": "not allowed",
>>         "action": "cawg:ai_generative_training"
>>     }]
>> }
>> 
>>  
>> 
>> There are also two emerging gaps in protocols/understanding around AI:
>> 
>>  
>> 
>> Knowledge security: the typical example is:
>> - Alice is the sales manager
>> - Bob, Charles and Dave report to Alice
>> - Documents from all members are “fed” into an AI system (a mixture of LLM / RAG or other ‘grounding’ element / chat bots)
>> - Alice can see everyone’s documents and access everyone’s “knowledge” (e.g. ask a chatbot to create a consolidated sales chart from all the documents)
>> - The subordinates can only see their own documents and access only their own knowledge
>> 
>> Question: how do you narrow the access?
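>> 
>> One possible answer (a minimal sketch, with hypothetical names and data): enforce per-user access control in the retrieval step, before anything reaches the model:
>> 
>> DOCS = [
>>     {"id": 1, "owner": "bob",     "text": "Bob's Q1 pipeline"},
>>     {"id": 2, "owner": "charles", "text": "Charles's Q1 pipeline"},
>>     {"id": 3, "owner": "dave",    "text": "Dave's Q1 pipeline"},
>> ]
>> READERS = {"alice": {"bob", "charles", "dave"},  # the manager sees all
>>            "bob": {"bob"}, "charles": {"charles"}, "dave": {"dave"}}
>> 
>> def retrieve(user: str, query: str):
>>     # Filter by ACL first; rank the survivors against the query
>>     # as usual (ranking omitted here).
>>     allowed = READERS.get(user, set())
>>     return [d for d in DOCS if d["owner"] in allowed]
>> 
>> assert len(retrieve("alice", "consolidated sales chart")) == 3
>> assert len(retrieve("bob", "consolidated sales chart")) == 1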
>> 
>>  
>> 
>> Access Control:
>> - Your local agent shouldn’t have access to all of your credentials
>> - The agent should have its own non-human identity and a formal way to be an approved “delegate”
>> - Both your local agent and a remote agent (if involved) should have “just-in-time” access to credentials (for a predetermined period or number of execution attempts)
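>> 
>> A rough sketch of the “just-in-time” idea (all names illustrative): the agent holds only a short-lived, scoped, HMAC-signed token naming it as delegate, never the underlying credential:
>> 
>> import hmac, hashlib, json, time, base64
>> 
>> SECRET = b"issuer-secret"  # held by the credential issuer, not the agent
>> 
>> def issue_token(agent_id: str, scope: str, ttl_seconds: int) -> str:
>>     claims = {"sub": agent_id, "scope": scope,
>>               "exp": time.time() + ttl_seconds}
>>     body = base64.urlsafe_b64encode(json.dumps(claims).encode())
>>     sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
>>     return body.decode() + "." + sig
>> 
>> def verify_token(token: str, required_scope: str) -> bool:
>>     body, sig = token.rsplit(".", 1)
>>     expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
>>     if not hmac.compare_digest(sig, expected):
>>         return False
>>     claims = json.loads(base64.urlsafe_b64decode(body))
>>     return claims["scope"] == required_scope and time.time() < claims["exp"]
>> 
>> token = issue_token("agent:alice-local", "calendar:book", ttl_seconds=60)
>> assert verify_token(token, "calendar:book")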
>> 
>> ___________________________________
>> 
>> Joshua Cornejo
>> 
>> marketdata <https://www.marketdata.md/>
>> smart authorisation management for the AI-era
>> 
>>  
>> 
>> From: Melvin Carvalho <melvincarvalho@gmail.com <mailto:melvincarvalho@gmail.com>>
>> Date: Thursday, 13 March 2025 at 05:58
>> To: public-solid <public-solid@w3.org <mailto:public-solid@w3.org>>
>> Subject: Fwd: Slides on technical implications for EU AI Act
>> Resent-From: <public-solid@w3.org <mailto:public-solid@w3.org>>
>> Resent-Date: Thu, 13 Mar 2025 05:57:22 +0000
>> 
>>  
>> 
>> FYI: quite an interesting post on the cogai list which mentions "Personal Agents", and also some references to Solid.
>> 
>>  
>> 
>> ---------- Forwarded message ---------
>> Od: Dave Raggett <dsr@w3.org <mailto:dsr@w3.org>>
>> Date: út 11. 3. 2025 v 10:40
>> Subject: Slides on technical implications for EU AI Act
>> To: public-cogai <public-cogai@w3.org <mailto:public-cogai@w3.org>>
>> 
>>  
>> 
>> I recently gave a talk commenting on technical implications for the EU AI Act.
>> 
>>  
>> 
>> https://www.w3.org/2025/eu-ai-act-raggett.pdf
>> 
>>  
>> 
>> I cover AI agents and ecosystems of services on slide 8, anticipating the arrival of personal agents that retain personal information across many sessions, so that agents can help you with services based upon what the agent knows about you. This could be implemented using a combination of retrieval-augmented generation and personal databases, e.g. as envisaged by SOLID.
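>> 
>> As a toy sketch of that combination (data and names hypothetical; retrieval here is naive keyword overlap, where a real system would use embeddings over a personal data store):
>> 
>> PERSONAL_STORE = [
>>     "Prefers aisle seats on flights",
>>     "Doctor: Dr Jones, City Practice",
>>     "Vacation budget: 2000 EUR",
>> ]
>> 
>> def retrieve(query: str, k: int = 2):
>>     words = set(query.lower().split())
>>     return sorted(PERSONAL_STORE,
>>                   key=lambda r: -len(words & set(r.lower().split())))[:k]
>> 
>> def build_prompt(query: str) -> str:
>>     # Prepend the most relevant personal records to the request.
>>     context = "\n".join(retrieve(query))
>>     return f"Known about the user:\n{context}\n\nRequest: {query}"
>> 
>> print(build_prompt("book a vacation flight"))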
>> 
>>  
>> 
>> See: https://www.w3.org/community/solid/ and https://solidproject.org/
>>  
>> 
>> Personal agents will interact with other agents to fulfil your requests, e.g. arranging a vacation or booking a doctor’s appointment.  This involves ecosystems of specialist services, along with the means for personal agents to discover such services, the role of APIs for accessing them, and even the means to make payments on your behalf.
>> 
>>  
>> 
>> There are lots of open questions such as:
>> 
>>  
>> 
>> - Where is the personal data held?
>> - How much is shared with 3rd parties?
>> - How to ensure open and fair ecosystems?
>>  
>> 
>> My talk doesn’t summarise the AI Act, as a colleague covered that. In short, the AI Act frames AI applications in terms of prohibited applications, high-risk applications and low-risk applications, setting out requirements for the latter two categories. See: https://artificialintelligenceact.eu/high-level-summary/
>> 
>>  
>> 
>> Your thoughts on this are welcomed!
>> 
>>  
>> 
>> Dave Raggett <dsr@w3.org <mailto:dsr@w3.org>>
>> 
> 

Dave Raggett <dsr@w3.org>

Received on Thursday, 13 March 2025 10:44:52 UTC