- From: Davi Ottenheimer <davi@inrupt.com>
- Date: Fri, 14 Mar 2025 08:38:19 +0100
- To: Leonard Rosenthol <lrosenth@adobe.com>
- Cc: Dave Raggett <dsr@w3.org>, Joshua Cornejo <josh@marketdata.md>, public-solid <public-solid@w3.org>
- Message-ID: <CAJ8jtzFzF2+9vvCUwJZJHc+v1iUsqg6a=VHvY66GJM9zh6skww@mail.gmail.com>
Hi Dave and Leonard,

Thanks for sharing your vision, Dave. The TEE+ESS architecture aligns well with what you've described.

Leonard, your point about Content Provenance solutions is particularly relevant. C2PA's tamper-evident packaging of metadata could indeed be a critical component for agent decision-making. Thanks for sharing that you're leading on JPEG Trust (ISO 21617-1); I'll look into how it brings ownership, authorship, and rights to Content Provenance, as it seems directly applicable to the agent ecosystem Dave describes.

The advertising insight is spot on. Agent-mediated purchasing will likely transform current models, possibly creating new technical vectors for influence that we should anticipate. Content provenance could help ensure transparency in this new paradigm.

Edge computing with E2E encryption, enhanced by TEEs, offers practical security while enabling personalization through fine-tuning. For your "web of agents" concept, a combination of attestation protocols and content provenance could address many of the trust and rights management challenges.

I'm moving on to real-world implementations of your data spaces vision, using AI with TEE+ESS approaches integrated with content provenance standards. There is a significant opportunity for ESS deployments that address the EU AI Act requirements while maintaining both privacy and utility.

Regards,
Davi

On Thu, Mar 13, 2025 at 2:56 PM Leonard Rosenthol <lrosenth@adobe.com> wrote:

> One thing that is happening with advertising is that they are also investing in Content Provenance solutions, to address not only the need for AI Transparency & Labelling but also to carry some of the current metadata that they work with in a tamper-evident package (e.g., C2PA). With that type of machine-readable information, the Agents discussed here would be able to make more intelligent decisions, even more so if work in the area of (micro-)licensing and (micro-)payments pans out.
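[Editor's note: a minimal sketch of the tamper-evidence property discussed here, using only a hash comparison; this is illustrative and not the C2PA API, which binds assertions to assets via signed manifests.]

```python
import hashlib
import hmac

def metadata_is_intact(metadata: bytes, claimed_digest: str) -> bool:
    """Recompute the digest over the metadata and compare it, in
    constant time, with the digest carried in the (signed) manifest."""
    actual = hashlib.sha256(metadata).hexdigest()
    return hmac.compare_digest(actual, claimed_digest)

# An agent would only act on metadata whose digest still matches:
meta = b'{"cawg:ai_training": "notAllowed"}'
digest = hashlib.sha256(meta).hexdigest()  # as recorded in the manifest

print(metadata_is_intact(meta, digest))              # True
print(metadata_is_intact(meta + b"tamper", digest))  # False
```

In a real C2PA flow the manifest itself is signed, so the digest cannot be silently replaced along with the metadata.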
> On that note, Joshua, the grammar sample you posted is actually from work that I am leading on the 2nd edition of JPEG Trust (ISO 21617-1), where we are bringing ownership, authorship, and rights to Content Provenance.
>
> Leonard
>
> *From:* Dave Raggett <dsr@w3.org>
> *Date:* Thursday, March 13, 2025 at 6:45 AM
> *To:* Davi Ottenheimer <davi@inrupt.com>
> *Cc:* Joshua Cornejo <josh@marketdata.md>, public-solid <public-solid@w3.org>
> *Subject:* Re: Slides on technical implications for EU AI Act
>
> Hi Davi,
>
> Thanks for the feedback.
>
> I envision personal agents that are responsible for gathering and managing personal data, and for accessing fair and open ecosystems of services (a web of agents). The latter brings challenges such as discovery, trust, negotiation, payments, and lightweight vocabularies similar in spirit to schema.org.
>
> In principle, personal data could be held at the edge and synchronised across the user's devices using end-to-end encryption, along with support for running distilled LLMs at the edge in a trusted execution environment. Fine-tuning would be needed for personal agents to match their uses. Other (remote) agents would provide complementary skills, including managing data spaces.
>
> Another point of interest is the relationship to advertising. If people get used to using personal agents to buy goods and services, what is the impact on advertising, given that your personal agent will search for products and services on your behalf? Consumers will be exposed to influencers and marketing offers. Personal agents risk being steered to offer users products from particular brands.
>
> On 13 Mar 2025, at 10:06, Davi Ottenheimer <davi@inrupt.com> wrote:
>
> Dave,
>
> Great deck! I enjoyed your take on the EU AI Act.
> I've been rolling out a TEE (Trusted Execution Environment) architecture with Enterprise Solid Server (ESS) that addresses many of the privacy-transparency challenges you highlighted here.
>
> Your thought about AI agents' retention of sensitive information particularly resonated with me as something clearly scoped and amenable to a near-term solution. The TEE approach creates hardware-enforced isolation between any AI model and the host system, while secure independent attestation verifies the integrity of the execution environment. Pairing this with ESS for personal data management (directly aligning with your "combination of AI and SOLID" concept) shows promising results for maintaining confidentiality without sacrificing verifiability.
>
> The TEE is only one step and doesn't solve every challenge (integrity error, model bias, etc.), so I am also developing technical controls around each concern raised in your presentation (defence in depth). I believe this represents where the security industry has been heading and must deliver now: enforceable technical measures to innovate on policy-based needs.
>
> I'd be happy to provide more technical details on how the TEE architecture specifically addresses the GDPR/AI Act requirements if you're interested. This is precisely the kind of practical implementation that could support the W3C initiatives you mentioned.
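[Editor's note: a minimal sketch of the attestation step mentioned above, with hypothetical names. Real TEEs (SGX, SEV-SNP, TDX) return vendor-signed attestation reports; signature verification is omitted here.]

```python
from dataclasses import dataclass

@dataclass
class AttestationReport:
    measurement: str  # hash of the code/model loaded in the enclave
    nonce: str        # freshness value supplied by the verifier

def accept_enclave(report: AttestationReport, expected_nonce: str,
                   trusted_measurements: set) -> bool:
    """Accept the enclave only if the report is fresh (matches our nonce)
    and the measured code is on the allow-list. Checking the vendor's
    signature over the report is omitted for brevity."""
    return (report.nonce == expected_nonce
            and report.measurement in trusted_measurements)

report = AttestationReport(measurement="m-distilled-llm-v1", nonce="n-42")
print(accept_enclave(report, "n-42", {"m-distilled-llm-v1"}))  # True
print(accept_enclave(report, "n-41", {"m-distilled-llm-v1"}))  # False: stale nonce
```

The nonce prevents replay of an old report; the measurement check is what ties trust to the specific model and runtime loaded in the enclave.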
> Regards,
> Davi
>
> ---
>
> On Thu, Mar 13, 2025, 09:49 Joshua Cornejo <josh@marketdata.md> wrote:
>
> Hi,
>
> *Slide 6*: "Users should be able to determine what resources were used for training AI systems • Licensing IPR and copyright"
>
> The cawg taxonomy <https://cawg.io/training-and-data-mining/1.1-draft/> is a proposal designed to work in conjunction with C2PA and ODRL, e.g.:
>
> {
>   "@context": "http://www.w3.org/ns/odrl.jsonld",
>   "@type": "Offer",
>   "uid": "http://example.com/ex_policy",
>   "target": "http://example.com/my_asset",
>   "permission": [{
>     "assigner": "http://example.com/org:xyz",
>     "description": "unrestricted actions",
>     "action": ["cawg:ai_training", "cawg:ai_inference"]
>   }, {
>     "assigner": "http://example.com/org:xyz",
>     // a duty can establish an obligation/payment as a consequence of exercising the permission
>     "duty": "http://example.com/example_mining_duty",
>     "description": "data mining requires compensation",
>     "action": "cawg:data_mining",
>     // constraints can refine/narrow aspects of the permission too
>     "constraint": [{
>       "leftOperand": "spatial",
>       "operator": "isPartOf",
>       "rightOperand": { "@value": "EU", "@type": "iso3166" }
>     }]
>   }],
>   "prohibition": [{
>     "assigner": "http://example.com/org:xyz",
>     "description": "not allowed",
>     "action": "cawg:ai_generative_training"
>   }]
> }
>
> There are also 2 emerging gaps in protocols/understanding with AI:
>
> 1. *Knowledge security*: the typical example is:
> · Alice is the sales manager
> · Bob, Charles and Dave report into Alice
> · Documents from all members are "fed" into an AI system (a mixture of LLM / RAG or other 'grounding' element / chat bots)
> · Alice can see everyone's documents and access everyone's "knowledge" (e.g. ask a chatbot to create a consolidated sales chart from all the documents)
> · The subordinates can only see their own documents and access only their own knowledge
>
> *Question*: how do you narrow the access?
>
> 2. *Access control*:
> · Your local agent shouldn't have access to all of your credentials
> · The agent should have its own non-human identity and a formal way to be an approved "delegate"
> · Both your local agent and a remote agent (if involved) should have "just-in-time" access to credentials (for a predetermined period or number of attempts to execute)
>
> <image001.png>
>
> ___________________________________
> *Joshua Cornejo*
> *marketdata* <https://www.marketdata.md/>
> smart authorisation management for the AI era
>
> *From:* Melvin Carvalho <melvincarvalho@gmail.com>
> *Date:* Thursday, 13 March 2025 at 05:58
> *To:* public-solid <public-solid@w3.org>
> *Subject:* Fwd: Slides on technical implications for EU AI Act
> *Resent-From:* <public-solid@w3.org>
> *Resent-Date:* Thu, 13 Mar 2025 05:57:22 +0000
>
> FYI: quite an interesting post in cogain, which mentions "Personal Agents" and also some references to Solid.
>
> ---------- Forwarded message ---------
> From: *Dave Raggett* <dsr@w3.org>
> Date: Tue, 11 Mar 2025 at 10:40
> Subject: Slides on technical implications for EU AI Act
> To: public-cogai <public-cogai@w3.org>
>
> I recently gave a talk commenting on the technical implications of the EU AI Act:
>
> https://www.w3.org/2025/eu-ai-act-raggett.pdf
>
> I cover AI agents and ecosystems of services on slide 8, anticipating the arrival of personal agents that retain personal information across many sessions, so that agents can help you with services based upon what the agent knows about you. This could be implemented using a combination of retrieval augmented generation and personal databases, e.g. as envisaged by SOLID.
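[Editor's note: one hedged sketch of how Joshua's knowledge-security question could be narrowed in a retrieval-augmented setup like the one described here: filter the document store by the requester's ACL *before* retrieval, so the generator is never grounded on out-of-scope documents. All names and data are hypothetical.]

```python
# Who may see whose documents (the manager sees all, reports see their own).
ACL = {
    "alice": {"alice", "bob", "charles", "dave"},
    "bob": {"bob"},
}

DOCS = [
    {"owner": "bob", "text": "Bob's Q1 pipeline: 40k"},
    {"owner": "charles", "text": "Charles's Q1 pipeline: 55k"},
]

def retrieve(user: str, docs: list) -> list:
    """Return only documents the requesting user is allowed to see;
    unknown users get nothing. The filtered list is what would be
    passed to the LLM as grounding context."""
    visible = ACL.get(user, set())
    return [d["text"] for d in docs if d["owner"] in visible]

print(retrieve("alice", DOCS))    # both documents
print(retrieve("bob", DOCS))      # only Bob's document
print(retrieve("mallory", DOCS))  # empty list
```

The key design choice is enforcing access at retrieval time rather than trusting the model to withhold knowledge it has already been given.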
> See: https://www.w3.org/community/solid/ and https://solidproject.org
>
> Personal agents will interact with other agents to fulfil your requests, e.g. arranging a vacation or booking a doctor's appointment. This involves ecosystems of specialist services, along with the means for personal agents to discover such services, the role of APIs for accessing them, and even the means to make payments on your behalf.
>
> There are lots of open questions, such as:
>
> - Where is the personal data held?
> - How much is shared with 3rd parties?
> - How to ensure open and fair ecosystems?
>
> My talk doesn't summarise the AI Act, as a colleague covered that. In short, the AI Act frames AI applications in terms of prohibited applications, high-risk applications and low-risk applications, setting out requirements for the latter two categories. See: https://artificialintelligenceact.eu/high-level-summary/
>
> Your thoughts on this are welcomed!
>
> Dave Raggett <dsr@w3.org>
>
> This e-mail, and any attachments thereto, is intended only for use by the addressee(s) named herein and may contain legally privileged, confidential and/or proprietary information. If you are not the intended recipient of this e-mail (or the person responsible for delivering this document to the intended recipient), please do not disseminate, distribute, print or copy this e-mail, or any attachment thereto. If you have received this e-mail in error, please respond to the individual sending the message, and permanently delete the email.
Received on Friday, 14 March 2025 07:40:19 UTC