Re: Slides on technical implications for EU AI Act

With the assistance of ChatGPT and Claude.ai, a summary of the Act is now available in StratML format at https://stratml.us/drybridge/index.htm#EUAIA, along with a transformation of its provisions into a model performance plan for regulated entities at https://stratml.us/docs/EUAIACIP.xml
I have been asserting that we have far too many laws, regulations, policies and other forms of "guidance" in unstructured, non-standardized formats, and too few model performance plans and reports in open, standard, machine-readable format.

I asked ChatGPT to address the pros and cons of that assertion, about which it concluded:

Your assertion is highly valid in terms of improving efficiency, transparency, and accountability. The lack of structured, standardized formats for laws, policies, and performance reports hampers analysis, automation, and compliance. However, the biggest challenges are institutional inertia, the complexity of legal interpretation, and the resources needed to implement change.
The key question is not whether we should move toward structured, machine-readable formats—but how we can get governments and institutions to prioritize and implement them effectively. That remains the primary bottleneck in achieving the vision of a worldwide web of intentions, stakeholders, and results.

https://chatgpt.com/share/67d44cd3-6788-800b-9309-be5b896dda4b
Wouldn't it be nice if the W3C and its groups proved capable of enlightened leadership in that regard?
Owen Ambur
https://www.linkedin.com/in/owenambur/
 

    On Friday, March 14, 2025 at 03:48:37 AM EDT, Paola Di Maio <paoladimaio10@gmail.com> wrote:   

Milton, thanks for replying.
From this summary: https://artificialintelligenceact.eu/high-level-summary/
Then compare with the AI projects the EU is funding.

There is a lot going on behind the scenes, especially because the EC is now using AI to evaluate and assess the funding of AI technology in all projects, but evaluators do not have the technical competence to understand when the Act is being breached. People who have flagged issues are simply not hired and are cast out from the ecosystem.
There is systemic deviation taking place.

Please share which channels are available to researchers, evaluators and users of EC-funded projects, and how the public can raise concerns when the Act is being breached, especially if it is breached within EC-funded activities.

I'll be happy to publish an advisory to inform the public of what recourse they have when they have evidence of the Act being breached.
PDM

On Fri, Mar 14, 2025 at 2:09 AM Milton Ponson <rwiciamsd@gmail.com> wrote:

Dear Paola,
Can you be more explicit about what you mean by prohibited systems?
I have no problems in exposing myself as a mathematician or computer scientist and will ask the necessary questions through appropriate channels.
And thanks Dave for this excellent summary.
And regarding concerns, see the following articles:
https://www.theregister.com/2025/03/13/ai_models_hallucinate_and_doctors/
https://www.theregister.com/2025/03/12/cisa_staff_layoffs/
https://www.theregister.com/2025/03/06/schmidt_ai_superintelligence/
https://www.theregister.com/2025/03/05/dod_taps_scale_to_bring/
https://www.reuters.com/technology/artificial-intelligence/trump-revokes-biden-executive-order-addressing-ai-risks-2025-01-21/
I can mention dozens of other articles from tech industry blogs and websites.
The problem in the USA right now is that detractors, opponents and critics of the current wave of AI development are ostracized, boycotted, censored, ridiculed and fired from their jobs. With all AI regulation out the door, only the EU and, ironically, China remain to develop AI along the lines of the EU AI Act.
The EU AI Act is far from perfect, but right now it is the only regulatory framework left standing.
The open questions on page 4 of Dave's slides basically show the new path to follow.
And it is this fragment that actually tells us where to look:
"We need a middle ground that deals with symbolic everyday knowledge that is uncertain, imprecise, context sensitive, incomplete, inconsistent and changing."

This can be modeled mathematically.
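As one possible illustration of what such a mathematical model might look like (this sketch is my own, not from Dave's slides; the predicates and degrees of truth below are invented for illustration), fuzzy logic replaces the two-valued {true, false} with graded truth values in [0, 1], giving a minimal handle on imprecise, uncertain everyday statements:

```python
# Minimal fuzzy-logic sketch: truth values in [0, 1] instead of {True, False}.
# Standard Zadeh operators: AND -> min, OR -> max, NOT -> 1 - x.

def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1.0 - a

# Invented degrees of truth for imprecise everyday statements.
sky_is_cloudy = 0.7   # "fairly cloudy"
forecast_rain = 0.4   # "some chance of rain"

# "It is cloudy AND rain is forecast" -> min(0.7, 0.4)
take_umbrella = f_and(sky_is_cloudy, forecast_rain)
print(take_umbrella)  # 0.4
```

Richer formalisms (probabilistic logic, Dempster-Shafer belief functions, non-monotonic logics) address the incomplete, inconsistent and changing aspects, but the graded-truth idea above is the simplest entry point.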
Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+297 745 9312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean


On Thu, Mar 13, 2025 at 12:06 AM Paola Di Maio <paola.dimaio@gmail.com> wrote:

Greetings Dave,
Thanks for sharing these slides. I am sharing them with the AI KR CG as they are relevant to our group.
I have several concerns that I am not sure how to address, maybe you have suggestions?
My topmost concern is: the EU is funding AI projects that develop/support/include the prohibited systems. They do so because highly skilled proponents mask the terminology/concepts and fragment the system design/logic. Fundamentally, what many of the EU-funded systems do is not explicit, and what is explicit is not what the systems do.
This is apparent to me because I am a systems engineer, but it may not be apparent to the Commission, evaluators, and project officers, who systematically cover up logical inconsistencies.

I am not sure how to flag this without putting myself more at risk than I am already :-)
Advice?
PDM
On Tue, Mar 11, 2025 at 5:40 PM Dave Raggett <dsr@w3.org> wrote:

I recently gave a talk commenting on technical implications for the EU AI Act.
 https://www.w3.org/2025/eu-ai-act-raggett.pdf
I cover AI agents and ecosystems of services on slide 8, anticipating the arrival of personal agents that retain personal information across many sessions, so that agents can help you with services based upon what the agent knows about you.  This could be implemented using a combination of retrieval augmented generation and personal databases, e.g. as envisaged by SOLID.
See: https://www.w3.org/community/solid/ and https://solidproject.org
Personal agents will interact with other agents to fulfil your requests, e.g. arranging a vacation or booking a doctor’s appointment.  This involves ecosystems of specialist services, along with the means for personal agents to discover such services, the role of APIs for accessing them, and even the means to make payments on your behalf.
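The pattern described above can be sketched in a few lines (a hedged illustration of my own: the class names, fields and the `book` method below are invented stand-ins, not part of Solid or any existing API): the agent retrieves only the personal facts relevant to a request from a personal store, then passes that context to a discovered specialist service.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalStore:
    """Stand-in for a personal database, e.g. a Solid pod (hypothetical API)."""
    facts: dict = field(default_factory=dict)

    def retrieve(self, keys):
        # Retrieval step: fetch only the facts relevant to this request,
        # rather than exposing the whole profile to third parties.
        return {k: self.facts[k] for k in keys if k in self.facts}

@dataclass
class PersonalAgent:
    store: PersonalStore

    def book_appointment(self, service):
        # Augment the outgoing request with retrieved personal context,
        # then delegate to a specialist service agent.
        context = self.store.retrieve(["name", "preferred_language"])
        return service.book(context)

class DoctorBookingService:
    """Invented stand-in for a specialist service found via discovery."""
    def book(self, context):
        return f"Booked appointment for {context.get('name', 'unknown')}"

store = PersonalStore(facts={"name": "Alice", "preferred_language": "en"})
agent = PersonalAgent(store=store)
print(agent.book_appointment(DoctorBookingService()))
# Booked appointment for Alice
```

In a real deployment the `retrieve` step would be backed by retrieval augmented generation over the personal store, and the service would be reached through a discovered API with consent and payment mechanisms in between.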
There are lots of open questions such as:
   
   - Where is the personal data held?
   - How much is shared with 3rd parties?
   - How to ensure open and fair ecosystems?

My talk doesn’t summarise the AI Act as a colleague covered that. In short, the AI Act frames AI applications in terms of prohibited applications, high risk applications and low risk applications, setting out requirements for the latter two categories. See: https://artificialintelligenceact.eu/high-level-summary/
Your thoughts on this are welcomed!

Dave Raggett <dsr@w3.org>






Received on Friday, 14 March 2025 15:38:46 UTC