- From: Milton Ponson <rwiciamsd@gmail.com>
- Date: Thu, 13 Mar 2025 14:09:16 -0400
- To: paoladimaio10@googlemail.com
- Cc: W3C AIKR CG <public-aikr@w3.org>, public-cogai <public-cogai@w3.org>
- Message-ID: <CA+L6P4yuc97SjBQoM4hZBQxskS_UYBsyTztoFrnL+n5ee-GuvA@mail.gmail.com>
Dear Paola,

Can you be more explicit about what you mean by "prohibited systems"? I have no problem exposing myself as a mathematician or computer scientist, and I will ask the necessary questions through the appropriate channels.

And thanks, Dave, for this excellent summary.

Regarding concerns, see the following articles:

https://www.theregister.com/2025/03/13/ai_models_hallucinate_and_doctors/
https://www.theregister.com/2025/03/12/cisa_staff_layoffs/
https://www.theregister.com/2025/03/06/schmidt_ai_superintelligence/
https://www.theregister.com/2025/03/05/dod_taps_scale_to_bring/
https://www.reuters.com/technology/artificial-intelligence/trump-revokes-biden-executive-order-addressing-ai-risks-2025-01-21/

I could mention dozens of other articles from tech industry blogs and websites. The problem in the USA right now is that detractors, opponents and critics of the current wave of AI development are ostracized, boycotted, censored, ridiculed and fired from their jobs. With all AI regulation out the door, only the EU and, ironically, also China remain to develop AI along the lines of the EU AI Act. The EU AI Act is far from perfect, but right now it is the only regulatory framework left standing.

The open questions on page 4 of Dave's slides basically show the new path to follow, and it is this fragment that actually tells us where to look:

"We need a middle ground that deals with symbolic everyday knowledge that is uncertain, imprecise, context sensitive, incomplete, inconsistent and changing."

This can be modeled mathematically.

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean

On Thu, Mar 13, 2025 at 12:06 AM Paola Di Maio <paola.dimaio@gmail.com> wrote:

> Greetings Dave,
> Thanks for sharing these slides. I am sharing them with the AI KR CG as
> they are relevant to our group.
>
> I have several concerns that I am not sure how to address; maybe you have
> suggestions?
>
> Topmost concern is:
> The EU is funding AI projects that develop/support/include the Prohibited
> systems. They do so because highly skilled proponents mask the
> terminology/concepts and fragment the system design/logic.
> Fundamentally, what many of the EU-funded systems do is not explicit, and
> what is explicit is not what the systems do.
>
> This is apparent to me because I am a systems engineer, but it may not be
> apparent to the Commission, evaluators and project officers,
> who systematically cover up logical inconsistencies.
>
> I am not sure how to flag this without putting myself more at risk than I
> am already :-)
> Advice?
>
> PDM
>
> On Tue, Mar 11, 2025 at 5:40 PM Dave Raggett <dsr@w3.org> wrote:
>
>> I recently gave a talk commenting on technical implications of the EU AI
>> Act.
>>
>> https://www.w3.org/2025/eu-ai-act-raggett.pdf
>>
>> I cover AI agents and ecosystems of services on slide 8, anticipating the
>> arrival of personal agents that retain personal information across many
>> sessions, so that agents can help you with services based upon what the
>> agent knows about you. This could be implemented using a combination of
>> retrieval-augmented generation and personal databases, e.g. as envisaged by
>> SOLID.
>>
>> See: https://www.w3.org/community/solid/ and https://solidproject.org
>>
>> Personal agents will interact with other agents to fulfil your requests,
>> e.g. arranging a vacation or booking a doctor's appointment. This involves
>> ecosystems of specialist services, along with the means for personal agents
>> to discover such services, the role of APIs for accessing them, and even
>> the means to make payments on your behalf.
>>
>> There are lots of open questions, such as:
>>
>> - Where is the personal data held?
>> - How much is shared with 3rd parties?
>> - How to ensure open and fair ecosystems?
>>
>> My talk doesn't summarise the AI Act as a colleague covered that.
>> In short, the AI Act frames AI applications in terms of prohibited
>> applications, high-risk applications and low-risk applications, setting out
>> requirements for the latter two categories. See:
>> https://artificialintelligenceact.eu/high-level-summary/
>>
>> Your thoughts on this are welcomed!
>>
>> Dave Raggett <dsr@w3.org>
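[Editor's note: Milton's closing claim — that symbolic everyday knowledge which is uncertain, context sensitive, incomplete, inconsistent and changing "can be modeled mathematically" — can be illustrated with a minimal sketch. Everything below (the `KnowledgeBase` class, its `tell`/`ask`/`conflicts` methods, and the example statements) is hypothetical illustration, not anything proposed in the thread.]

```python
# A minimal sketch of one way to model such knowledge: statements carry
# confidence weights (uncertainty/imprecision), are indexed by context
# (context sensitivity), may be absent (incompleteness, open world),
# may conflict across contexts (inconsistency), and can be re-asserted
# with new confidence (change). All names here are hypothetical.

class KnowledgeBase:
    def __init__(self):
        # Maps (statement, context) -> confidence in [0, 1].
        self.facts = {}

    def tell(self, statement, confidence, context="default"):
        # Belief revision: re-asserting a statement replaces its confidence.
        self.facts[(statement, context)] = confidence

    def ask(self, statement, context="default"):
        # Context-specific knowledge overrides the default context;
        # None means "unknown", not "false" (incompleteness).
        key = (statement, context)
        if key in self.facts:
            return self.facts[key]
        return self.facts.get((statement, "default"))

    def conflicts(self, threshold=0.5):
        # Inconsistency check: statements believed (>= threshold) in one
        # context but disbelieved (< threshold) in another.
        found = set()
        for (stmt, ctx), conf in self.facts.items():
            for (stmt2, ctx2), conf2 in self.facts.items():
                if stmt == stmt2 and ctx != ctx2 and conf >= threshold > conf2:
                    found.add(stmt)
        return found


kb = KnowledgeBase()
kb.tell("birds fly", 0.9)                      # default-context belief
kb.tell("birds fly", 0.05, context="penguin")  # defeasible exception
kb.tell("birds fly", 0.95)                     # revised belief (change)
```

With this toy knowledge base, `kb.ask("birds fly", context="penguin")` returns 0.05, asking in an unseen context falls back to the default belief of 0.95, and asking about a statement never told returns None rather than false. Fuller treatments of the same idea exist in probabilistic logic, defeasible reasoning and belief revision.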
Received on Thursday, 13 March 2025 18:09:34 UTC