- From: Owen Ambur <owen.ambur@verizon.net>
- Date: Fri, 14 Mar 2025 17:21:58 +0000 (UTC)
- To: Dave Raggett <dsr@w3.org>
- Cc: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <2098080531.1630108.1741972918459@mail.yahoo.com>
Hey, Dave, while your points are well-taken, I agree with ChatGPT's conclusion about them: "The key takeaway might be that AI should not be used as an excuse to keep policies and laws messy and unstructured, but rather as a tool to help both standardize and analyze such content. Structured data should be the goal where possible, but AI should also be leveraged to extract meaning where structure is lacking."

It prompted me to engage ChatGPT further about the issues of artificial ignorance and why we human beings may prefer not to have good records and record-keeping systems.

Owen Ambur
https://www.linkedin.com/in/owenambur/

On Friday, March 14, 2025 at 12:04:13 PM EDT, Dave Raggett <dsr@w3.org> wrote:

Hi Owen,

At the moment W3C specs are carefully written to deal correctly with the agreed use cases. There are a few conventions in place, e.g. drawing upon agreed usage for terms like SHOULD and MUST, and the distinction between normative and informative content. In future, as AI evolves further, I can imagine AI agents as collaborators in drafting the wording, including the means to test it against the use cases.

However, AI demonstrates that computers are getting much, much better at drafting and reasoning with unstructured information. We can expect AI to mirror and perhaps even surpass the skills of lawyers at making arguments based upon standards, legal contracts and legislation. Structured information following predefined models will remain useful for efficient information exchange where precision and speed of processing are more important than flexibility.

I find it interesting that even mathematics is a very human language, in that the meaning of symbols assumes a common contextualised understanding that is partially expressed in the prose around the mathematical expressions. This was exposed in W3C's efforts to standardise MathML, where one school of thought focused on syntax whilst the other focused on semantics.

Best regards,
Dave

On 14 Mar 2025, at 15:38, Owen Ambur <owen.ambur@verizon.net> wrote:

With the assistance of ChatGPT and Claude.ai, the summary of the act is now available in StratML format at https://stratml.us/drybridge/index.htm#EUAIA along with a transformation of its provisions into a model performance plan for regulated entities, at https://stratml.us/docs/EUAIACIP.xml (a toy sketch of the idea follows at the end of this message).

I have been asserting that we have far too many laws, regs, policies and other forms of "guidance" in unstructured, non-standardized format and too few model performance plans and reports in open, standard, machine-readable format. I asked ChatGPT to address the pros and cons of that assertion, about which it concluded:

"Your assertion is highly valid in terms of improving efficiency, transparency, and accountability. The lack of structured, standardized formats for laws, policies, and performance reports hampers analysis, automation, and compliance. However, the biggest challenges are institutional inertia, the complexity of legal interpretation, and the resources needed to implement change. The key question is not whether we should move toward structured, machine-readable formats, but how we can get governments and institutions to prioritize and implement them effectively. That remains the primary bottleneck in achieving the vision of a worldwide web of intentions, stakeholders, and results."

https://chatgpt.com/share/67d44cd3-6788-800b-9309-be5b896dda4b

Wouldn't it be nice if the W3C and its groups proved capable of enlightened leadership in that regard?

Owen Ambur
https://www.linkedin.com/in/owenambur/
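To make the idea of a machine-readable performance plan concrete, here is a minimal sketch in Python's standard library. The element names (PerformancePlan, Goal, Objective) are simplified stand-ins loosely inspired by StratML, not the actual namespaced schema, and the plan content is hypothetical.

```python
# A minimal sketch of generating a machine-readable plan. Element names
# are illustrative stand-ins, not the real StratML schema (see stratml.us).
import xml.etree.ElementTree as ET

def build_plan(name: str, description: str, goals: list) -> ET.Element:
    """Build a tiny StratML-like performance plan as an XML tree."""
    plan = ET.Element("PerformancePlan")
    ET.SubElement(plan, "Name").text = name
    ET.SubElement(plan, "Description").text = description
    for goal_name, objectives in goals:
        goal = ET.SubElement(plan, "Goal")
        ET.SubElement(goal, "Name").text = goal_name
        for objective_name in objectives:
            objective = ET.SubElement(goal, "Objective")
            ET.SubElement(objective, "Name").text = objective_name
    return plan

# Hypothetical content, standing in for provisions of the EU AI Act.
plan = build_plan(
    "EU AI Act Compliance Plan",
    "Model performance plan for a regulated entity.",
    [("Risk Management",
      ["Classify systems by risk tier",
       "Document obligations for high-risk systems"])],
)
print(ET.tostring(plan, encoding="unicode"))
```

Once plans are expressed this way, they can be validated, queried and compared automatically, which is precisely what unstructured prose resists.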
On Friday, March 14, 2025 at 03:48:37 AM EDT, Paola Di Maio <paoladimaio10@gmail.com> wrote:

Milton, thanks for your reply.

Start from this summary: https://artificialintelligenceact.eu/high-level-summary/ and then compare it with the AI projects the EU is funding. There is a lot going on behind the scenes, especially because the EC is now using AI to evaluate and assess the funding of AI technology in all projects, but evaluators do not have the technical competence to understand when the act is being breached. People who have flagged issues are simply not hired and are cast out from the ecosystem. There is systemic deviation taking place.

Please share which channels are available to researchers, evaluators and users of EC-funded projects, and how the public can raise concerns when the Act is being breached, especially if it is breached within EC-funded activities. I'll be happy to publish an advisory to inform the public what recourse they have when they have evidence of the act being breached.

PDM

On Fri, Mar 14, 2025 at 2:09 AM Milton Ponson <rwiciamsd@gmail.com> wrote:

Dear Paola,

Can you be more explicit about what you mean by prohibited systems? I have no problem exposing myself as a mathematician or computer scientist and will ask the necessary questions through appropriate channels. And thanks, Dave, for this excellent summary.

And regarding concerns, see the following articles:

https://www.theregister.com/2025/03/13/ai_models_hallucinate_and_doctors/
https://www.theregister.com/2025/03/12/cisa_staff_layoffs/
https://www.theregister.com/2025/03/06/schmidt_ai_superintelligence/
https://www.theregister.com/2025/03/05/dod_taps_scale_to_bring/
https://www.reuters.com/technology/artificial-intelligence/trump-revokes-biden-executive-order-addressing-ai-risks-2025-01-21/

I can mention dozens of other articles from tech industry blogs and websites. The problem in the USA right now is that detractors, opponents and critics of the current wave of AI development are ostracized, boycotted, censored, ridiculed and fired from their jobs, and with all AI regulation out the door, only the EU and, ironically, also China remain to develop AI along the lines of the EU AI Act. The EU AI Act is far from perfect, but right now it is the only regulatory framework left standing.

The open questions on page 4 of Dave's slides basically show the new path to follow. And it is this fragment that actually tells us where to look: "We need a middle ground that deals with symbolic everyday knowledge that is uncertain, imprecise, context sensitive, incomplete, inconsistent and changing." This can be modeled mathematically (a minimal sketch follows this message).

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
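As a minimal illustration of Milton's point, the sketch below represents knowledge as triples carrying a degree of belief and a context tag, so that assertions can be uncertain, context sensitive, and inconsistent across contexts. The representation is an assumption made here for illustration, not the formalism intended in the slides.

```python
# A minimal sketch of representing symbolic everyday knowledge that is
# uncertain and context sensitive. Illustrative only.
from dataclasses import dataclass

@dataclass
class Assertion:
    subject: str
    predicate: str
    value: str
    confidence: float  # degree of belief in [0, 1]
    context: str       # the situation in which the assertion holds

kb = [
    Assertion("clinic", "is_open", "yes", 0.9, "weekdays"),
    # Inconsistent with the above, but only across contexts:
    Assertion("clinic", "is_open", "no", 0.8, "public_holidays"),
]

def query(kb, subject, predicate, context):
    """Return the best-supported assertion for the given context, if any."""
    matches = [a for a in kb
               if (a.subject, a.predicate, a.context) == (subject, predicate, context)]
    return max(matches, key=lambda a: a.confidence, default=None)

print(query(kb, "clinic", "is_open", "weekdays"))
print(query(kb, "clinic", "is_open", "midnight"))  # incomplete: returns None
```

A query in an unknown context returns nothing, reflecting incompleteness; revising confidence scores as evidence arrives would model knowledge that changes.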
On Thu, Mar 13, 2025 at 12:06 AM Paola Di Maio <paola.dimaio@gmail.com> wrote:

Greetings Dave,

Thanks for sharing these slides; I am sharing them with the AI KR CG as they are relevant to our group. I have several concerns that I am not sure how to address; maybe you have suggestions?

Topmost concern is: the EU is funding AI projects that develop/support/include the prohibited systems. They do so because highly skilled proponents mask the terminology/concepts and fragment the system design/logic. Fundamentally, what many of the EU-funded systems do is not explicit, and what is explicit is not what the systems do. This is apparent to me because I am a systems engineer, but it may not be apparent to the Commission, evaluators and project officers, who systematically cover up logical inconsistencies.

I am not sure how to flag this without putting myself more at risk than I am already :-) Advice?

PDM

On Tue, Mar 11, 2025 at 5:40 PM Dave Raggett <dsr@w3.org> wrote:

I recently gave a talk commenting on technical implications for the EU AI Act: https://www.w3.org/2025/eu-ai-act-raggett.pdf

I cover AI agents and ecosystems of services on slide 8, anticipating the arrival of personal agents that retain personal information across many sessions, so that agents can help you with services based upon what the agent knows about you. This could be implemented using a combination of retrieval augmented generation and personal databases, e.g. as envisaged by SOLID (see the sketch following this message). See: https://www.w3.org/community/solid/ and https://solidproject.org

Personal agents will interact with other agents to fulfil your requests, e.g. arranging a vacation or booking a doctor's appointment. This involves ecosystems of specialist services, along with the means for personal agents to discover such services, the role of APIs for accessing them, and even the means to make payments on your behalf. There are lots of open questions, such as:

- Where is the personal data held?
- How much is shared with 3rd parties?
- How to ensure open and fair ecosystems?

My talk doesn't summarise the AI Act, as a colleague covered that. In short, the AI Act frames AI applications in terms of prohibited applications, high-risk applications and low-risk applications, setting out requirements for the latter two categories. See: https://artificialintelligenceact.eu/high-level-summary/

Your thoughts on this are welcomed!

Dave Raggett <dsr@w3.org>
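Here is a minimal sketch of the retrieval-augmented pattern Dave describes: a personal agent retrieves facts from a user-controlled store before acting, and shares only what the request needs. The store, retrieval function and agent are toy stand-ins; a real system would combine a language model with a SOLID pod and third-party service APIs.

```python
# A toy personal agent: retrieval-augmented generation over a
# user-controlled personal store. All names here are illustrative.
personal_store = {           # user-controlled data, e.g. in a SOLID pod
    "doctor": "Dr. Patel, City Clinic",
    "vacation": "coastal towns, off-season",
}

def retrieve(store: dict, request: str) -> dict:
    """Naive retrieval: share only entries whose key appears in the request."""
    return {k: v for k, v in store.items() if k in request.lower()}

def personal_agent(request: str) -> str:
    """Augment the request with retrieved personal context, then act."""
    context = retrieve(personal_store, request)
    # A real agent would pass `context` and the request to a model, then
    # call specialist services on the user's behalf (booking, payment).
    return f"request={request!r}, personal data shared={context}"

print(personal_agent("Book a doctor's appointment for next week"))
```

The design point worth noting is that retrieval happens on the user's side, so one possible answer to "how much is shared with 3rd parties?" is: only the entries a given request actually needs.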
Received on Friday, 14 March 2025 17:22:08 UTC