Re: The Automated Planning and Scheduling Community Group

As a mathematician I don't make leaps of faith; I may act upon hunches, but rigour is what defines mathematics. The problem is that the European Union introduced the concept of explainable AI and to this date has not clearly defined its meaning. In any case the audience will not be the general public, and here again the European Union falls short in explaining for whom this explainability is intended.

As far as the robustness and trustworthiness of AI are concerned, matters appear to be much clearer.
But the proposed regulatory framework, and the European Union's Artificial Intelligence Act in particular, is far from perfect.
As for ethical AI and a clear-cut definition thereof, much work remains to be done before it takes shape in regulatory and other frameworks.
As an individual who factors global sustainable development and human rights principles into all personal and professional research and innovation activities, I must take a helicopter view of the impacts of technologies.
The problem with future and emerging technologies is that, most of the time, it is industry and politicians who make the leaps of faith.
To recapitulate: openness and inclusiveness in AI are terms borrowed from the UN and refer to the general principle of equal access to the benefits, industry-specific applications, development and general use of the technologies involved in AI.
Explainability is still not well defined, and the same holds for the ethical use of AI.
Robustness and trustworthiness are much better defined.
Some useful links:
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

https://digital-strategy.ec.europa.eu/en/policies

Robustness and explainability of Artificial Intelligence: https://publications.jrc.ec.europa.eu/repository/handle/JRC119336
https://techcrunch.com/2021/11/30/eu-ai-act-civil-society-recommendations/

https://www.edf-feph.org/artificial-intelligence-act-needs-to-protect-human-rights/

My proposed layered system, which I intend to flesh out in a white paper, will address definitions for open, inclusive, accountable, explainable, robust, trustworthy and ethical AI.
Rigour is indeed what we need, but clarity must also be achieved across all the AI policies the European Union formulates for AI, because it is currently lacking in many areas.


Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

    On Sunday, November 20, 2022 at 05:20:30 PM AST, Dave Raggett <dsr@w3.org> wrote:  
 
 

On 20 Nov 2022, at 17:43, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
It seems that synchronization between CogAI, AIKR and this new CG to be created is in order. After having run through dozens of threads on the AIKR list, suggestions by Paola di Maio and Owen Ambur, constructive comments from Dave Raggett, Mike Bergman et al., and recent articles, it is becoming crystal clear that open, explainable, inclusive and ethical AI can only be achieved by combining elements of planning in machine-readable format with categorization or classification of areas of activity, using library coding systems, commercial classification codes for economic activities, and classification systems in specific fields (such as those that exist in mathematics, computer science and some physical sciences), to narrow down the area of activity in which AI will be used.

Can you justify that or is it just a leap of faith?  I would appreciate intellectual rigour to ensure arguments are well grounded.
For instance, you need to explain what you mean by the terms you’ve used, e.g. what does it take for AI to be open?  Perhaps you have some ideas for the kind of terms and conditions required for others to reuse an AI model?  
What criteria make something “explainable”?   Good explanations depend on the audience, and a technical explanation is unsatisfactory for non-technical people.
Similar questions apply to “inclusive” and to “ethical” with respect to AI.
It is also a big jump to requiring planning and library coding systems.
You may want to take a look at the EU regulatory framework for AI, see: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai which is intended to guarantee the safety and fundamental rights of people and businesses when it comes to AI, and to strengthen uptake, investment and innovation in AI across the EU. As AI is a fast-evolving technology, the proposal takes a future-proof approach, allowing rules to adapt to technological change.
More rigour, fewer leaps of faith ...
Dave Raggett <dsr@w3.org>


  

Received on Monday, 21 November 2022 00:45:23 UTC