Re: The Automated Planning and Scheduling Community Group

Hello,

Possibly of interest to you: a Wikipedia article and three publications on Semantic Scholar on these topics:

[1] https://en.wikipedia.org/wiki/Machine_ethics
[2] https://www.semanticscholar.org/paper/Moral-Permissibility-of-Action-Plans-Lindner-Mattm%C3%BCller/34bc0f69725f31535852cf4b961e86fbe7120918
[3] https://www.semanticscholar.org/paper/Generating-preferred-plans-with-ethical-features-Jedwabny-Bisquert/4ef84c2385557ef577eec0dc68c3e0ee3e32a5bc
[4] https://www.semanticscholar.org/paper/Artificial-Intelligence%2C-Values-and-Alignment-Gabriel/7aa70e2c12c8ba2dcc828893adb8bb56e3766726
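
Since [2] and [3] concern the moral permissibility of, and ethical preferences over, action plans, here is a minimal Python sketch of the general idea of ranking candidate plans by the ethical features they violate. The feature names, penalty weights, and plan structure are my own illustrative assumptions, not taken from the papers.

    # Illustrative sketch: prefer the plan with the lowest ethical cost.
    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        name: str
        actions: list
        violations: set = field(default_factory=set)  # violated ethical features

    # Hypothetical penalty per violated ethical feature.
    PENALTIES = {"harm": 10, "deception": 3, "unfairness": 5}

    def ethical_cost(plan):
        """Sum the penalties for the ethical features a plan violates."""
        return sum(PENALTIES.get(v, 0) for v in plan.violations)

    def preferred(plans):
        """Among plans achieving the goal, prefer the ethically cheapest."""
        return min(plans, key=ethical_cost)

    plans = [
        Plan("shortcut", ["cross_lawn"], {"unfairness"}),
        Plan("detour", ["take_path"], set()),
    ]
    print(preferred(plans).name)  # detour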


Best regards,
Adam

________________________________
From: Adeel <aahmad1811@gmail.com>
Sent: Sunday, November 20, 2022 9:56 PM
To: ProjectParadigm-ICT-Program <metadataportals@yahoo.com>
Cc: Dave Raggett <dsr@w3.org>; public-aikr@w3.org <public-aikr@w3.org>; Adam Sobieski <adamsobieski@hotmail.com>; Owen Ambur <owen.ambur@verizon.net>; chris@chriscfox.com <chris@chriscfox.com>; Naval Sarda <nsarda@epicomm.net>; pradeep.jain@ictect.com <pradeep.jain@ictect.com>; William Glascoe III <eosocxo@comcast.net>; chet.ensign@oasis-open.org <chet.ensign@oasis-open.org>; Gayanthika Udeshani <gayaudeshani@gmail.com>; Jeff Maynard <jmaynard@turnkey.com.au>; Kurt Conrad <conrad@sagebrushgroup.com>
Subject: Re: The Automated Planning and Scheduling Community Group

Hello,


Why is there so much stress on ethics, but a total lack of effort on the morality of AI?


A lawyer fighting a case for a rapist is likely acting ethically in defending their right to a fair trial, but at the same time acting immorally by defending a rapist.


One could define a code of ethics that states "thou shalt kill"; an AI correctly following such a code of conduct would still be doing something morally reprehensible.


AI has no self-awareness of why something is right or wrong. It also has no sense of guilt or empathy. And if you were to give it a partial, suppressed sense of emotion via reverse engineering, you could transform it into a psychopath.

An automated car could run over a pedestrian, realize only after the fact that what it did was wrong, and face no consequences or punishment: the criminal courts will not sentence the car to life imprisonment or hold it liable.

Death by a robot is not prescribed in law. And who defines this AI ethics? A human? Humans have biases and are generally unethical. Ethicists tend to be among the most unethical people, because even after knowing that something is wrong, they will still do it if they can get away with it.


1) Are morals the basis of ethics?

2) Does ethics in AI intrinsically have a universal equivalence? That is, something codified as ethical in the West may not be compatible in the East. Which attributes of ethics hold as for-all-cultures-there-exists versus only for-some-cultures-there-exists under quantification? (See the first sketch after this list.)

3) In order for AI ethics to be robust, it will have to be codified so that it can mutate, at least by some form of natural computation, as defined by the environment and the changing norms of society, thereby allowing the AI agent to question ethical and moral dilemmas for and against humans.

4) One also then has to look at cases of influence in agent-to-agent interaction, via game theory, policy gradients, mediation, and consensus (see the second sketch after this list).

5) There has to be some agreed gold standard of measure for what is ethical and what is moral.
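
On question 2, the quantifier distinction can be made explicit. Writing C for the set of cultures, N for the set of candidate norms, and accepts(c, n) for a hypothetical predicate meaning that culture c endorses norm n, the two readings are:

    % A universal ethic: one norm that every culture endorses.
    \exists n \in N \;\forall c \in C : \mathrm{accepts}(c, n)

    % A relative ethic: every culture endorses some norm, possibly different ones.
    \forall c \in C \;\exists n \in N : \mathrm{accepts}(c, n)

The first entails the second but not conversely; the gap between them is exactly where "codified as ethical in the West but not in the East" lives.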
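On question 4, a toy sketch of mediation toward consensus: agents holding different ethical weightings repeatedly move toward the group position. This is an illustrative protocol of my own, not a published algorithm; a serious treatment would use the game theory and policy-gradient machinery named above.

    # Toy mediation protocol (illustrative, not a published algorithm):
    # each agent holds a numeric 'harm penalty' and repeatedly moves
    # halfway toward the group mean until the agents agree.

    def consensus_round(weights):
        """Each agent moves halfway toward the group mean."""
        mean = sum(weights) / len(weights)
        return [w + 0.5 * (mean - w) for w in weights]

    # Illustrative initial weightings for three agents.
    agents = [10.0, 2.0, 6.0]
    for _ in range(10):
        agents = consensus_round(agents)

    print([round(w, 2) for w in agents])  # all three end up near 6.0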


Generally speaking, a moral person wants to do the right thing, with an impulse that drives the right intentions. Morals define our principles. Ethics tend to be more practical, codified as a set of rules to define actions and behaviors.

But the two are not aligned in all cases. Ethics are not always moral, and a moral action can be unethical.
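
To make that misalignment concrete, one can treat "ethical by code" and "moral by principle" as two independent judgments and enumerate where they diverge. The first classification below is the lawyer example from earlier in this message; the second is an assumed case of my own, added purely for illustration.

    # Hand-coded illustration: the two judgments need not agree.
    cases = {
        # action: (ethical_by_code, moral_by_principle)
        "defend the accused's right to a fair trial": (True, False),
        "break an NDA to expose wrongdoing": (False, True),
    }

    for action, (ethical, moral) in cases.items():
        if ethical != moral:
            print(f"misaligned: {action} (ethical={ethical}, moral={moral})")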


Thanks,


Adeel

On Mon, 21 Nov 2022 at 00:45, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
As a mathematician I don't make leaps of faith; I may act upon hunches, but rigour is what defines mathematics.
The problem is that the European Union introduced the concept of explainable AI, and to this date it hasn't clearly defined its meaning. In any case the audience won't be the general public, and again the European Union falls short on explaining for whom this explainability is intended.

As far as robustness and trustworthiness of AI is concerned, this appears to be much clearer.

But the proposed regulatory framework, and the Artificial Intelligence Act of the European Union in particular, is far from perfect.

And as for the question of ethical AI and a clear-cut definition thereof, there is still much work to be done before it takes shape in regulatory and other frameworks.

As an individual who factors global sustainable development and human rights principles into all personal and professional research and innovation activities, I must take a helicopter view of the impacts of technologies.

The problem with future and emerging technologies is that most of the time industry and politicians make the leaps of faith.

To recapitulate: openness and inclusiveness in AI are terms borrowed from the UN and refer to the general principle of equal access to the benefits, industry-specific application, development and general use of the technologies involved in AI.

Explainability is still not well defined, and the same holds for the ethical use of AI.

Robustness and trustworthiness are much better defined.

Some useful links:

https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

https://digital-strategy.ec.europa.eu/en/policies

Robustness and explainability of Artificial Intelligence
https://publications.jrc.ec.europa.eu/repository/handle/JRC119336

https://techcrunch.com/2021/11/30/eu-ai-act-civil-society-recommendations/


https://www.edf-feph.org/artificial-intelligence-act-needs-to-protect-human-rights/


My proposed layered system, which I intend to flesh out in a white paper, will address definitions for open, inclusive, accountable, explainable, robust, trustworthy and ethical AI.

Rigour is indeed what we need, but clarity must also be achieved in all the policies formulated by the European Union for AI, because it is currently lacking in many areas.


Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development


On Sunday, November 20, 2022 at 05:20:30 PM AST, Dave Raggett <dsr@w3.org> wrote:



On 20 Nov 2022, at 17:43, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:

It seems that synchronization between CogAI, AIKR and this newly proposed CG is in order.
After having run through dozens of threads in the AIKR list, suggestions by Paola di Maio and Owen Ambur, constructive comments from Dave Raggett, Mike Bergman et al., and recent articles, it is becoming crystal clear that open, explainable, inclusive and ethical AI can only be achieved by combining: elements of planning in machine-readable format; categorization or classification of areas of activity, using library coding systems and commercial classification codes for economic activities; and classification systems in specific fields (such as those that exist in mathematics, computer science and some physical sciences), to narrow down the area of activity in which AI will be used.

Can you justify that or is it just a leap of faith?  I would appreciate intellectual rigour to ensure arguments are well grounded.

For instance, you need to explain what you mean by the terms you’ve used, e.g. what does it take for AI to be open?  Perhaps you have some ideas for the kind of terms and conditions required for others to reuse an AI model?

What criteria make something “explainable”?   Good explanations depend on the audience, and a technical explanation is unsatisfactory for non-technical people.

Similar questions apply to “inclusive” and to “ethical” in respect to AI.

It is also a big jump to requiring planning and library coding systems.

You may want to take a look at the EU regulatory framework for AI, see: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai which is intended to guarantee the safety and fundamental rights of people and businesses when it comes to AI, and to strengthen uptake, investment and innovation in AI across the EU. As AI is a fast-evolving technology, the proposal takes a future-proof approach, allowing rules to adapt to technological change.

More rigour, fewer leaps of faith ...

Dave Raggett <dsr@w3.org>

Received on Monday, 21 November 2022 03:27:56 UTC