Re: Moral Reasoning Systems

Paola Di Maio,

Thank you. I was reading about moral reasoning (https://en.wikipedia.org/wiki/Moral_reasoning) while titling the article. I could rename it, at some point, to “Ethical Reasoning Systems”, as per:
https://www.w3.org/community/argumentation/wiki/Main_Page#Ethical_Reasoning_Systems_and_Legal_Expert_Systems . As an aside, “machine ethics” (https://en.wikipedia.org/wiki/Machine_ethics) is also known as “machine morality”, “computational ethics”, and “computational morality”.

I'm hoping to indicate these research topics in contexts beyond robotics. The software components could be of use for virtual humans or digital actors, and could also serve philosophy, law, the humanities, literature, the social sciences, decision support, public policy, and education.

We can consider that ethical reasoning software could load “configuration and data” before providing outputs for inputs or questions. We can envision systems which simulate each of Kohlberg’s stages of moral development, or stages from other models of moral development. We can envision systems which simulate moral reasoning per multiple belief systems, schools of thought, or philosophies. The topic of how outputs vary with variations in configurations or loaded data (axiomatic systems, principles, beliefs, values, models of characters, self-models or role models, or generic models of cultural stereotypes) is very interesting to me. I envision systems which can compare reasoning across various configurations or data sets and provide explanation, argumentation, and reasoning as parts of system output.
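As a thought experiment only (all names below, such as Configuration, Judgment, and EthicalReasoner, are hypothetical and not an existing API), such a configurable system might look roughly like this in Python:

    # Minimal, hypothetical sketch of a configurable ethical reasoner.
    from dataclasses import dataclass, field

    @dataclass
    class Configuration:
        """Axioms, principles, values, or a model of a character or culture."""
        name: str
        principles: list[str] = field(default_factory=list)

    @dataclass
    class Judgment:
        answer: str
        explanation: list[str]  # explanation, argumentation, and reasoning

    class EthicalReasoner:
        def __init__(self, configuration: Configuration):
            self.configuration = configuration

        def judge(self, question: str) -> Judgment:
            # Placeholder: a real system would reason over the loaded
            # principles; here we merely report which principles were consulted.
            return Judgment(
                answer=f"Judgment of '{question}' under '{self.configuration.name}'",
                explanation=[f"Considered: {p}" for p in self.configuration.principles],
            )

    def compare(question: str, configurations: list[Configuration]) -> dict[str, Judgment]:
        """Compare reasoning about one question across several configurations."""
        return {c.name: EthicalReasoner(c).judge(question) for c in configurations}

Comparing, say, a Kohlberg-stage configuration with a virtue-ethics configuration would then be a matter of calling compare() with two Configuration objects and contrasting the returned explanations.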

Many semantic web researchers are interested in reasoners, automated reasoning, and data interchange between reasoners, as well as in ontology, semantics, provenance, rationale, justification, and argumentation.
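For instance (a rough sketch only, using rdflib and made-up example.org terms rather than any standard ethics vocabulary), a moral judgment could be published together with its justification and provenance as RDF:

    # Sketch: a judgment with rationale and provenance, serialized as Turtle.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/ethics#")    # hypothetical vocabulary
    PROV = Namespace("http://www.w3.org/ns/prov#")  # W3C PROV-O

    g = Graph()
    g.bind("ex", EX)
    g.bind("prov", PROV)

    judgment = EX.judgment1
    g.add((judgment, RDF.type, EX.Judgment))
    g.add((judgment, EX.conclusion, Literal("Action A is permissible")))
    g.add((judgment, EX.justifiedBy, EX.principleOfAutonomy))   # rationale
    g.add((judgment, PROV.wasGeneratedBy, EX.reasonerRun42))    # provenance

    print(g.serialize(format="turtle"))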

I also think about how to work on these important topics in the most pervasive ways, and how to advance them sensibly and ostensibly.


Best regards,
Adam Sobieski
http://www.phoster.com/contents/

https://www.w3.org/community/argumentation/

https://www.w3.org/community/collaboration/


From: Paola Di Maio <paola.dimaio@gmail.com>
Sent: Monday, November 28, 2016 4:24 AM
To: Adam Sobieski <adamsobieski@hotmail.com>
Cc: semantic-web@w3.org

Hey Adam

Thanks a lot for this note. It tackles an important topic, which I have been working on for some time, mostly trying to figure out how to tell a machine to be good. How to do that... an ontology and a bunch of rules should do, but...
Humanity has not yet been able to set a good example for machines.
On the other hand, machines can be simpler to programme than humanity.

But let me start by 'objecting' to the choice of the term 'moral'. I use the term 'ethical' and am inclined to think that it is a far wiser choice.
A simple argument is made here:
https://docs.google.com/presentation/d/1UylwnWzYWfITyTsNUctELncVxatYUKx74RjwtWgpyP4/edit?usp=sharing


Thoughts?

Secondly, I'd very much like to see addressed the relevance to the semantic web (and the web in general), and some suggestion of how to work on this important topic in the most pervasive way.
How to advance this topic sensibly and ostensibly.

Chirps

PDM



Paola Di Maio
about.me/paoladimaio





On Tue, Nov 22, 2016 at 1:03 AM, Adam Sobieski <adamsobieski@hotmail.com> wrote:
W3C Semantic Web Interest Group,

I would like to broach with you the interesting topic of automated moral reasoning.

Introduction

Automated reasoning is a branch of artificial intelligence dedicated to understanding different aspects of reasoning; moral reasoning is reasoning concerned with morality. Automated moral reasoning is a research topic pertaining to the understanding, modeling, and simulation of moral reasoning.

Moral Reasoning Systems and Education

Five varieties of moral reasoning systems with educational applications are indicated below.

Firstly, there is a variety of moral reasoning system with console-based or text-based user interfaces, possibly making use of custom programming languages. This variety requires some specialized expertise to use, resembling, perhaps, computer algebra systems, automated theorem provers, and proof assistants; a toy sketch of such a text-based interface appears after this list.

Secondly, there is a variety of moral reasoning system which interoperates with software applications requiring less specialized expertise to use, software whose users needn’t be computer programmers. Examples include decision support systems, software which supports individual or organizational decision-making activities.

Thirdly, there is a variety of moral reasoning system with natural language and multimodal user interfaces. This variety includes dialog systems, virtual humans, intelligent personal assistants, and intelligent tutoring systems. It can conveniently answer, discuss, and advise large numbers of users on the questions that they might ask, including in educational contexts.

Fourthly, there is a variety of moral reasoning system which interoperates with the processing and generation of stories, fables, parables, or exempla. This variety can be of use in processing the moral messages of literary texts and in generating literary texts which teach moral messages.

Fifthly, there is a variety of moral reasoning system which interoperates with interactive digital entertainment, serious games, simulations and learning environments. This variety interoperates with virtual interactive storytellers, virtual directors, drama managers, experience managers and other educational narrative technologies.
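As mentioned above, here is a toy sketch of the first variety, a text-based console loop; the evaluate function is a stand-in for an actual inference engine, and all names are illustrative:

    # Toy console interface for a moral reasoning system (stub reasoner).
    def evaluate(question: str, principles: list[str]) -> str:
        """Stand-in for a real inference procedure over loaded principles."""
        cited = ", ".join(principles) or "no loaded principles"
        return f"No verdict derivable from: {cited} (stub reasoner)."

    def main() -> None:
        principles = ["Do no harm", "Respect autonomy"]
        print("Moral reasoning console (type 'quit' to exit).")
        while True:
            question = input("?- ").strip()
            if question.lower() in {"quit", "exit"}:
                break
            print(evaluate(question, principles))

    if __name__ == "__main__":
        main()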

Automated Moral Reasoning and Planning

Automated planning and scheduling is a branch of artificial intelligence concerned with the realization of strategies or action sequences. Planning algorithms are often instrumental in generating the behavior of intelligent systems and robots.

Machine ethics, or computational ethics, is a part of the ethics of artificial intelligence concerned with the moral behavior of artificially intelligent systems. Moral reasoning components should be interoperable with planning and scheduling components.
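One way to picture that interoperability (a minimal sketch; Plan, generate_plans, and morally_permissible are illustrative names, not an existing library) is a planner whose candidate plans are screened by a moral reasoning component before execution:

    # Sketch: couple a planner with a moral reasoner that vetoes candidate plans.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        actions: list[str]

    def generate_plans(goal: str) -> list[Plan]:
        """Stand-in for an automated planner producing candidate action sequences."""
        return [Plan(["coerce", "deliver"]), Plan(["ask_consent", "deliver"])]

    def morally_permissible(plan: Plan, forbidden: set[str]) -> bool:
        """Stand-in for a moral reasoner rejecting plans with forbidden actions."""
        return not any(action in forbidden for action in plan.actions)

    def plan_ethically(goal: str) -> list[Plan]:
        forbidden = {"coerce", "deceive"}
        return [p for p in generate_plans(goal) if morally_permissible(p, forbidden)]

    print(plan_ethically("deliver the package"))  # only the consent-based plan remains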

Uses of planning are much broader than robotics. They extend into every sector: into industry, academia, science, the military and government, and into public policy. Combinations of planners and moral reasoning components can provide societal benefits transcending robotics and machine ethics.

Conclusion

Moral reasoning systems can provide broad societal benefits, including computer-aided moral reasoning, computer-aided authoring of literature, new tools for philosophy, law, the social sciences, and the digital humanities, new decision support and public policy technologies, and new tools for education.



References

Barber, Heather, and Daniel Kudenko. "Generation of Adaptive Dilemma-based Interactive Narratives." IEEE Transactions on Computational Intelligence and AI in Games 1, no. 4 (2009): 309-326.

Colyvan, Mark, Damian Cox, and Katie Steele. "Modelling the Moral Dimension of Decisions." Noûs 44, no. 3 (2010): 503-529.

French, Simon. Decision Theory: An Introduction to the Mathematics of Rationality. Halsted Press, 1986.

Goldin, Ilya M., Kevin D. Ashley, and Rosa L. Pinkus. "Introducing PETE: Computer Support for Teaching Ethics." In Proceedings of the 8th International Conference on Artificial Intelligence and Law, pp. 94-98. ACM, 2001.

Greco, Salvatore, J. Figueira, and M. Ehrgott. "Multiple Criteria Decision Analysis." Springer International Series (2005).

Harmon, Sarah. "An Expressive Dilemma Generation Model for Players and Artificial Agents." In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference. 2016.

Hodhod, Rania. "Interactive Narrative and Intelligent Tutoring for Ill-Defined Domains." (2008).

Hodhod, Rania, and Daniel Kudenko. "Interactive Narrative and Intelligent Tutoring for Ethics Domain." Intelligent Tutoring Systems for Ill-Defined Domains: Assessment and Feedback in Ill-Defined Domains. (2008): 13.

Hodhod, Rania, Daniel Kudenko, and Paul Cairns. "AEINS: Adaptive Educational Interactive Narrative System to Teach Ethics." In AIED 2009: 14th International Conference on Artificial Intelligence in Education Workshops Proceedings, p. 79. 2009.

Hodhod, Rania, Daniel Kudenko, and Paul Cairns. "Serious Games to Teach Ethics." AISB'09: Artificial and Ambient Intelligence (2009).

Lapsley, Daniel K. Moral Psychology. Westview Press, 1996.

Mancherjee, Kevin, and Angela C. Sodan. "Can Computer Tools Support Ethical Decision Making?" ACM SIGCAS Computers and Society 34, no. 2 (2004): 1.

McLaren, Bruce M. "Extensionally Defining Principles and Cases in Ethics: An AI Model." Artificial Intelligence 150, no. 1 (2003): 145-181.

McLaren, Bruce M. "Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions." IEEE intelligent systems 21, no. 4 (2006): 29-37.

Prakken, Henry, and Giovanni Sartor. "Law and Logic: A Review from an Argumentation Perspective." Artificial Intelligence 227 (2015): 214-245.

Rahwan, Iyad, Simon D. Parsons, and Nicolas Maudet. Argumentation in Multi-agent Systems. Springer-Verlag Berlin Heidelberg, 2010.

Robbins, Russell W., William A. Wallace, and Bill Puka. "Supporting Ethical Problem Solving: An Exploratory Investigation." In Proceedings of the 2004 SIGMIS Conference on Computer Personnel Research: Careers, Culture, and Ethics in a Networked Environment, pp. 134-143. ACM, 2004.

Saptawijaya, Ari, and Luís Moniz Pereira. "Towards Modeling Morality Computationally with Logic Programming." In International Symposium on Practical Aspects of Declarative Languages, pp. 104-119. Springer International Publishing, 2014.

Schrier, Karen. "EPIC: A Framework for Using Video Games in Ethics Education." Journal of Moral Education 44, no. 4 (2015): 393-424.

Sharipova, Mayya, and Gordon McCalla. "Supporting Students’ Interactions over Case Studies." In International Conference on Artificial Intelligence in Education, pp. 772-775. Springer International Publishing, 2015.

Tappan, Mark B., and Lyn Mikel Brown. "Stories Told and Lessons Learned: Toward a Narrative Approach to Moral Development and Moral Education." Harvard Educational Review 59, no. 2 (1989): 182-206.

Tappan, Mark B. "Hermeneutics and Moral Development: Interpreting Narrative Representations of Moral Experience." Developmental Review 10, no. 3 (1990): 239-265.

Tappan, Mark B., and M. Packer, eds. Narrative and Storytelling: Implications for Understanding Moral Development. New Directions for Child Development, no. 54. San Francisco: Jossey-Bass, 1991.

Vitz, Paul C. "The Use of Stories in Moral Development: New Psychological Reasons for an Old Education Method." American Psychologist 45, no. 6 (1990): 709.


https://www.w3.org/community/argumentation/2016/11/09/moral-reasoning-systems/


https://www.w3.org/community/argumentation/wiki/Main_Page#Ethical_Reasoning_Systems_and_Legal_Expert_Systems



Best regards,
Adam Sobieski
http://www.phoster.com/contents/

https://www.w3.org/community/argumentation/

https://www.w3.org/community/collaboration/

Received on Wednesday, 30 November 2016 04:51:28 UTC