Re: Moral Reasoning Systems

On one side there may be a feeling of horror: surely the essential
definition of a human being is that we are moral beings; moral reasoning
cannot be delegated to an unfeeling machine, nor should the attempt be made.
On the other side is the possibility that, in attempting to reason about our
moral sensibilities, we can come to understand them better, while the
reasoning may further be applied to complex fact gathering and assessment
that individuals or groups of individuals are unable to manage.
I believe both points stand.
Humans are the instigators and consumers of such reasoning artefacts.
But we know that.
We don't know when a moral imperative supervenes on logic.

Adam Saltiel

On Mon, 28 Nov 2016 at 15:49, John Flynn <jflynn12@verizon.net> wrote:

> Automating moral/ethical behavior by machines/software is a very
> complicated issue, just as it is for humans to define. However, the best
> fundamental rule I have seen is to take no action that would intentionally
> harm another. Generally, the term "another" applies to another human, but
> should it also apply to animals? If so, which animals - how about cows?
> Should it apply to other software programs? What if the person is really
> evil? What if the action harms one or a few people, but is done for the
> good of a larger number of people? Semantic modeling (ontologies) can
> formally (logically) represent domains of interest. Generally, the narrower
> the scope of the domain, the easier it is to model. Semantic modeling of
> something as complex as morals/ethics is extremely challenging. An
> interesting challenge would be to create a straw man morals/ethics ontology
> and make it available for review and comment so that it might be refined
> over time.
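
As a possible starting point for the straw-man challenge described above, here
is a minimal sketch in Python using rdflib. Every name in it (the namespace,
HarmfulAction, MoralPatient, harms, intentional, and so on) is a hypothetical
illustration chosen for this sketch, not an established vocabulary:

    # A minimal straw-man ethics ontology, sketched with rdflib.
    # All names below (namespace, classes, properties) are hypothetical.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/ethics#")  # hypothetical namespace

    g = Graph()
    g.bind("ex", EX)

    # Core classes: actions, agents, and the things that can be harmed.
    for cls in (EX.Action, EX.Agent, EX.MoralPatient, EX.HarmfulAction):
        g.add((cls, RDF.type, OWL.Class))
    g.add((EX.HarmfulAction, RDFS.subClassOf, EX.Action))

    # The open questions above become open modelling decisions:
    # which subclasses of MoralPatient do we admit?
    g.add((EX.Human, RDFS.subClassOf, EX.MoralPatient))
    g.add((EX.Animal, RDFS.subClassOf, EX.MoralPatient))  # cows? debatable
    # g.add((EX.SoftwareAgent, RDFS.subClassOf, EX.MoralPatient))  # programs?

    # Properties linking actions to agents and patients.
    g.add((EX.performedBy, RDF.type, OWL.ObjectProperty))
    g.add((EX.performedBy, RDFS.domain, EX.Action))
    g.add((EX.performedBy, RDFS.range, EX.Agent))
    g.add((EX.harms, RDF.type, OWL.ObjectProperty))
    g.add((EX.harms, RDFS.domain, EX.Action))
    g.add((EX.harms, RDFS.range, EX.MoralPatient))
    g.add((EX.intentional, RDF.type, OWL.DatatypeProperty))  # crude intent flag

    # Serialize for publication, review, and comment, as suggested above.
    print(g.serialize(format="turtle"))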
>
> John Flynn
>
> http://semanticsimulations.com
>
> From: paoladimaio10@gmail.com [mailto:paoladimaio10@gmail.com] On Behalf Of Paola Di Maio
> Sent: Monday, November 28, 2016 4:25 AM
> To: Adam Sobieski
> Cc: semantic-web@w3.org
> Subject: Re: Moral Reasoning Systems
>
> Hey Adam,
>
> Thanks a lot for this note. It tackles an important topic, which I have
> been working on for some time, mostly trying to figure out how to tell a
> machine to be good. How to do that... an ontology and a bunch of rules
> should do, but...
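
To make the "ontology and a bunch of rules" idea concrete, one hypothetical
rule over the straw-man vocabulary sketched above might flag intentional
harm. A self-contained sketch, again with invented names, with the rule
written as a SPARQL query:

    # A toy "rule" over the straw-man vocabulary: flag any action that
    # intentionally harms a moral patient. All names are hypothetical.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/ethics#")

    g = Graph()
    # One example fact: act1 harms patient1 and is flagged as intentional.
    g.add((EX.act1, RDF.type, EX.Action))
    g.add((EX.act1, EX.harms, EX.patient1))
    g.add((EX.act1, EX.intentional, Literal(True)))

    # The rule: intentional harm is forbidden.
    forbidden = g.query(
        """
        SELECT ?action WHERE {
            ?action ex:harms ?patient .
            ?action ex:intentional true .
        }
        """,
        initNs={"ex": EX},
    )
    for row in forbidden:
        print(f"forbidden by the no-intentional-harm rule: {row.action}")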
>
> Humanity has not yet been able to set a good example for machines.
>
> On the other hand, machines can be simpler to programme than humanity.
>
> But let me start by 'objecting' to the choice of the term 'moral'. I use the
> term 'ethical' and am inclined to think that it is a far wiser choice.
>
> Simple argument made here:
>
> https://docs.google.com/presentation/d/1UylwnWzYWfITyTsNUctELncVxatYUKx74RjwtWgpyP4/edit?usp=sharing
>
> Thoughts?
>
> Secondly, I'd very much like to see addressed the relevance to the semantic
> web (and the web in general), and some suggestion of how to work on this
> important topic in the most pervasive way.
>
> How to advance this topic sensibly and ostensibly?
>
> Chirps
>
> PDM

Received on Monday, 28 November 2016 16:21:18 UTC