RE: Moral Reasoning Systems

Moral / ethical degrees of 'correctness' could be inferred from aggregated
behavior (kind) contexts, which in turn may be aggregated into 'purpose'
contexts, so that something is regarded as 'correct' or 'incorrect'
relative to a behavior for a given purpose. This could be useful both to
keep systems from 'damaging' someone and to implement systems that
'discover' the 'right' methods or services for the 'moral' (correct)
completion of a given task.
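
A minimal sketch of that inference structure in Python (all names here are
illustrative, not from any existing library): a behavior kind is judged
'correct' only relative to a purpose context, so the same behavior can be
right for one purpose and wrong for another.

from dataclasses import dataclass

@dataclass(frozen=True)
class BehaviorKind:
    """A kind of behavior, aggregated from observed instances."""
    name: str

@dataclass(frozen=True)
class PurposeContext:
    """A purpose aggregating the behavior kinds acceptable for it."""
    name: str
    acceptable: frozenset  # names of behavior kinds judged correct here

def is_correct(behavior: BehaviorKind, purpose: PurposeContext) -> bool:
    # Correctness is relative: the judgment depends on the purpose,
    # not on the behavior alone.
    return behavior.name in purpose.acceptable

share_location = BehaviorKind("share_user_location")
navigation = PurposeContext("route_planning", frozenset({"share_user_location"}))
advertising = PurposeContext("targeted_ads", frozenset())

print(is_correct(share_location, navigation))   # True
print(is_correct(share_location, advertising))  # False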

RDF and ontologies can only tag URIs with behavior and purpose metadata /
vocabularies. This is a different approach (see attachment): it uses
metamodels for behavior inference, not only tagging.
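
To make the contrast concrete, this is roughly what pure tagging looks
like with rdflib (the vocabulary URIs here are invented for illustration):
the graph records behavior and purpose metadata about a URI, but the
correctness judgment still has to come from an inference layer on top.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/moral#")  # invented vocabulary

g = Graph()
g.bind("ex", EX)

# Tagging: assert behavior and purpose metadata about a service URI.
g.add((EX.locationService, EX.exhibitsBehavior, EX.shareUserLocation))
g.add((EX.locationService, EX.servesPurpose, EX.routePlanning))

# The graph records these facts, but nothing here infers whether
# shareUserLocation is 'correct' for routePlanning; that judgment
# needs a reasoning layer (rules, a metamodel) on top of the tags.
print(g.serialize(format="turtle"))

Regards,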

Sebastián.

On Nov 28, 2016 12:49 PM, "John Flynn" <jflynn12@verizon.net> wrote:

> Automating moral/ethical behavior by machines/software is a very
> complicated issue, just as it is for humans to define. However, the best
> fundamental rule I have seen is to take no action that would intentionally
> harm another. Generally, the term "another" applies to another human, but
> should it also apply to animals? If so, which animals - how about cows?
> Should it apply to other software programs? What if the person is really
> evil? What if the action harms one or a few people but is done for the
> good of a larger number of people? Semantic modeling (ontologies) can
> formally (logically) represent domains of interest. Generally, the narrower
> the scope of the domain, the easier it is to model. Semantic modeling of
> something as complex as morals/ethics is extremely challenging. An
> interesting challenge would be to create a straw man morals/ethics ontology
> and make it available for review and comment so that it might be refined
> over time.
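>
> As a purely illustrative sketch (the namespace and every class and
> property name below are invented, not an existing vocabulary), such a
> straw man could begin with rdflib along these lines:
>
> from rdflib import Graph, Namespace
> from rdflib.namespace import OWL, RDF, RDFS
>
> ETH = Namespace("http://example.org/ethics#")  # invented namespace
>
> g = Graph()
> g.bind("eth", ETH)
> g.bind("owl", OWL)
>
> # Core classes: agents perform actions, and actions may cause harm
> # to moral patients.
> for cls in (ETH.Agent, ETH.Action, ETH.Harm, ETH.MoralPatient):
>     g.add((cls, RDF.type, OWL.Class))
>
> # Subclassing MoralPatient makes the scope questions above explicit:
> # humans certainly; animals and software agents are open choices.
> for sub in (ETH.Human, ETH.Animal, ETH.SoftwareAgent):
>     g.add((sub, RDF.type, OWL.Class))
>     g.add((sub, RDFS.subClassOf, ETH.MoralPatient))
>
> # Properties linking actions to agents, patients, and harms.
> for prop, domain, rng in (
>     (ETH.performedBy, ETH.Action, ETH.Agent),
>     (ETH.affects, ETH.Action, ETH.MoralPatient),
>     (ETH.causes, ETH.Action, ETH.Harm),
> ):
>     g.add((prop, RDF.type, OWL.ObjectProperty))
>     g.add((prop, RDFS.domain, domain))
>     g.add((prop, RDFS.range, rng))
>
> print(g.serialize(format="turtle"))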
>
>
>
> John Flynn
>
> http://semanticsimulations.com
>
>
>
> *From:* paoladimaio10@gmail.com [mailto:paoladimaio10@gmail.com] *On
> Behalf Of *Paola Di Maio
> *Sent:* Monday, November 28, 2016 4:25 AM
> *To:* Adam Sobieski
> *Cc:* semantic-web@w3.org
> *Subject:* Re: Moral Reasoning Systems
>
>
>
> Hey Adam
>
>
>
> Thanks a lot for this note. It tackles an important topic, which I have
> been working on for some time, mostly trying to figure out how to tell
> machines to be good. How to do that... an ontology and a bunch of rules
> should do, but...
>
> Humanity has not yet been able to set a good example for machines.
>
> On the other hand, machines can be simpler to programme than humanity.
>
>
>
> But let me start by 'objecting' to the choice of the term 'moral'. I use
> the term 'ethical' and am inclined to think that it is a far wiser choice.
>
> Simple argument made here:
>
> https://docs.google.com/presentation/d/1UylwnWzYWfITyTsNUctELncVxatYUKx74RjwtWgpyP4/edit?usp=sharing
>
>
>
> Thoughts?
>
>
>
> Secondly, I'd very much like to see addressed the relevance to the
> semantic web (and the web in general), and some suggestion of how to work
> on this important topic in the most pervasive way.
>
> How to advance this topic sensibly and ostensibly
>
>
>
> Chirps
>
>
>
> PDM
>
>
>

Received on Monday, 28 November 2016 16:41:01 UTC