Re: Questions about Reasoner Accountability

You could have a look at https://www.w3.org/DesignIssues/Logic.html and
search for "proof", or at https://www.w3.org/DesignIssues/Rules.html and
search for "Oh yeah?".

To make it concrete, a Semantic Web reasoner like Cwm
(https://www.w3.org/2000/10/swap/doc/cwm)
can check the proofs produced by another reasoner like Eye
(https://josd.github.io/eye/).
For a simple example see
https://github.com/josd/eye/tree/master/reasoning/socrates
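
In case a concrete round trip helps to picture it, here is a minimal
sketch along the lines of that Socrates example (the file names and
the exact rule text are my own illustration, not necessarily what the
repository uses):

  # socrates.n3 -- illustrative data plus one rule
  @prefix : <http://example.org/#>.
  :Socrates a :Man.
  { ?S a :Man. } => { ?S a :Mortal. }.

  # socrates-query.n3 -- ask who is mortal (the usual Eye query idiom)
  @prefix : <http://example.org/#>.
  { ?S a :Mortal. } => { ?S a :Mortal. }.

Running

  eye socrates.n3 --query socrates-query.n3 > socrates-proof.n3

makes Eye emit not just the answer but a full proof in N3 (pass
--nope if you ever want the answer without the proof explanation),
and that proof file is what an independent checker on the Cwm side
can then verify step by step.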

Jos

PS: a bit related, though still in progress, is
http://josd.github.io/Talks/2022/06welding/#(1)

-- https://josd.github.io


On Sat, Jul 16, 2022 at 12:01 PM Chris Yocum <cyocum@gmail.com> wrote:

> Dear Semantic Web Community,
>
> I have written on this list before about my project but I wanted to
> bring up a particular problem that I have with reasoners that will
> require some background explanation before I can describe the problem.
>
> My project encodes some of the most important genealogies of medieval
> Ireland in RDF (git repo: https://github.com/cyocum/irish-gen, blog:
> https://cyocum.github.io/).  Because I am often the only person
> working on this, I use reasoners to extrapolate the often implicit
> information in the data.  This saves me much time and I only need to
> translate exactly what is in the source material.  I have discussed
> some of the problems that I have encountered a few years ago
> (https://lists.w3.org/Archives/Public/semantic-web/2018Dec/0088.html).
> I do not want to bring that back up but if someone is interested in
> any of those problems, please feel free to email me and I would
> happily discuss some of them with you.
>
> When I discuss some of the benefits of using a reasoner with my
> Humanities-based colleagues, one of the many questions that comes up
> is: how do I check that the reasoner has reasoned through this
> correctly?
> Essentially, this is about accountability.  "Computer says so" does not
> carry much weight.  If I cannot justify why a reasoner has made a
> certain choice when inferring predicates, some of the force of the
> system is lost.  Additionally, if I run a SPARQL query and the result
> that is returned is not what I had expected, having a "meta-query" of
> the reasoner can help me find bugs in my own data that I can track
> down and fix.  I do understand that I can always go back to the
> original source material and try to track down the error that way,
> but something like this would make it much easier in situations
> where the material is still in manuscript form and difficult to
> decipher.  Finally, this is also a trust problem.  People who do not
> work with computers at this level do not feel that they are in control
> and this raises their defences and prompts questions of this kind.
>
> To sum up, my questions are:
>
> * Does something like this make sense?
> * Does something like this already exist and I have not noticed it?
> * Are there other ways of doing something like this without needing
>   more code?
> * Is this something that is technically possible for reasoners? I
>   assume so, but getting expert advice is usually a good idea.
> * If the first two questions are in the negative: is there anyone in
>   the community working on something like this that I could
>   collaborate with? Even if it is just in a UAT style where I run my
>   dataset and send back any funny results.
>
> Thank you for reading and thanks in advance.
>
> All the best,
> Christopher Yocum
>
