modal logic, rdf and category theory

> On 1 Sep 2018, at 23:09, Larry Masinter <masinter@adobe.com> wrote:
> 
> Has anyone come across further developments in this space since then?
>  
> I don’t know if this is relevant, but I was looking for some treatment of trust and belief and found:
>  
> http://ceur-ws.org/Vol-1259/proceedings.pdf#page=83
yes, nice paper, though the web site supporting their ontology is no longer up.

>  
> I was looking for a framework which at the bottom modeled an utterance of an assertion as an attempt to change the belief of the receiver, and had some notion of trust and trustworthy sources, such that trust of a source is the function you apply to someone else’s assertions before incorporating those statements into your own beliefs.
>  
> Trust no one (completely), not even your own memory.
>  
> Mistrust can come from the source being misinformed, deliberately lying, poor logic, or a misunderstanding of the meaning of terms.
>  
> In this kind of theory, everything is relative, an individual’s “knowledge” is just “belief, held firmly”, and “facts” are “beliefs, held widely, by highly trustworthy sources”.

Yes, though one has to be careful here. If you go too far in the direction of relativity then anything goes:
truth escapes you, and with it knowledge. There is a way to be objective about knowledge while admitting
that it is relative to your position in the space of possible worlds. That is Nozick's analysis of knowledge in
"Philosophical Explanations", which I discuss in the paper "Epistemology in the Cloud" [1]. The advantage
is that there is a truth of the matter about what counts as knowledge, even though it may escape you, since
you may (and in many ways will) be wrong about the way the world is. On the other hand it makes debate about
knowledge possible, since one can debate whether a person in world W knows a proposition P by describing
that world and its neighbours...

Remember the definition of knowledge by Nozick was

S knows P iff
  • P is true
  • S believes P
  • if P were not true then S would not believe P
  • if P were true then S would believe P

Btw, I am not sure that such a counterfactual modal analysis would be useful
in RDF for calculation purposes, but knowing what is at play helps one understand
the limits of more restricted definitions of knowledge...
It allows one to know whilst admitting that one can never attain certainty, which is the best
way to combat both skepticism and blind faith.
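To see what such a counterfactual analysis involves, here is a toy possible-worlds sketch in Python. Everything here is a hypothetical illustration — the worlds, the closeness ordering (array order), and the horizon are invented for the example, not taken from Nozick's text:

```python
# Toy possible-worlds sketch of Nozick's tracking analysis of knowledge.
# Worlds are ordered by closeness to the actual world (index 0); each
# records whether P holds there and whether S believes P there.
worlds = [
    {"name": "actual", "P": True,  "believes_P": True},   # the actual world
    {"name": "near-1", "P": True,  "believes_P": True},   # a close variant
    {"name": "near-2", "P": False, "believes_P": False},  # closest not-P world
    {"name": "far-1",  "P": False, "believes_P": True},   # too far to matter
]

def closest(worlds, cond, horizon=3):
    """Yield the worlds within the horizon that satisfy cond."""
    for w in worlds[:horizon]:
        if cond(w):
            yield w

def nozick_knows(worlds):
    actual = worlds[0]
    # 1. P is true; 2. S believes P
    if not (actual["P"] and actual["believes_P"]):
        return False
    # 3. Sensitivity: in the closest not-P worlds, S does not believe P.
    sensitive = all(not w["believes_P"]
                    for w in closest(worlds, lambda w: not w["P"]))
    # 4. Adherence: in the closest P worlds, S still believes P.
    adherent = all(w["believes_P"]
                   for w in closest(worlds, lambda w: w["P"]))
    return sensitive and adherent

print(nozick_knows(worlds))  # True: S's belief tracks the truth of P
```

The point of the sketch is that whether S knows P is an objective fact about the structure of nearby worlds, even though S can never be certain which world they are in.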

There are practical reasons, especially in security, to be very careful about which
graphs one merges, and I am not sure I know the rules for this yet. But one can give examples.

Imagine an access control rule stating that a particular post can be read only by friends,
and by friends of friends if they are known by at least two friends.

1) Clearly it won't do to merge the access control rules with the graphs of all the friends, as that
would allow anyone to add an access control rule to their own profile, giving them access to potentially
all of your resources.
2) Nor do you want to merge all the graphs of all your friends, or else you enable anyone to add
foaf:knows relations to their own profile to make themselves look more popular.
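To make the point concrete, here is a minimal Python sketch. The profile URLs and agent names are hypothetical, each graph is modelled as a set of triples keyed by the URL it was fetched from, and the rule counts a foaf:knows claim only when it appears in the claimant's own graph:

```python
# Sketch of provenance-aware access control. Graphs are sets of
# (subject, predicate, object) triples, keyed by the URL they live at —
# i.e. named graphs. All URLs and names here are invented illustrations.

KNOWS = "foaf:knows"

graphs = {
    "https://owner.example/profile": {
        ("owner", KNOWS, "alice"),
        ("owner", KNOWS, "bob"),
    },
    "https://alice.example/profile": {
        ("alice", KNOWS, "carol"),
        ("alice", KNOWS, "mallory"),   # only one friend vouches for mallory
    },
    "https://bob.example/profile": {
        ("bob", KNOWS, "carol"),
    },
    # Mallory claims friendship in her own graph: this must not count.
    "https://mallory.example/profile": {
        ("owner", KNOWS, "mallory"),
    },
}

def home_graph(agent):
    # Hypothetical convention: an agent's claims count only from their own URL.
    return graphs.get(f"https://{agent}.example/profile", set())

def can_read(agent, owner="owner"):
    """Friends may read; friends of friends only if two friends vouch."""
    # The friend list is taken solely from the owner's own named graph.
    friends = {o for (s, p, o) in graphs["https://owner.example/profile"]
               if s == owner and p == KNOWS}
    if agent in friends:
        return True
    # A friend's foaf:knows claim counts only from that friend's own graph.
    vouches = sum(1 for f in friends if (f, KNOWS, agent) in home_graph(f))
    return vouches >= 2

print(can_read("alice"))    # True: direct friend
print(can_read("carol"))    # True: vouched for by alice and bob
print(can_read("mallory"))  # False: self-asserted claims are ignored
```

Note that naively merging all four graphs into one would erase exactly the provenance information the rule depends on: after a merge, mallory's self-asserted triple is indistinguishable from one the owner asserted.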

So one needs to know where a graph came from to be able to do certain types of verification. Two
RDF graphs may be isomorphic, yet it matters where they are located on the web. Call a graph together with
its location a named graph. A graph's location on the web gives it a very different
position in the web of links to and from other graphs.

Can one speak about content, and so about things that need interpretation? Yes, one can have relations
like

{ <#i> :trust <https://www.w3.org/People/Berners-Lee/card> . }

that is, a relation to a resource that changes, and so to a stream of representations.

If one wanted to state that one trusted only a single representation from that stream, one could describe it with a blank
node, state where it came from, and give its hash.

{ <#i> :trust [ :hash "cafebabe";
                :source <https://www.w3.org/People/Berners-Lee/card> ] . }

or one could include the content directly, quoted, as in N3 or N-Quads.
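The hash-pinning idea can be sketched in Python. The representation bytes below are invented for illustration, and SHA-256 stands in for whatever hash function the :hash relation would actually specify:

```python
# Sketch: trusting one specific representation of a changing resource
# by pinning its hash, rather than trusting the resource as a whole.

import hashlib

def sha256_hex(representation: bytes) -> str:
    """Hex digest identifying one representation from the stream."""
    return hashlib.sha256(representation).hexdigest()

def trust_representation(representation: bytes, pinned_hash: str) -> bool:
    """Accept the fetched bytes only if they hash to the pinned value."""
    return sha256_hex(representation) == pinned_hash

# Suppose we once fetched this representation and recorded its hash:
original = b'<#i> foaf:name "Tim Berners-Lee" .'
pinned = sha256_hex(original)

print(trust_representation(original, pinned))                      # True
print(trust_representation(b'<#i> foaf:name "Mallory" .', pinned)) # False
```

If the resource later serves different bytes, the check fails and the trust assertion simply no longer applies, which is exactly the behaviour the blank-node description above encodes.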

There is clearly a need for such an ontology of trust, if only for software to keep track of which graphs
should be merged to produce a user interface for the user.

Thanks for the pointer,

 Henry

[1] https://medium.com/@bblfish/epistemology-in-the-cloud-472fad4c8282


>  
> Larry
> --
> https://LarryMasinter.net

Received on Sunday, 2 September 2018 16:38:17 UTC