- From: Henry Story <henry.story@bblfish.net>
- Date: Wed, 29 Jul 2020 22:30:22 +0200
- To: Patrick Hayes <phayes@ihmc.us>
- Cc: thomas lörtsch <tl@rat.io>, Antoine Zimmermann <antoine.zimmermann@emse.fr>, Maxime Lefrançois <maxime.lefrancois@emse.fr>, "Shaw, Ryan" <ryanshaw@unc.edu>, Hugh Glaser <hugh@glasers.org>, Semantic Web <semantic-web@w3.org>
> On 29 Jul 2020, at 18:52, Patrick J Hayes <phayes@ihmc.us> wrote:
>
>> On Jul 28, 2020, at 5:53 AM, Henry Story <henry.story@bblfish.net> wrote:
>>
>>> On 28 Jul 2020, at 11:37, thomas lörtsch <tl@rat.io> wrote:
>>>
>>>> On 28. Jul 2020, at 00:40, Antoine Zimmermann <antoine.zimmermann@emse.fr> wrote:
>>>>
>>>> Le 27/07/2020 à 23:54, thomas lörtsch a écrit :
>>>>>> On 27. Jul 2020, at 20:56, Antoine Zimmermann <antoine.zimmermann@emse.fr> wrote:
>>>>>>
>>>>>> Le 27/07/2020 à 18:52, Maxime Lefrançois a écrit :
>>>>>>> If we imagine datatypes that encode RDF graphs,
>>>>>>
>>>>>> Ivan Herman drafted a document a while ago that does exactly that:
>>>>>>
>>>>>> https://www.w3.org/2009/07/NamedGraph.html#definition-of-graph-literals
>>>>>>
>>>>>> I even think that, in some cases, it could be of some usefulness, but the kinds of use cases are so niche, and the idea of encoding RDF graphs inside literals in other RDF graphs is so disturbing to the homo semanticus, that there are chances it will never get traction.
>>>>> For graphs that contain only one triple it's really not very different from what RDF* does, is it?
>>>>
>>>> I don't pretend to have an in-depth knowledge of RDF*, but I've read the papers specifying RDF* with sufficient attention to say that it is not the case.
>>>>
>>>> The following triple (using Ivan's specification of graph literals):
>>>>
>>>> <s> <p> "<subject> <predicate> <object>"^^rdfl:GraphLiteral .
>>>>
>>>> is one RDF triple. It conforms to the RDF standards.
>>>>
>>>> In RDF*, this:
>>>>
>>>> <s> <p> << <subject> <predicate> <object> >> .
>>>>
>>>> is not an RDF triple. According to one of the papers about RDF*, the previous "triple" could be understood as syntactic sugar for a reified triple, like so:
>>>>
>>>> <s> <p> [
>>>>   rdf:subject <subject>;
>>>>   rdf:predicate <predicate>;
>>>>   rdf:object <object>
>>>> ] .
>>>>
>>>> but another paper says it could be interpreted differently. In any case, the power of RDF* is probably in its accompanying query language SPARQL*, where you can ask:
>>>>
>>>> SELECT ?x WHERE {
>>>>   <s> <p> << <subject> <predicate> ?x >> .
>>>> }
>>>>
>>>> You can't do this with a literal, unless you use regular expressions and filters.
>>>
>>>> In any case, RDF* is a different data model, while a graph literal is just a way of using the RDF data model to include graphs as values in the domain of discourse.
>>>
>>> I agree with most of what you say, but if you squint a little what you see is that both approaches repeat the whole, long triple, with thin wrappers around it. That's the similarity I referred to. I don't know about the homo semanticus in general, but what shocked me about RDF* in the first place was this verbosity of citing the whole triple verbatim. But a lot of people seem not to be bothered, and so I thought: if the sheer length of the node is not an issue, then why not reuse datatypes.
>>>
>>> Meta modelling introduces a break in the space of discourse, and so far I haven't seen an approach that can implement it in RDF without some break in the RDF space either. To me the question is rather: which break makes the most sense. If, as Henry argues, citing is the right way to meta model in RDF, then implementation details - if not quite insurmountable - would rather be a minor concern to me. I.e. just as you can process an rdf:XMLLiteral with genuine XML machinery, you could reuse genuine RDF machinery to process an rdf:Turtle literal.
>>
>> I think the work has been done: we call these named graphs. True, they have not yet been given a formal semantics.
>
> Ahem. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3199260

I meant: an agreed-upon semantics that has made its way into the standard. But it may be that no more is required than what graph literals need: two graphs are the same when they are isomorphic. (Mhh, I thought I had referenced it in my second-year report. I wrote that up too quickly to get everything in; I should in the final thesis. I do reference your Context Mereology and IKRIS.)

>> Perhaps Category Theorists could give us the formal basis of this well-known phenomenon by showing how these concepts map to all the preferred ones from the various intellectual traditions that came to consensus on the standards here. In the philosophy of language and in philosophical logic, it is known as the opaqueness of belief contexts, or intensionality.
>
> Named graphs are not opaque, and should not be given an opaque semantics, because 'cool URIs' should have the same meaning everywhere. The Superman / morning star examples should not arise in a Web context.

There are different levels of opaqueness. The opaqueness I was thinking of here is opaqueness to an OWL reasoner. Such a reasoner would not know to treat named graphs any differently from other literals, such as string literals or rdf:XMLLiterals. The OWL reasoner only needs to be able to tell when two such literals are equal. It is true that equality on strings is a lot simpler to calculate than equality on XML literals, which is itself easier to calculate than equality on graph literals: in the case of graphs it requires testing for graph isomorphism. But the OWL reasoner does not have the tools to consider the consequences of what is said within the graph. A graph literal is an object like any other to the OWL reasoner.

So I also agree that cool URIs should have the same meaning everywhere. That is the default language game we play on the web. This is actually interesting, because it may help explain Robert Brandom's argument against representationalism and for inferential pragmatics. I think your argument in the context of the Superman and Morning Star examples is the referential one: if a name refers to something, it must necessarily refer to that thing ("Naming and Necessity" - a delight to read). But when we are modeling what others believe, we are thinking of the inferences they can make from the information they have. And so in the example of Laura Lane below, she does not know that Clark Kent is Superman, and so she cannot come to the conclusion that the journalist Kent can fly. So we need to look at what people do with graphs and URIs, without losing sight of inferential consequences. That is where a certain opacity comes in: we don't want their context to merge with ours.

One interesting argument from functional programming is that one can use monads for such a context. Functional programmers also want referential transparency, but they also need to deal with context: Futures, Promises, IO, etc. Using the Curry-Howard isomorphism, Martin Abadi maps the logic of "saying that" to monads in this paper: https://www.sciencedirect.com/science/article/pii/S1571066107000746
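To make that "context" point concrete in RDF terms, here is a minimal TriG sketch of keeping someone else's claims in their own named graph rather than merging them into our own assertions. The names :LauraLane, :believes and the graph name :llBeliefs are made up for illustration, and of course what a named graph is taken to assert is exactly the open question of this thread:

@prefix : <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# our own assertions, in the default graph
:ClarkKent owl:sameAs :Superman .
:LauraLane :believes :llBeliefs .

# what Laura Lane says, quoted but not asserted by us
:llBeliefs {
  :Superman a :FlyingBeing .
}

A plain SPARQL engine (no OWL entailment) loading this as a dataset will answer ASK { GRAPH :llBeliefs { :ClarkKent a :FlyingBeing } } with false: the owl:sameAs in the default graph does not leak into the named graph unless we deliberately merge the two. That is the kind of opacity I mean.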
>> (It looks like Monads could be what is needed.) I remember learning about referential transparency and opaqueness in my 2nd-year undergraduate Philosophy courses at King's College, London in the late 1980s. An example often used was that one cannot infer from
>>
>> LauraLane believes { Superman a FlyingBeing }
>>
>> that
>>
>> LauraLane believes { ClarkKent a FlyingBeing }
>>
>> even though the person writing that statement has asserted in the DB that
>>
>> ClarkKent = Superman
>>
>> We can deduce what others believe or should believe only by taking statements/graphs of what they believe and merging them with other things they believe, plus the rules of logic. (This is idealized, as some people may be bad reasoners; hence the *should*.)
>>
>> One can do this as David Lewis, Kripke, Hintikka and others did, by reaching for possible worlds. Some like this metaphysical approach (it helped me a lot). But one can just as well do it inferentially and pragmatically; an approach that would be more appropriate for the Semantic Web community.
>>
>> Here, for deep reading, one can turn to Prof Robert Brandom's Analytic Pragmatism; he builds his whole philosophy of language on this aspect of "saying that". The philosophical starting point is similar to Quine's: that the only way we have to get a grasp of meaning is to start from what others say and do. (And saying is a form of doing.) Brandom adds that essential to this is also the game of giving and asking for reasons, which builds on being able to infer from what someone says what the consequences are, and being able to hold them to account. This game is built on the ability to keep track of who said what, when; and also what information they retracted.
>>
>> On the Web this "saying that" needs to be thought of in terms of publishing documents (at URIs). Those who publish become thereby responsible for what they publish (in the sense that we should be able to point out errors, and hold them to account for not fixing them).
>
> Yes, but not all publications need be assertions. What is needed is a system to allow (traceable, secure) warrants of a publication. For details see
>
> https://www.researchgate.net/publication/234804495_Named_Graphs_Provenance_and_Trust
>
> the semantics of this, interestingly, require the notion of a speech act. Austin meets the semantic web, a high point in my semantic career :-)

Ah, I had missed that citation in there. Yes, speech acts are key to understanding linked data. One can see HTTP GET, PUT, POST and DELETE as document acts.

> It still doesn't need opacity, though.
>
> BTW, on opacity, in the IKL project we avoided opacity in the basic logic by introducing explicit contexts for names - basically, subscripting names with the name of who used that name. Superman comes out looking like this:
>
> LL believes ( 'Superman'\LL a flyer )
> LL believes ( 'KKent'\LL not a flyer )
> KKent = Superman
>
> with no problem of reference, and we can usefully add
>
> KKent =/= 'Superman'\LL
>
> to just say that KKent isn't who Louis thinks he is, even though Superman = 'Superman'\LL .
>
> See https://www.slideshare.net/PatHayes/ikl-presentation-for-ontolog slide 15 et seq for details and more complicated examples.
>
> This seems a little odd until one gets used to it, but it is WAY easier to say complicated things about name usage than the conventional opaque-name convention. I challenge anyone to try doing the Lacrosse example in a conventional modal logic of belief.
>
> This idea of 'contextual name' maps naturally into the RDF world as a kind of typed literal, but the 'type' is a context name rather than a datatype.
> (Note, in RDF 1.1, datatype names can be any URI; they don't have to be 'recognized'.) For named graphs it would be the name of the graph. So if a graph named ex:graph uses a URI ex:thing in some way that might be different from the norm, then, reporting this from outside, as it were, one should rewrite ex:thing as 'ex:thing'@ex:graph, which refers (everywhere) to whatever ex:graph was using ex:thing to refer to. Then if you later discover that the graph was using it properly, you can just say
>
> ex:thing owl:sameAs 'ex:thing'@ex:graph .

I think Robert Brandom has an argument that the logic of something seeming to be so and so (e.g. to have a color) is dependent on the logic of its being so and so (having a color). One first has to learn the language game of colors before one can learn that of something seeming to have a color. I think something like this may be true of your IKL proposal. If we start off by distinguishing each name in each context, then it becomes difficult to establish meaning. So I think the reverse route is better: start with the game of URIs having the same meaning, and only in special circumstances apply SPARQL transforms before working out what an RDF graph is getting at.

> Pat
>
>> We keep track of who said what by placing our data in a quad store. This allows us to later work out what to fix if we find a problem: who to notify of an error, who to blame, who to be wary of, etc.
>>
>> Essentially this all works without making changes to the basic RDF reasoning, since it just tells us when to merge two graphs and which graphs are consequences of which others. We just need to add the ability to distinguish when we are merging graphs in order to model what others should believe, and when we are merging graphs of what *we* believe (or the software agent doing this for us). But the reasoning is the same in both cases. And it has to be, because others wanting to predict how we will act, what we will say, or what we should be held accountable for, will want to use the exact same logic.
>>
>> On the Web everyone can say everything, so we MUST be able to play this game of quotation and disquotation. The architecture of the Web and the project of the Semantic Web impose this. And literal graphs (mapped for ease of use to named graphs, SPARQL GRAPHs or N3 graphs) give us the basics: a way to assert what others have asserted without that statement contaminating our knowledge base. This is essential for being able to build Guards that can decide when to give someone access to a resource: they cannot just take what the agent wanting access tells them at face value.
>>
>> It looks like datatypes are useful for many other reasons too, as we saw for units. For an extensive literature review see my 2nd-year report on this topic of how linked data, pragmatics, monads and security come together.
>>
>> http://co-operating.systems/2019/04/01/PhD_second_year_report.pdf
>>
>>> But I have to admit that I might not take literals seriously enough. Maybe it's a no good, very bad idea to bend them that much.
>>>
>>> :TL
>>>
>>>> --AZ
>>>>
>>>>> TL_
>>>>>
>>>>>> —AZ
>>
>> Henry Story
>>
>> https://co-operating.systems
>> WhatsApp, Signal, Tel: +33 6 38 32 69 84
>> Twitter: @bblfish

Henry Story

https://co-operating.systems
WhatsApp, Signal, Tel: +33 6 38 32 69 84
Twitter: @bblfish
Received on Wednesday, 29 July 2020 20:30:41 UTC