Attempting (vainly?) to change the subject was: Blank nodes must DIE! [ was Re: Blank nodes semantics - existential variables?]

> On 28 Jul 2020, at 11:37, thomas lörtsch <tl@rat.io> wrote:
> 
> 
> 
>> On 28. Jul 2020, at 00:40, Antoine Zimmermann <antoine.zimmermann@emse.fr> wrote:
>> 
>> Le 27/07/2020 à 23:54, thomas lörtsch a écrit :
>>>> On 27. Jul 2020, at 20:56, Antoine Zimmermann <antoine.zimmermann@emse.fr> wrote:
>>>> 
>>>> Le 27/07/2020 à 18:52, Maxime Lefrançois a écrit :
>>>>> If we imagine datatypes that encode RDF graphs,
>>>> 
>>>> Ivan Herman drafted a document a while ago that does exactly that:
>>>> 
>>>> https://www.w3.org/2009/07/NamedGraph.html#definition-of-graph-literals
>>>> 
>>>> 
>>>> I even think that, in some cases, it could be of some usefulness, but the kinds of use cases are so niche, and the idea of encoding RDF graphs inside literals in other RDF graphs is so disturbing to the homo semanticus that there are chances it will never get traction.
>>> For graphs that contain only one triple it’s really not very different from what RDF* does, is it?
>> 
>> I don't pretend to have an in-depth knowledge of RDF*, but I've read the papers specifying RDF* with sufficient attention to say that it is not the case.
>> 
>> The following triple (using Ivan's specification of graph literals):
>> 
>> <s> <p> "<subject> <predicate> <object>"^^rdfl:GraphLiteral .
>> 
>> has one RDF triple. It conforms to the RDF standards.
>> 
>> In RDF*, this:
>> 
>> <s> <p> << <subject> <predicate> <object> >> .
>> 
>> is not an RDF triple. From one of the papers about RDF*, the previous "triple" could be understood as syntactic sugar for a reified triple, like so:
>> 
>> <s> <p> [
>> rdf:subject <subject>;
>> rdf:predicate <predicate>;
>> rdf:object <object>
>> ]
>> 
>> but another paper says it could be interpreted differently. In any case, the power of RDF* is probably in its accompanying query language SPARQL*, where you can ask:
>> 
>> SELECT ?x WHERE {
>> <s> <p> << <subject> <predicate> ?x >> .
>> }
>> 
>> You can't do this with a literal, unless you use regular expressions and filters.
> 
>> In any case, RDF* is a different data model, while graph literal is just a way of using the RDF data model to include graphs as values in the domain of discourse.
> 
> I agree with most of what you say, but if you squint a little what you see is that both approaches repeat the whole, long triple, with thin wrappers around it. That’s the similarity I referred to. I don’t know about the Homo Semanticus in general but what shocked me about RDF* in the first place was this verbosity of citing the whole triple verbatim. But a lot of people seem not to bother and so I thought: if the sheer length of the node is not an issue, then why not reuse datatypes.
> 
> Meta modelling introduces a break in the space of discourse and so far I haven’t seen an approach that can implement it in RDF without some break in the RDF space either. To me the question is rather: which break makes the most sense. If, as Henry argues, citing is the right way to meta model in RDF, then implementation details - if not quite insurmountable - would rather be a minor concern to me. I.e. like you can process an rdf:XMLLiteral with genuine XML machinery you could reuse genuine RDF machinery to process an rdf:Turtle literal.

I think the work has been done: we call these named graphs. True, they
have not yet been given a formal semantics. Perhaps Category Theorists
could give us the formal basis of this well-known phenomenon by showing
how these concepts map to all the preferred ones from the various
intellectual traditions that came to consensus on the standards here. In
the philosophy of language and in philosophical logic it is known as the
opaqueness of belief contexts, or intensionality. (It looks like Monads
could be what is needed.) I remember learning about referential
transparency and opaqueness in my 2nd-year undergraduate philosophy
courses at King's College London in the late 1980s. An example often
used was that one cannot infer from

LoisLane believes { Superman a FlyingBeing }

that

LoisLane believes { ClarkKent a FlyingBeing }

even though the person writing that statement has asserted in the DB that

ClarkKent = Superman

We can deduce what others believe, or should believe, only by taking
statements/graphs of what they believe and merging them with other things
they believe, plus the rules of logic. (This is idealized, as some people
may be bad reasoners, hence the *should*.)
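
To make the example concrete, here is a minimal sketch in TriG of how a
dataset might keep Lois Lane’s beliefs in a named graph of their own
(the IRIs and the :believes predicate are made up for illustration). An
application that treats the named graph as a quoted context will not
apply the owl:sameAs statement inside it, which is exactly the opacity
we want:

@prefix : <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# what we ourselves assert (the default graph)
:ClarkKent owl:sameAs :Superman .
:LoisLane :believes :LoisBeliefs .

# what Lois believes, quoted here but not asserted by us
:LoisBeliefs {
    :Superman a :FlyingBeing .
}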

One can do this as David Lewis, Kripke, Hintikka and others did, by
reaching for possible worlds. Some like this metaphysical approach (it
helped me a lot). But one can just as well do it inferentially and
pragmatically, an approach that would be more appropriate for the
Semantic Web community.

For deeper reading here, one can turn to Prof Robert Brandom’s Analytic
Pragmatism; he builds his whole philosophy of language on this aspect of
“saying that”. The philosophical starting point is similar to Quine’s:
that the only way we have to get a grasp of meaning is to start from what
others say and do. (And saying is a form of doing.) Brandom adds that
essential to this is also the game of giving and asking for reasons,
which builds on being able to infer from what someone says what the
consequences are, and being able to hold them to account. This game is
built on the ability to keep track of who said what, and when; and also
what information they retracted.

On the Web this “saying that” needs to be thought of in terms of
publishing documents (at URIs). Those who publish thereby become
responsible for what they publish (in the sense that we should be able
to point out errors, and hold them to account for not fixing them). We
keep track of who said what by placing our data in a quad store. This
allows us to later work out what to fix if we find a problem: who to
notify of an error, who to blame, who to be wary of, etc…
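
As a rough sketch (the graph names and data are invented), a quad store
makes the question of who said what directly queryable:

PREFIX : <http://example.org/>

SELECT ?source WHERE {
  # every graph (that is, every published document we have loaded)
  # that claims Clark Kent can fly
  GRAPH ?source { :ClarkKent a :FlyingBeing }
}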

Essentially this all works without making changes to basic RDF reasoning,
since it just tells us when to merge two graphs and which graphs are
consequences of which others. We just need to add the ability to
distinguish when we are merging graphs in order to model what others
should believe, and when we are merging graphs of what *we* believe (or
what the software agent doing this for us believes). But the reasoning is
the same in both cases. And it has to be, because others wanting to
predict how we will act, what we will say, or what we should be held
accountable for, will want to use the exact same logic.
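
In SPARQL Update terms the difference is just where a merge lands. For
instance, keeping someone’s graph quoted until we decide to endorse it
might look like this (the graph IRI is hypothetical):

# while we merely track what this document says, we only query it
# inside its own named graph; once we endorse it, we merge it into
# our own (default) graph:
ADD GRAPH <http://alice.example/doc> TO DEFAULT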

On the Web everyone can say everything, so we MUST be able to play this
game of quotation and disquotation. The architecture of the Web and the
project of the Semantic Web impose this. And literal graphs (mapped for
ease of use to named graphs, SPARQL GRAPHs or N3 graphs) give us the
basics: a way to record what others have asserted without those
statements contaminating our knowledge base. This is essential for being
able to build Guards that can decide when to give someone access to a
resource: they cannot just take what the agent wanting access tells them
at face value.
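
For instance, a Guard might keep the requesting agent’s claims in a graph
of their own and only grant access when a source it already trusts backs
them up. A sketch, with all IRIs hypothetical:

PREFIX : <http://example.org/>

ASK WHERE {
  # the client claims to be a member of staff ...
  GRAPH :clientClaims { :client :memberOf :staff }
  # ... and a source the guard trusts says so too
  GRAPH :trustedHR    { :client :memberOf :staff }
}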

It looks like datatypes are useful for many other reasons too, as we saw
for units. For an extensive literature review, see my 2nd-year report on
this topic of how linked data, pragmatics, monads and security come
together.


http://co-operating.systems/2019/04/01/PhD_second_year_report.pdf

> 
> But I have to admit that I might not take literals seriously enough. Maybe it’s a no good, very bad idea to bend them that much.
> 
> :TL
> 
> 
> 
>> --AZ
>> 
>>> TL_
>>>> 
>>>> —AZ
>>>> 
>> 
> 
> 

Henry Story

https://co-operating.systems
WhatsApp, Signal, Tel: +33 6 38 32 69 84‬ 
Twitter: @bblfish

Received on Tuesday, 28 July 2020 10:53:44 UTC