
Re: Sloppy inference rules

From: Andy Seaborne <andy.seaborne@epimorphics.com>
Date: Wed, 07 Nov 2012 11:32:06 +0000
Message-ID: <509A46B6.9060404@epimorphics.com>
To: public-rdf-wg@w3.org

On 06/11/12 12:21, Nathan wrote:
> Steve Harris wrote:
>> On 2012-11-05, at 23:04, Pat Hayes wrote:
>>> On Nov 5, 2012, at 6:15 AM, Steve Harris wrote:
>>>> On 2012-11-01, at 09:50, Markus Lanthaler wrote:
>>>>> On Thursday, November 01, 2012 6:56 AM, Ivan Herman wrote:
>>>>>> As Antoine notes, the OWL 2 group has faced the same issue for OWL 2
>>>>>> RL. I do not see any problem doing that in this case either. I do not
>>>>>> think we should reopen, at this point, the bnode-in-predicate and
>>>>>> literal-in-subject issue and, with this, using this 'generalized
>>>>>> triples for the rules' seems to be the clean approach...
>>>>> Honestly it sounds a bit strange to me to simply accept that there is
>>>>> a fundamental problem without trying to address it - especially
>>>>> considering that the problem has been known since at least 2005 (2002?).
>>>>> The other thing that worries me even more is the fact that a number of
>>>>> RDF serialization formats are in the process of being standardized
>>>>> right now. At least JSON-LD doesn't have this artificial restriction,
>>>>> but that was heavily criticized by the RDF WG and, as it seems at the
>>>>> moment, we will have to introduce it.
>>>>> I think there won't be a better point in time to fix this once and
>>>>> for all.
>>>> It is a matter of opinion that there is anything broken to "fix".
>>> True. Let me try to explain why this current situation seems
>>> brain-damaged to anyone with logical training. A well-built logic
>>> does more than allow you to state facts: it supports inference rules
>>> (or sometimes, inference machinery of a different kind, but inference
>>> machinery all the same) which allow you to derive facts from other
>>> facts. Rules typically interact and support one another, by the
>>> outputs (conclusions) of some rules being usable as inputs to other
>>> rules, so that chains of reasoning can be supported, sometimes quite
>>> complicated chains of reasoning. Ideally, the rules should exactly
>>> "capture" the logic's own semantic notion of entailment, so that some
>>> sentences entail another just when it can be derived from them by
>>> applying the rules.
>>> RDF syntax, however, doesn't let you do this. There are RDF graphs
>>> which entail others, but the obvious rule derivation is blocked
>>> because the 'intermediate' sentences needed to make the rules connect
>>> properly are deemed illegal, even though they actually make semantic
>>> sense and indeed would be true under the logic's own semantic rules,
>>> and are needed in order for the rules to work properly on the "legal"
>>> sentences. Which is brain-damaged :-)
>> Sure, I understand that point of view, though that's a nice, succinct
>> summary of it. I've built a number of inferencing engines, and you
>> butt your head against this problem every now and again.
>> The counter to that though is more of a human factors thing: if we
>> allowed literals as subjects in triples then people would use them as
>> identifiers. It's familiar from the DB world, and not obviously wrong
>> to people who don't grok "Linked Data".
>> Sometimes it's harmless, e.g.
>>    23765 a :Integer .
>> Other times it's not harmless:
>>    23765 a :Widget .
>> Other times it's even hard to demonstrate that it's a bad idea:
>>    "8d8b0e54-6b8f-43ab-aff9-26a7a12890a0" a :LogEntry .
>> It's not speculation: I've heard people complain that they can't use
>> integers to identify e.g. people, and have to stick a URI prefix on
>> the front.
>> We'd have the same issues with lexical "tags", and other things that
>> are identifiers in some defined context.
> People can already make masses of "mistakes" when using RDF, so why is
> this particular "mistake" given more consideration than any other?
> This is a technical argument vs an opinion. People should be free to
> make their own "mistakes".
> Personally I feel the technical arguments for literals as subjects are
> very strong; the only counter-argument I can see with any technical
> weight is that not all serializations could (nicely) support them,
> although they could still support them via reification.
> As for inconvenience, if literals as subjects were to be allowed, who
> would really, demonstrably be inconvenienced? Anybody but implementers?

And applications.

The restriction has been around a long time; application code has been 
written with that restriction assumed.

APIs have been written with that restriction assumed.  Change the API, 
break the applications.

And data.

Data models have been developed with that in mind.

And applications again, which break when new data enters existing systems.

We have what we have - start again or move on?


PS And no RDF/XML.  OK - not all bad.

> Best,
> Nathan
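
[Editor's note: a minimal forward-chaining sketch of the problem Pat
describes, for readers outside the WG. The triple encoding, names, and
data are illustrative, not from the thread; the rule itself is OWL 2 RL's
real prp-inv1 ("if p owl:inverseOf q and (s, p, o), then (o, q, s)"),
which produces a "generalized" triple with a literal subject whenever the
input triple has a literal object.]

```python
# Sketch only: shows a sound rule deriving a triple that strict RDF
# syntax cannot express (a literal in the subject position).

OWL_INVERSE_OF = "owl:inverseOf"

def is_literal(term):
    # Literals are encoded here as quoted strings, e.g. '"Widget One"'.
    return term.startswith('"')

def is_strict_rdf(triple):
    # Strict RDF forbids literals in the subject position.
    s, _p, _o = triple
    return not is_literal(s)

def prp_inv1(triples):
    """OWL 2 RL prp-inv1: if (p owl:inverseOf q) and (s p o), derive (o q s)."""
    inverses = [(p, q) for p, pred, q in triples if pred == OWL_INVERSE_OF]
    derived = set()
    for s, pred, o in triples:
        for p, q in inverses:
            if pred == p:
                derived.add((o, q, s))
    return derived

data = {
    (":label", OWL_INVERSE_OF, ":labelOf"),
    (":widget1", ":label", '"Widget One"'),
}

derived = prp_inv1(data)
# The rule fires and yields ('"Widget One"', ':labelOf', ':widget1'):
# semantically true under the inverse-property semantics, but not legal
# RDF syntax, so a strict engine must either drop it or work around it.
blocked = {t for t in derived if not is_strict_rdf(t)}
```

The 2004 RDF Semantics rule set dodged exactly this by allocating a
surrogate blank node for each literal (rules lg/gl) rather than ever
putting the literal itself in subject position; "generalized triples for
the rules", as discussed above, is the alternative of simply permitting
such intermediate triples inside the reasoner.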
Received on Wednesday, 7 November 2012 11:32:40 UTC
