RE: Input needed from RDF group on JSON-LD skolemization

On Tuesday, July 02, 2013 7:01 PM, David Booth wrote:
[...]
> > I don't think so. It may be the author of the document who decides to
> > just expose parts of a JSON-LD document as "RDF". I anticipate that
> > there will be quite some APIs that will gradually transform the JSON
> > APIs to JSON-LD APIs. Without allowing bnode-predicates this becomes
> > considerably harder to do as the example illustrates.
> 
> Okay, I think I now see what you mean.  You are talking about a
> situation in which an author is incrementally migrating from JSON to
> JSON-LD.  If that is correct, then I can see why the author may not
> want
> to take the time to look up and specify an appropriate context URI for
> each property.
> 
> But why couldn't the author just treat the properties as relative URIs
> instead of blank nodes?  I.e., why not do something like this instead:
> 
>  >>> {
>  >>>     "@context": {
>  >>>       "@vocab": "http://example/",
>  >>>       "name": "http://xmlns.com/foaf/0.1/name",
>  >>>       ...
>  >>>     }
>  >>> }
> 
> or even something like this, making use of the base URI:
> 
>  >>> {
>  >>>     "@context": {
>  >>>       "@vocab": "",
>  >>>       "name": "http://xmlns.com/foaf/0.1/name",
>  >>>       ...
>  >>>     }
>  >>> }
> 
> Wouldn't something like that work?  I don't yet see why specifically
> blank nodes would be needed for this.

Sure, it would work, but it would also set the expectation that consumers of
such data can dereference the resulting IRIs to get the definitions of those
properties. JSON-LD is all about Linked Data.

Yes, advocating bnodes in the context of Linked Data is strange, but I find
it better to use identifiers that are explicitly marked as being only
locally valid if you can neither guarantee their stability nor provide
dereferenceable IRIs.
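To make that concrete, the same (made-up) document could map the unmapped
property to a blank node identifier instead:

  {
    "@context": {
      "name": "http://xmlns.com/foaf/0.1/name",
      "shoeSize": "_:shoeSize"
    },
    "name": "Markus",
    "shoeSize": 42
  }

which yields the generalized RDF

  _:b0 <http://xmlns.com/foaf/0.1/name> "Markus" .
  _:b0 _:p0 "42"^^<http://www.w3.org/2001/XMLSchema#integer> .

Here the predicate is unmistakably a local identifier; no consumer will try
to dereference it.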

Is there a reason why you don't like bnodes-as-predicates apart from the
fact that standard RDF doesn't allow them?


> >> Any client may ignore any information it
> >> wants, but it is important that different JSON-LD standards-compliant
> >> parsers, both parsing the same JSON-LD document in an attempt to obtain
> >> the JSON-LD standards-compliant RDF interpretation of that JSON-LD
> >> document, should obtain the same set of RDF triples (except for blank
> >> node labels and possibly data type conversion).
> >
> > And that's the case right now. Every compliant JSON-LD parser is
> > required to produce exactly the same generalized RDF dataset.
> 
> It is also good to have JSON-LD parsers produce the same *extended* RDF
> datasets if the user chooses to get extended RDF.  But the case that I
> am trying to address is the case where the user expects *standard* RDF
> -- ensuring that the mapping is deterministic with minimal information
> loss.

OK. Why do you believe a consumer expecting standard RDF wouldn't be able to
transform the extended RDF into standard RDF according to its needs? Why do
we need to prescribe how to do this?

All it would buy us is that some implementations could no longer be called
conformant (those that decide not to implement skolemization). There's no
way to enforce what consumers do with the data anyway.

The easiest way out of this would be to define some additional product
classes:
  a) an "extended RDF to standard RDF converter using skolemization"
  b) an "extended RDF to standard RDF converter discarding the extensions"

Then we could say that class a) implementations MUST transform bnodes used
in predicates to skolem IRIs.
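For illustration (using the .well-known/genid pattern suggested for
skolemization in the RDF 1.1 Concepts draft, and made-up labels), a class a)
implementation would rewrite the generalized triple

  _:b0 _:p0 "42"^^<http://www.w3.org/2001/XMLSchema#integer> .

to something like

  _:b0 <http://example.com/.well-known/genid/p0> "42"^^<http://www.w3.org/2001/XMLSchema#integer> .

whereas a class b) implementation would simply drop it.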

Unfortunately, I still can't see what the advantage of doing so would be.
Why does this need to be in the JSON-LD spec?


--
Markus Lanthaler
@markuslanthaler
