
RE: RDF/JSON

From: Markus Lanthaler <markus.lanthaler@gmx.net>
Date: Sun, 28 Apr 2013 19:15:25 +0200
To: "'Peter Ansell'" <ansell.peter@gmail.com>
Cc: "'Arnaud Le Hors'" <lehors@us.ibm.com>, <public-rdf-comments@w3.org>, "'Martin Nally'" <martin.nally@gmail.com>, "'Joshua Shinavier'" <josh@fortytwo.net>
Message-ID: <001601ce4433$fa93e3a0$efbbaae0$@lanthaler@gmx.net>
On Sunday, April 28, 2013 1:43 AM, Peter Ansell wrote:

> I have found it to be an advantage in dealing with arbitrary triples
> using RDF/JSON. JSON-LD is great for annotating fixed structures if
> you have a relatively fixed JSON API and you want to access it
> directly using JSON without modifying it too much, where RDF/JSON can
> represent any triples without relying on different methods for
> slightly different use cases.

And doesn't that apply to JSON-LD in expanded, flattened form as well? See example 8 in the API spec (http://www.w3.org/TR/json-ld-api/#flattening). Is it because the array isn't indexed by subject, or is there another reason?
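For concreteness, a flattened, expanded JSON-LD document is simply an array of node objects, one per subject, with no context left to resolve (the IRIs and names below are illustrative, not taken from the spec example):

```json
[
  {
    "@id": "http://example.com/people/peter",
    "http://xmlns.com/foaf/0.1/name": [ { "@value": "Peter" } ],
    "http://xmlns.com/foaf/0.1/knows": [ { "@id": "http://example.com/people/markus" } ]
  },
  {
    "@id": "http://example.com/people/markus",
    "http://xmlns.com/foaf/0.1/name": [ { "@value": "Markus" } ]
  }
]
```

Every triple is right there; the only difference from RDF/JSON is that the array is keyed by position rather than by subject IRI.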


> > Does the statement "we would rather point to a specification" mean
> > that you wanna propose to put RDF/JSON on the REC track?
> 
> Before RDF/JSON goes too much further I would like to propose an
> addition to make it optionally a quads format. Joshua Shinavier
> extended the RDF/JSON format when it was implemented for SesameTools a
> while back to include an extra optional graph element attached to each
> object, and it has been working well for me, and as of Sesame-2.7.0
> there is a parser and writer that both use this extended graph-aware
> RDF/JSON format.
>  
> I have also developed, and am willing to contribute to W3C, a small
> initial compliance test suite for RDF/JSON in order to verify the
> Sesame implementation if/when it gets more attention from the RDF
> workgroup.

That's a different topic altogether. Let's not discuss that in this thread.


> RDF/JSON has a very small initial cost for existing RDF developers,
> and it may actually help new RDF developers understand the graph model
> underlying RDF.

I doubt that. It may help new RDF developers understand triples, but nothing beats Turtle (or N-Triples) in that regard.
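To illustrate with a made-up example, the same triple in Turtle reads almost like a sentence:

```turtle
<http://example.com/people/peter> <http://xmlns.com/foaf/0.1/name> "Peter" .
```

whereas RDF/JSON wraps it in nested structure:

```json
{
  "http://example.com/people/peter": {
    "http://xmlns.com/foaf/0.1/name": [ { "type": "literal", "value": "Peter" } ]
  }
}
```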


> JSON-LD has a high initial cost for both existing and new RDF
> developers, although once you get used to it, and can easily merge
> the context into a document in your head, you may find it useful.
> 
> It is mainly for those reasons I doubt that RDF/JSON is going away
> anytime soon for users who want to serialise arbitrary RDF graphs to
> JSON. For those that want to make or annotate existing JSON APIs JSON-
> LD is the solution to their problems, but it isn't a very clean
> generic solution in terms of RDF.
> 
> Can all JSON-LD documents be parsed consistently to RDF triples
> without having to retrieve a context document from the internet to
> parse along with an existing document?

Yes, all expanded documents. And you can request expanded documents using the profile parameter when doing conneg.
What does the sentence "it isn't a very clean generic solution in terms of RDF" mean? That you can't "see" the triples because they are collected in a single object?
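For the record, the spec defines profile IRIs for exactly this, so a client can ask for the expanded form via content negotiation:

```http
GET /resource HTTP/1.1
Accept: application/ld+json; profile="http://www.w3.org/ns/json-ld#expanded"
```

A server honoring the profile returns a document that needs no external context to be turned into triples.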


> > None of the things you described is a fundamental problem IMO. I certainly
> > don't wanna belittle the challenges you had to deal with and also understand
> > that for certain, very specific use cases RDF/JSON is a better fit. You will
> > find a slightly better solution for almost every use case that is specific
> > enough. The important thing is that we have a format which is able to
> > address all use cases. It is also critical that it feels "native" for Web
> > developers - RDF/JSON certainly does not.
> 
> That is one use case. I think RDF/JSON and JSON-LD can work well for
> RDF developers and traditional web developers respectively, without
> being biased to one just because it uses the same base language. I
> completely disagree with your assertion that it is important for a
> single JSON RDF serialisation to fit everyone's use cases, as that
> encourages complexity, which can only be a bad thing in the long
> term.

This has been discussed at length half a year ago. I still think endorsing two "competing" standards is bad for a number of reasons, if only because it will confuse developers. Is it helpful for developers to have both RDFa and Microdata?

Everyone is free to expose data as they like, and RDF/JSON is one way to do so. There are many ways to define something very similar to RDF/JSON that is more efficient for a number of use cases. Nevertheless, it makes sense to have as few standardized formats as possible: interoperability becomes much easier, and the difficult decision of whether to support both RDF/JSON and JSON-LD or just one (and which one) goes away if there's just one standard.


> One of the issues for me is that JSON-LD can be represented in so many
> ways that it is difficult to easily process any one of the
> serialisations. Is there a quick and/or standard way of identifying
> which version/profile of JSON-LD you are looking at?

Yeah, look at the profile parameter. Otherwise, just transform the document into the form you desire. The algorithms are there, and implementations already exist for most major languages.
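If the profile parameter isn't available, a rough shape check can serve as a heuristic. This is purely my own sketch, not something the spec defines; the reliable signal remains the profile media-type parameter:

```javascript
// Heuristic only: guess which JSON-LD form an already-parsed document
// is in by inspecting its shape. Not authoritative -- prefer the
// profile media-type parameter when the server provides it.
function guessJsonLdForm(doc) {
  if (Array.isArray(doc)) {
    // Expanded (and expanded+flattened) output is an array of node
    // objects and never carries an "@context".
    var hasContext = doc.some(function (node) {
      return node && typeof node === "object" && "@context" in node;
    });
    return hasContext ? "unknown" : "expanded";
  }
  if (doc && typeof doc === "object") {
    if ("@context" in doc) return "compacted";
    if ("@graph" in doc) return "graph-container";
  }
  return "unknown";
}
```

It won't distinguish expanded from expanded-and-flattened, but for dispatching to the right processing path it is usually enough.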


> Arguing that JSON-LD will be able to be parsed as RDF triples by a
> select number of highly developed libraries kind of defeats the point
> of the argument for it as *the* universal, easy to use, JSON
> serialisation of RDF. An RDF/JSON parser is massively simpler than a
> JSON-LD parser, and can be written in a few lines of javascript on the
> fly if necessary. For that matter, all of the other JSON RDF
> serialisations that have been proposed can be parsed in a few lines of
> javascript, or in some cases using SPARQL Results JSON parsers.

There's no parsing. All JSON-based formats are parsed as JSON, so in that respect it is exactly the same. Are you talking about transforming or querying the data?
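For what it's worth, the "few lines of javascript" you mention would walk the structure JSON.parse already produced rather than parse anything. A sketch, with key names following the RDF/JSON draft:

```javascript
// Walk an RDF/JSON document (the output of JSON.parse) into an array
// of triples. RDF/JSON nests the data as
// { subjectIRI: { predicateIRI: [ valueObject, ... ] } }.
function rdfJsonToTriples(doc) {
  var triples = [];
  Object.keys(doc).forEach(function (subject) {
    Object.keys(doc[subject]).forEach(function (predicate) {
      doc[subject][predicate].forEach(function (object) {
        // Each value object carries its own "type" ("uri", "literal"
        // or "bnode") and "value", so no interpretation happens here.
        triples.push({ subject: subject, predicate: predicate, object: object });
      });
    });
  });
  return triples;
}
```

Which is exactly my point: the work is a traversal of already-parsed JSON, and the same is true of expanded JSON-LD.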


Cheers,
Markus



--
Markus Lanthaler
@markuslanthaler
Received on Sunday, 28 April 2013 17:15:59 UTC
