
Re: [JSON] Initial comments

From: Andy Seaborne <andy.seaborne@epimorphics.com>
Date: Sat, 26 Feb 2011 19:30:36 +0000
Message-ID: <4D6954DC.6020203@epimorphics.com>
To: nathan@webr3.org
CC: Pierre-Antoine Champin <pierre-antoine.champin@liris.cnrs.fr>, Thomas Steiner <tomac@google.com>, RDF WG <public-rdf-wg@w3.org>

On 24/02/11 23:21, Nathan wrote:
> Pierre-Antoine Champin wrote:
>> My guess is that the human-oriented JSON syntax should aim at making
>> it easy to produce.
>> To make RDF easy to consume, we don't need syntaxes (we already have a
>> dead-simple syntax: N-Triples), what we need is APIs.
> That could be where there is a difference in opinions, many would like
> developers to be able to consume and work with data without an API (as
> simple key/value objects) whilst still providing a set of goggles
> through which they can view that data as RDF (and then work with it
> through an API). Others I have spoken to would like to see RDF in JSON
> that is easy to work with without an API, and yet others would like to
> see a machine optimized RDF in JSON which they can work with via an API.
> Does anybody actually want to write RDF, by hand, in JSON? Up till now
> I'd always seen JSON as something produced by machines (by some data
> providing process, or by JSON.stringify'ing some object structure) and
> something which people just JSON.parse'd back in to an object structure
> to work with that data as simple object/array structure; where the most
> important aspect for all was always simplicity of the data structure.
> Working up the levels, the distinction I'm seeing here is that the
> current RDF publishing culture is to take data from an RDBMS or from
> Class instances, map that to RDF, and then publish/expose the RDF (this
> would require RDF in JSON) whereas most uses of JSON in the wild is
> about taking the data from a row in an RDBMS or a Class instance and
> simply dumping that existing structure out in JSON and
> publishing/exposing that, to tie in with this way of working would
> require providing a way to view that data as RDF, rather than publishing
> that data as RDF.
> So, do we focus on giving people a way to view simple objects as RDF, or
> focus on trying to get them to forget simple objects and work with RDF
> via APIs, or try and provide RDF in such a way that you don't always
> need APIs and can work with it as if it's objects?
> My general contention is that only the first two of those options will
> lead to any measurable success / adoption, and I'm reading that you're
> suggesting the third option (?)
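(As a concrete sketch of the "goggles" idea above -- a plain key/value object viewed as triples. All names here are hypothetical, just to illustrate the shape of the mapping:)

```javascript
// A plain JSON object, as a typical web API might publish it.
const book = {
  "@id": "http://example.org/book/1",  // hypothetical convention for the subject IRI
  "title": "RDF Primer",
  "pages": 120
};

// A hypothetical "goggles" function: view the key/value object as triples,
// treating each remaining key as a predicate in some assumed vocabulary namespace.
function asTriples(obj, ns) {
  const subject = obj["@id"];
  return Object.keys(obj)
    .filter(k => k !== "@id")
    .map(k => ({ s: subject, p: ns + k, o: obj[k] }));
}

const triples = asTriples(book, "http://example.org/vocab/");
console.log(triples);
```

The app writer works with `book` as a simple object; the RDF view is only constructed when needed.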

(addressing several emails ...)

Both have their place.  For basic exchange of a resource 
representation, an "application/rdf+json" media type for JSON-based 
graph exchange, with the expectation that there is an API over it (pace 
the RDF Web Apps WG), and also applications working directly on a 
structure which is more JSON-friendly.  Here, easier "to use" might 
imply some loss of fidelity, such as IRIs as plain strings depending on 
need, and that loss of fidelity might differ between situations (e.g. 
lang tags).
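To make that loss of fidelity concrete, here is a sketch contrasting a lossless, triple-centric RDF/JSON graph (Talis-style, the kind of shape an "application/rdf+json" might use) with a simplified JSON-friendly view of it. The `simplify` function and its local-name heuristic are my own illustration, not any proposed spec:

```javascript
// Lossless, triple-centric RDF/JSON: every object value keeps its
// type and language tag.
const lossless = {
  "http://example.org/book/1": {
    "http://purl.org/dc/terms/title": [
      { "value": "RDF Primer", "type": "literal", "lang": "en" },
      { "value": "RDF-Fibel",  "type": "literal", "lang": "de" }
    ]
  }
};

// A JSON-friendly view: IRIs collapsed to short keys, literals to plain
// strings.  Easier for a JS app, but the lang tags (and the second
// title) are gone -- exactly the loss of fidelity in question.
function simplify(graph) {
  const out = {};
  for (const s in graph) {
    for (const p in graph[s]) {
      const key = p.split(/[#\/]/).pop();  // crude local-name extraction
      out[key] = graph[s][p][0].value;     // keeps only the first value
    }
  }
  return out;
}

console.log(simplify(lossless));
```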

There is also work on presenting information at a higher, application 
specific level in the Linked Data API [1].

So for me there are two distinct cases: exchange, not app writer 
friendly; and presentation, getting stuff out of RDF.

But the app also needs to put data back, such as being able to PUT or 
POST an RDF graph to a (SPARQL) store [2].  Such a store might be 
serving not just JSON-powered webapps.  Being able to send and receive 
"application/rdf+json", a format that is lossless RDF, has a role here. 
Expecting the server to lift a presentation-oriented, possibly lossy 
format up to full RDF does not seem like a good idea: for example, if 
lang tags and datatypes were removed on the way out, then how should 
they be put back?
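A sketch of the round trip, assuming a lossless RDF/JSON payload sent to a SPARQL graph store endpoint (the endpoint URL is hypothetical; the request is only constructed, not actually sent):

```javascript
// A graph in lossless RDF/JSON form -- lang tag intact, so the server
// has nothing to guess at.
const graph = {
  "http://example.org/book/1": {
    "http://purl.org/dc/terms/title": [
      { "value": "RDF Primer", "type": "literal", "lang": "en" }
    ]
  }
};

// Build a PUT against a hypothetical SPARQL 1.1 graph store endpoint.
const request = {
  method: "PUT",
  url: "http://example.org/store?graph=" +
       encodeURIComponent("http://example.org/g1"),
  headers: { "Content-Type": "application/rdf+json" },
  body: JSON.stringify(graph)
};

// To actually send it, e.g. with XMLHttpRequest:
//   var xhr = new XMLHttpRequest();
//   xhr.open(request.method, request.url);
//   xhr.setRequestHeader("Content-Type", request.headers["Content-Type"]);
//   xhr.send(request.body);
console.log(request.url);
```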

An alternative is to use Turtle as "application/rdf+json".  I'm not 
sure that is the right choice - something more like a simpler 
"N-Triples in JSON", so data can be written to the web.  Bashing 
strings together to send raw N-Triples seems to go against the ease of 
use for JS apps.
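To illustrate what "bashing strings" means in practice -- hand-assembling N-Triples with the escaping the app writer must get right, versus the one-liner a JS app would rather write (the `toNTriple` helper is my own sketch):

```javascript
// Hand-assembling an N-Triple: the app writer owns the escaping rules.
function toNTriple(s, p, o) {
  const esc = str => str.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
  return "<" + s + "> <" + p + "> \"" + esc(o) + "\" .";
}

const nt = toNTriple(
  "http://example.org/book/1",
  "http://purl.org/dc/terms/title",
  'A "quoted" title'
);
console.log(nt);

// Versus what a JS app would rather do -- escaping handled for free:
const json = JSON.stringify({
  "http://purl.org/dc/terms/title": 'A "quoted" title'
});
console.log(json);
```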

(I'm not sure about the argument that performance is a non-issue for 
Turtle parsing in JS vs native JSON parsing -- is there any experience 
here?  Also, playing down performance issues right from the start seems 
a bit ...)


[1] http://code.google.com/p/linked-data-api/

[2] SPARQL 1.1 RDF Dataset HTTP Protocol
(with the old name)
Received on Saturday, 26 February 2011 19:31:23 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:04:02 UTC