
RE: Yet another serialization format?

From: Markus Lanthaler <markus.lanthaler@gmx.net>
Date: Tue, 28 Jun 2011 14:52:58 +0800
To: <public-linked-json@w3.org>
Message-ID: <00a301cc3560$06e64ab0$14b2e010$@lanthaler@gmx.net>
On 06/28/2011 11:59 AM, Manu Sporny wrote:
> On 06/27/2011 10:28 AM, Markus Lanthaler wrote:
> > As it appears to me, currently the goal is to create "yet another
> > RDF serialization format".
> 
> No, that is not the goal.

OK, good :-) I think there's a great chance for wide acceptance if we leave RDF out as much as possible. Don't get me wrong: I still believe there should be an easy way to transform JSON-LD to RDF, but most developers (apart from JSON-LD tool developers) don't need to know or worry about that.

The other thing I would suggest is to remove the references to XSD datatypes from the current spec. They are, first of all, not relevant for developers using JSON, and they introduce problems that make the specification more complex than it needs to be. Just think of the xsd:decimal vs. xsd:double issue, for example. Another issue is that XSD is not complete by any means; e.g., there is no way to define a timestamp or geo coordinates. So why bother using it at all? How to interpret JSON's basic datatypes (string, boolean, number, ...) is a question at the semantic level and should thus be described by an appropriate "semantic concept".
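To make the xsd:decimal/xsd:double issue concrete (a made-up payload; the property name is hypothetical): in plain JSON a number is just a number,

```json
{ "price": 5.5 }
```

but an RDF mapping has to pick a datatype, and "5.5"^^xsd:decimal and "5.5E0"^^xsd:double are distinct literals that consumers may treat differently. A JSON developer never has to think about this distinction.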

 
> > Is it thus really necessary to change all those representations to
> > comply to a yet-to-define specification?
> 
> No, one of the goals listed in the JSON-LD spec is:
> 
> Zero Edits, most of the time
> JSON-LD provides a mechanism that allows developers to specify context
> in a way that is out-of-band. [...]

Maybe I got a bit confused by all those special keywords (@, @context, @iri, @coerce, etc.) and by the fact that most of the examples in the specification make use of them.
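For reference, this is the kind of markup I mean (a sketch based on my reading of the current draft; the exact keyword semantics may differ):

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "@coerce": { "@iri": "homepage" }
  },
  "name": "Markus Lanthaler",
  "homepage": "http://www.markus-lanthaler.com/"
}
```

It is exactly these inline constructs that make the documents look quite different from the JSON most Web APIs serve today.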


> > Wouldn't it be more sensible
> > to create a specification which allows to describe those existing
> > representations and to transform those to a graph of linked data?
> 
> That's what JSON-LD does for most cases.

What I really intended to propose was to completely separate the JSON representation from the Linked Data description, instead of mixing out-of-band information with inline JSON-LD constructs. If you look at today's Web APIs, you will see that this is already common practice: there is (currently just human-readable) documentation describing how to interpret a specific JSON representation.

We could do the same in a machine-readable way by defining something like a schema that describes the structure of the JSON representation and maps its elements to concepts. This schema would also specify which elements are transformed to IRIs and how that is done.
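As a rough sketch of what such a schema might look like (entirely hypothetical; the structure and property names are made up for illustration):

```json
{
  "describes": "http://api.example.com/people",
  "mappings": {
    "name": { "concept": "http://xmlns.com/foaf/0.1/name" },
    "homepage": {
      "concept": "http://xmlns.com/foaf/0.1/homepage",
      "interpretAs": "iri"
    }
  }
}
```

The representation itself would remain completely untouched plain JSON; all the Linked Data knowledge lives in the schema.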

This aspect is probably also somehow related to the property-name scoping discussion raised by Glenn. It makes perfect sense in JSON to use the same attribute name (with slightly different meanings) in different objects, but currently there is no way to specify how these attributes are mapped to different IRIs in JSON-LD: @context does not take the type of the JSON object into consideration.
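For example (a made-up payload), "title" on a person should probably map to something like vCard's title, while "title" on a book should map to dc:title, yet a single @context can bind "title" to only one IRI:

```json
{
  "person": { "title": "Dr." },
  "book":   { "title": "Semantic Web Primer" }
}
```

A type-aware schema, on the other hand, could easily assign a different mapping per object.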

I've worked on something similar before, but from a different point of view: I tried to describe existing Web APIs in order to create more flexible and decoupled clients. You can find a paper outlining the idea here: http://bit.ly/seredasj


> > This would lead to a clear upgrade path for existing systems without
> > breaking all of its clients. In the approach I'm talking about, the
> > semantics/links would be added as a layer on top of the current data
> > (separation of concerns).
> 
> I don't understand the difference between what you're expressing and
> what the JSON-LD spec already does. Could you please give an example?

Hope the description above makes it a bit clearer. Another example: take, e.g., Facebook's Open Graph API, which is effectively already linked data, and describe it using the approach outlined above. Even if Facebook doesn't change its API at all, you could convert all of its data to linked data and, in consequence, to RDF. How the schema is linked to the representations is another story. If the JSON publisher does not cooperate, this has to happen out-of-band; this could, e.g., be done by binding a schema to an entry URI and then annotating all links in the schema with the target schema. If the JSON publisher cooperates, it could be achieved by *one* special JSON attribute, an HTTP Link header, or a MIME type parameter.
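In the cooperative case, the binding could be as lightweight as a single response header (a sketch; the schema URI is made up, and "describedby" is just one plausible choice of link relation):

```http
HTTP/1.1 200 OK
Content-Type: application/json
Link: <http://api.example.com/schema.json>; rel="describedby"
```

The body stays exactly the JSON the API serves today; only clients that care about the Linked Data layer would follow the link.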


--
Markus Lanthaler
Received on Tuesday, 28 June 2011 06:53:33 GMT

This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 16:25:34 GMT