
Re: [JSON] Re: Getting started

From: Ivan Herman <ivan@w3.org>
Date: Thu, 24 Feb 2011 10:10:38 +0100
Message-Id: <58DFF512-37F6-472A-AC62-F6DF6BBF26DE@w3.org>
To: public-rdf-wg <public-rdf-wg@w3.org>

I tried to give my answers without looking at the other mails so far, so as not to be influenced... which means that my opinion may change in the future if I _am_ influenced :-) I picked the questions from Manu's mail.


An intro issue: I wonder what the 'programming model' for a JSON serialization is. I can look at it in two ways:

a: obj = json.load(filename)
   obj is a data structure that I can make use of right away

b: obj = rdf.parse(filename,json)
   obj is, essentially, a graph that can be used in a triple store

These two views are different. (b) is really just a serialization format of RDF, on a par with Turtle; (a) is a JSON data format that might be converted into RDF triples if needed. Ideally, and if it is possible at all, I would aim for (a). The model might be something like microformats: they can be used by themselves by people who do not care about RDF, but they can be transformed into RDF, too. (The comparison is of course imperfect, because microformats were never designed with RDF in mind, so the transformation is an afterthought; that might be where we could be wiser...) But this goal might be completely unachievable, in which case I stand corrected...
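To make the contrast concrete, here is a minimal Python sketch. The RDF-in-JSON shape used here (subjects as top-level keys, predicates as nested keys) is purely hypothetical, just one of many possible designs the WG could pick:

```python
import json

# A hypothetical RDF-in-JSON document; the actual shape is what the WG must decide.
doc = """
{
  "http://example.org/book": {
    "http://purl.org/dc/terms/title": "Moby Dick"
  }
}
"""

# View (a): treat it as plain JSON -- immediately usable, no RDF machinery needed.
data = json.loads(doc)
title = data["http://example.org/book"]["http://purl.org/dc/terms/title"]

# View (b): lift the same data into triples for a triple store.
def to_triples(data):
    return [(s, p, o)
            for s, props in data.items()
            for p, o in props.items()]

triples = to_triples(data)
```

Under view (a) the `to_triples` step is optional and only RDF-aware consumers ever run it; under view (b) it is the whole point of the format.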

With that in mind, here are my answers

1. Are we to create a lightweight JSON based RDF interchange format optimized for machines and speed, or an easy to work with JSON view of RDF optimized for humans (developers)?

I am not sure the whole argument around speed is relevant. As far as I know, JSON _is_ simple whatever we do, and almost all environments have built-in (and, I would expect, optimized) parsers. Especially compared with other formats like Turtle, I feel the 'speed' issue is irrelevant. Also: I expect RDF/JSON to be used by Web application programmers whose programs run in a client, ie, they will not download and manage RDF graphs with millions of triples anyway. In my view, the problem we have is that developers do not use, understand, or want to understand RDF, and JSON provides an easier way to fill this gap.

2. Is it necessary for developers to know RDF in order to use the simplest form of the RDF-in-JSON serialization?


3. Should we attempt to support more than just RDF? Key-value pairs as well? Literals as subjects?

Basically, no. If we go down that route, we will lose a huge, nay a HUGE, amount of time arguing about what those things are. Time is of the essence here; we should provide something to the WebApp and JSON community very quickly, otherwise the train will be gone.

4. Must RDF in JSON be 100% compatible with the JSON spec? Or must it only be able to be read by a JavaScript library and thus be JSON-like-but-not-compatible (and can thus deviate from the standard JSON spec)?

I am not familiar enough with JSON to answer this. My goal is that this should work out of the box with _all_ environments that can handle JSON. If that means compatibility with the JSON spec, so be it; if that means there is an industry standard that goes beyond the JSON spec, so be it.

5. Must all major RDF concepts be expressible via the RDF in JSON syntax?

Hm. My inclination is yes, if we can do it in a way that the complicated things can be hidden from people who do not need them. Ie, if we have a _:a-like notation for a blank node, that is fine, but only people who know what that beast is will use it. Note that if this WG deprecates some features, I am happy to ignore those for JSON.
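As a sketch of how that hiding could work (the `_:`-prefixed keys and the document shape here are my own assumptions for illustration, not a WG proposal):

```python
import json

# Hypothetical document reusing Turtle's "_:" labels for a blank node subject.
doc = '{"_:b0": {"http://xmlns.com/foaf/0.1/name": "Alice"}}'
data = json.loads(doc)

# A plain-JSON consumer just sees an opaque string key and carries on...
name = data["_:b0"]["http://xmlns.com/foaf/0.1/name"]

# ...while an RDF-aware consumer can recognize blank nodes by their prefix.
def is_blank_node(subject):
    return subject.startswith("_:")
```

The complication stays invisible: nothing breaks for a consumer who treats `_:b0` as just another key, while RDF tooling can still round-trip the blank node.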

6. Should we go more for human-readability, or terse/compact/machine-friendly formats? What is the correct balance?

Isn't this a little bit similar to question #1? My inclination is for human-readability, because I do not think speed is really the issue here. 

7. Should there be a migration story for the JSON that is already used heavily on the Web? For example, in REST-based services?

I am not sure I understand what this means.

8. Should processing be a single-pass or multi-pass process? Should we support SAX-like streaming? 

See my intro. If we look at the data and forget about RDF, it should be single-pass, I guess. It may need a second pass if it is converted into RDF.

9. Should there be support for disjoint graphs?

I must admit I do not know. We also do not know yet what we mean by graphs:-(

10. Should we consider how the structure may be digitally signed?

Not that I want to minimize the importance of signatures, but I think that should be left to the folks working on JSON.

11. How should normalization occur?

I do not know...

12. Should graph literals be supported?

What are graph literals? :-)

Seriously: I do not believe this can be decided independently of the graph task force.

13. Should named graphs be supported?

See question #12

14. Should automatic typing be supported?

Yes. B.t.w., Turtle has that already :-) although we may want to extend it.
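Since JSON already has native numbers and booleans, automatic typing could simply map them onto XSD datatypes, roughly as Turtle's shorthand literals do. A minimal sketch of one possible mapping (choosing xsd:double for all JSON floats is my assumption; Turtle distinguishes decimals from doubles):

```python
XSD = "http://www.w3.org/2001/XMLSchema#"

# Map a value parsed from JSON to a plausible XSD datatype IRI.
def datatype_of(value):
    if isinstance(value, bool):   # check bool first: in Python, bool is a subtype of int
        return XSD + "boolean"
    if isinstance(value, int):
        return XSD + "integer"
    if isinstance(value, float):
        return XSD + "double"
    return XSD + "string"
```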

15. Should type coercion be supported?


16. Should there be an API defined in order to easily map RDF-in-JSON to/from language-native formats?

No. We define a serialization format. If we look at the raw JSON data, each language has native ways of looking at that data; we should not interfere with that. If we look at RDF, then there has to be some sort of RDF environment anyway that understands that particular serialization format, and we should not interfere with the way they do that either.

Thanks Manu for the questions!


Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf

Received on Thursday, 24 February 2011 09:09:27 UTC
