RE: RDF/JSON

On 04/17/2013 12:21 PM, Martin Nally wrote:
> Markus,
> 
> Thanks for your careful response to my code samples trying to explain
> why RDF/JSON has been working better for us than JSON-LD. Let me first
> give some background. The applications we have been writing have
> logic-tier servers written in python (and some ruby) with user-
> interfaces written in javascript running in the browser on PCs and
> mobile devices. The JSON we are discussing is the JSON that flows
> between the python/ruby and the javascript. Both sides are RDF-aware,
> and many of our resources have multiple subjects.

What you mean here is that many representations contain multiple subjects
(which are also resources), right?


[...]
> Our JSON was originally in JSON-LD format, in the sense that the JSON
> we produced and consumed was valid JSON-LD, but we only produced and
> consumed a very restricted subset of JSON-LD. For example, we did not
> support contexts. What we used from JSON-LD was the basic organization
> that can perhaps be summarized like this: [{'@id': S, P: O}]. I
> hope that is clear - it is an array of "dictionaries" where one
> dictionary entry is '@id' for the subject, and the other dictionary
> entries provide the predicates and objects. The O is itself an array
> of dictionaries, with the only valid keys in the dictionaries being
> '@id', '@value' and '@type'.

In other words, you just used expanded JSON-LD [1].
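
To illustrate the shape for anyone following along, here is a made-up
example (example.org and FOAF IRIs, chosen only to show the structure) of
what such an expanded document looks like:

[
  { "@id": "http://example.org/alice",
    "http://xmlns.com/foaf/0.1/name": [ { "@value": "Alice" } ],
    "http://xmlns.com/foaf/0.1/knows": [ { "@id": "http://example.org/bob" } ]
  },
  { "@id": "http://example.org/bob",
    "http://xmlns.com/foaf/0.1/name": [ { "@value": "Bob" } ]
  }
]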


> We spent a few months building code this way, and it worked OK. We
> also stored this format in MongoDB and queried on it successfully. I
> cannot remember exactly what triggered the decision, but about 2
> months ago, I decided to convert over to RDF/JSON format. Since
> RDF/JSON has almost no options - in contrast with JSON-LD, which has
> many - you don't need me to explain what we did much further, but for
> completeness I will say simply that we moved to this data
> organization: {S: { P: O}}. The O in RDF/JSON is very similar to the
> JSON-LD version of O we used before - it differs only in the detail.

Which is almost the same as flattened JSON-LD [2], except that it isn't
indexed by S. As I wrote in my previous mail, it would be trivial to use
data indexing to get almost the same structure in most cases.
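
For comparison, the same made-up example from above would look roughly like
this in RDF/JSON:

{
  "http://example.org/alice": {
    "http://xmlns.com/foaf/0.1/name": [ { "type": "literal", "value": "Alice" } ],
    "http://xmlns.com/foaf/0.1/knows": [ { "type": "uri", "value": "http://example.org/bob" } ]
  },
  "http://example.org/bob": {
    "http://xmlns.com/foaf/0.1/name": [ { "type": "literal", "value": "Bob" } ]
  }
}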


> My experience with the port is that the complexity of our code came
> down substantially. I would guess the code that manipulates RDF is now
> in general maybe only two thirds or three-quarters of what it had
> previously been. Even where the complexity is similar, we now have the
> advantage of using simpler language constructs (primarily dictionary
> access like a[b]), where before we used a helper method. Not all our
> helper methods are gone - for example we still have a helper method
> whose meaning is "for a given predicate and object, return all the
> subjects". It is possible that the simplification we experienced was
> particular to our data.

Is that a result of the data structure being indexed by S?

I can't really see any other substantial difference, except for the case
where you don't care whether a value is a literal or an IRI (as you mention
further down in your mail).
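
To make that concrete, here is a rough Python sketch of the access patterns
involved, assuming the two example structures from above are loaded into
variables I'm calling expanded and rdf_json:

# Indexed by subject (RDF/JSON): a plain dictionary lookup.
alice = rdf_json["http://example.org/alice"]

# Expanded, non-indexed JSON-LD: a scan (or a helper method) is needed.
alice = next((node for node in expanded
              if node.get("@id") == "http://example.org/alice"), None)

# The helper you mention ("for a given predicate and object, return all
# the subjects") would still look something like this over RDF/JSON:
def subjects_with(graph, predicate, value):
    return [s for s, props in graph.items()
            if any(o.get("value") == value for o in props.get(predicate, []))]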


> I think you are correct in saying that if our
> representations were all single-subject, then we would not have found
> RDF/JSON to be better - the JSON-LD data organization would have been
> as good or better.

I didn't say that. I was saying that if you have a single subject as the
top-most node in your document, you will be able to get almost the same
structure in JSON-LD by using data indexing:

{
  "@id": "top most node",
  "property-with-an-@index-container": {
    "S1": { "@id": S1, P: O },
    ...
    "Sn": { "@id": Sn, P: O }
  }
}
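
To make that work, the context just has to declare the property as an
@index container, roughly like this (the "members" property and vocabulary
IRI are made up):

{
  "@context": {
    "members": {
      "@id": "http://example.org/vocab#member",
      "@container": "@index"
    }
  },
  "@id": "top most node",
  "members": {
    "S1": { "@id": S1, P: O },
    ...
    "Sn": { "@id": Sn, P: O }
  }
}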


> Other than the fact that our resources often have
> multiple subjects, I doubt there is anything very special in what we
> are doing. I had hoped that my code samples might help explain why our
> code got simpler, but I can see now that probably won't work.
> 
> Interestingly, RDF/JSON turned out not to be helpful at all in JSON
> databases. Databases like to query on predicates, and JSON-LD's
> approach of making the subject be the value of the '@id' predicate has
> worked much better for us, so our database format is still a very
> restricted but valid JSON-LD format. (We have also used triples stores
> - that is another conversation.)

Yeah... *querying* also becomes simpler in JavaScript. The downside is that
you then always have to query - even for a simple lookup by subject - unless
you use data indexing.
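
For what it's worth, that kind of predicate-based querying is also
straightforward against the expanded form in MongoDB. A rough pymongo
sketch, assuming predicates are stored under short, dot-free term names
(MongoDB field names can't contain dots) and using made-up collection and
term names:

from pymongo import MongoClient

resources = MongoClient().mydb.resources

# Everything named "Alice"; dot notation reaches into the array of
# value objects stored under the predicate key.
named_alice = resources.find({"name.@value": "Alice"})

# Everything that points at Bob via a "knows" property.
knows_bob = resources.find({"knows.@id": "http://example.org/bob"})

# The subject itself is just another field ('@id'), so it can be
# queried like any predicate.
alice = resources.find_one({"@id": "http://example.org/alice"})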


> We have gone back and forth on
> whether we prefer JSON-LD's format for the 'O' part or RDF/JSON's
> version. We have seen advantages to both. When we put our JSON into
> Lucene, JSON-LD's version of the O format was more convenient, because
> it allowed us to easily tell Lucene to handle URLs differently from
> regular strings. When we put the data into MongoDB, we did not need
> this, and RDF/JSON's version of the O part looked better because it
> did not require query writers to remember to use '@id' for URL-valued
> properties and '@value' for everything else - you just always use
> 'value'. This is particularly helpful if you don't know what type you
> are querying on. Right now, we are using both 'O' formats, one for
> MongoDB and the other for Lucene.

You can get around this by using a context and type-coercing the
properties... but if you don't care, it is still simple enough to provide a
small helper method in MongoDB which just looks for both keys (there will
always be exactly one of them).
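
A sketch of what I have in mind (the names are made up, and one could of
course build the $or filter directly instead):

def node_value(o):
    # There is always exactly one of '@id' (an IRI) or '@value' (a literal).
    return o["@id"] if "@id" in o else o["@value"]

def match_either(term, value):
    # Build a MongoDB filter that matches the value under either key.
    return {"$or": [{term + ".@id": value}, {term + ".@value": value}]}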


> In addition to the code simplification we saw when we converted to
> RDF/JSON for the API, we also saw a reduction in specification
> complexity that may ultimately be more important. Remember that our
> JSON is our API and so we need to document it. One approach is to
> document it ourselves without reference to any external specification
> whatever. In that case, documenting [{'@id': S, P: O}] is only
> slightly more complex than documenting {S: { P: O}}. However, we would
> really rather not document this ourselves - we would rather point to a
> specification. For {S: { P: O}} we can easily point to the RDF/JSON
> spec - the whole spec almost fits on a page, and since RDF/JSON really
> has no options, there is little to say about our usage patterns of it.
> By contrast, JSON-LD is a complex specification with many options. If
> we were to reference the JSON-LD spec in our API documentation, not
> only would we be referencing a relatively large and complex spec, we
> would then have to add yet more information to document the very
> restricted subset we actually use. We get no value from the parts of
> JSON-LD we do not use and we have no interest in allowing clients to
> give us arbitrary JSON-LD, or in producing it, either now or in the
> future. Referencing the JSON-LD spec in our API documentation is not
> an attractive option.

Quite frankly, this surprises me and makes me wonder why you use RDF at all.
By defining this, you tightly couple the clients to the server, which is
normally something you try to prevent by all means. Given that you use RDF,
a client shouldn't depend on the structure of your JSON data but on its
semantics, on the raw data. I understand why you want to do it (it is just
more convenient to hard-code a client against a rigid structure) but it's
really a bad practice. In that case you could just as well use a highly
optimized, proprietary JSON format (as most APIs do these days).

I also don't really buy your arguments about documenting your "usage
patterns". Unless I missed something, this is just the flattened/expanded
form. Since JSON-LD's media type has a profile parameter [3], you can even
make this visible at the HTTP level.
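
Assuming the profile URIs the spec defines for the different document forms,
a response could then simply be labelled with:

Content-Type: application/ld+json; profile="http://www.w3.org/ns/json-ld#flattened"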

I acknowledge that RDF/JSON is much simpler because it has far fewer
features, but is that really an advantage in this case?

Does the statement "we would rather point to a specification" mean that you
want to propose putting RDF/JSON on the REC track?


> I hope this helps explain our usage of RDF/JSON and JSON-LD and our
> experiences with them.

Yes, it definitely helped me understand some of the challenges you have to
deal with. Nevertheless, it didn't convince me that there is a need for a
second format. I would much rather hope that all RDF/Linked Data in JSON
converges on JSON-LD. It has a higher initial cost, but having a single
format for which tools and libraries are being developed will pay off in the
long term.

None of the things you described is a fundamental problem IMO. I certainly
don't want to belittle the challenges you had to deal with, and I also
understand that for certain, very specific use cases RDF/JSON is a better
fit. You will find a slightly better solution for almost every use case that
is specific enough. The important thing is that we have a format which is
able to address all use cases. It is also critical that it feels "native" to
Web developers - and RDF/JSON certainly does not.

Having a single format obviously means that it can't be the most efficient
one for every scenario. I'm convinced that tooling will be built to address
those "shortcomings" in the future. Framing [4] is already a first step in
that direction. Let's try not to divide the community by having two
competing standards... we are still developers, so having to write a few
more lines of code in certain scenarios isn't that much of a problem. I'm
sure libraries that do that for you will emerge very soon.


Cheers,
Markus


[1] http://www.w3.org/TR/json-ld/#expanded-document-form
[2] http://www.w3.org/TR/json-ld/#flattened-document-form
[3] http://www.w3.org/TR/json-ld/#iana-considerations
[4] http://json-ld.org/spec/latest/json-ld-framing/



--
Markus Lanthaler
@markuslanthaler
