On 29 April 2013 03:15, Markus Lanthaler <> wrote:

> On Sunday, April 28, 2013 1:43 AM, Peter Ansell wrote:
> > I have found it to be an advantage in dealing with arbitrary triples
> > using RDF/JSON. JSON-LD is great for annotating fixed structures if
> > you have a relatively fixed JSON API and you want to access it
> > directly using JSON without modifying it too much, where RDF/JSON can
> > represent any triples without relying on different methods for
> > slightly different use cases.
> And that doesn't apply to JSON-LD in expanded flattened form? See example
> 8 in the API spec ( Is it
> because the array is not indexed by subject or is there another reason?
It is only an issue for me when it changes the RDF triples that are
expected. As long as clients can always request documents without external
contexts, things should be okay, until a server receives a document
submitted by a client that relies on the server reaching through a
firewall, possibly outside of its security domain, to fetch an external
context before it can attempt to produce RDF triples from the document. In
an RDF workflow, the fact that the main JSON-LD document can be parsed as
JSON is moot if the step from a parsed JSON object to RDF is
context-sensitive and depends on environmental factors such as internet
connectivity.
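To make the concern concrete, here is a small sketch (the helper
`needsRemoteContext` is hypothetical, not from any spec or library) that
decides, after an ordinary JSON parse, whether a JSON-LD document can be
turned into RDF without any network access. Any string value of
"@context" is a remote context that a processor must dereference first:

```javascript
// Hypothetical helper: does this parsed JSON-LD document reference a
// remote context (a string value of "@context") anywhere?
function needsRemoteContext(node) {
  if (Array.isArray(node)) {
    return node.some(needsRemoteContext);
  }
  if (node !== null && typeof node === 'object') {
    const ctx = node['@context'];
    const ctxs = Array.isArray(ctx) ? ctx : [ctx];
    if (ctxs.some((c) => typeof c === 'string')) {
      return true; // context must be fetched before producing triples
    }
    return Object.keys(node).some((k) => needsRemoteContext(node[k]));
  }
  return false;
}

// A document relying on an external context (URL is illustrative):
const remote = { '@context': 'http://example.org/context.jsonld',
                 'name': 'Anna' };
// An expanded document, which is self-contained:
const expanded = [{ 'http://xmlns.com/foaf/0.1/name':
                    [{ '@value': 'Anna' }] }];

console.log(needsRemoteContext(remote));   // true
console.log(needsRemoteContext(expanded)); // false
```

The expanded document can always be converted offline; the first one
cannot, which is exactly the environmental dependency described above.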

> > > Does the statement "we would rather point to a specification" mean
> > > that you wanna propose to put RDF/JSON on the REC track?
> >
> > Before RDF/JSON goes too much further I would like to propose an
> > addition to make it optionally a quads format. Joshua Shinavier
> > extended the RDF/JSON format when it was implemented for SesameTools a
> > while back to include an extra optional graph element attached to each
> > object, and it has been working well for me, and as of Sesame-2.7.0
> > there is a parser and writer that both use this extended graph-aware
> > RDF/JSON format.
> >
> > I have also developed, and am willing to contribute to W3C, a small
> > initial compliance test suite for RDF/JSON in order to verify the
> > Sesame implementation if/when it gets more attention from the RDF
> > workgroup.
> That's a different topic altogether. Let's not discuss that in this thread.
Sorry, the thread is named "RDF/JSON", hence my confusion.

> > RDF/JSON has a very small initial cost for existing RDF developers,
> > and it may actually help new RDF developers understand the graph model
> > underlying RDF.
> I doubt that. It may help new RDF developers to understand triples but
> nothing beats Turtle (or N-Triples) in that regard.
That is one opinion.

> > JSON-LD has a high initial cost for both existing and new RDF
> > developers, although once you get to using it, and you can easily
> > merge the context into a document in your head, then you may find it
> > useful.
> >
> > It is mainly for those reasons I doubt that RDF/JSON is going away
> > anytime soon for users who want to serialise arbitrary RDF graphs to
> > JSON. For those that want to make or annotate existing JSON APIs JSON-
> > LD is the solution to their problems, but it isn't a very clean
> > generic solution in terms of RDF.
> >
> > Can all JSON-LD documents can be parsed consistently to RDF triples
> > without having to retrieve a context document from the internet to
> > parse along with an existing document?
> Yes, all expanded documents. And you can request expanded documents using
> the profile parameter when doing conneg.

Sorry, I didn't realise that the profile parameter, and all of the profile
options, needed to be supported by everyone.

The only place they seem to be discussed is in the IANA section, and it
only specifies that clients "SHOULD" use the given URIs if they want to
request that profile. At least in that section it doesn't say what the
mandatory support for client-requested profiles needs to be for a
complying JSON-LD server, i.e., what the fallback profile is if the server
doesn't recognise the list of profiles.
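For illustration, a client request for the expanded profile might look
like the sketch below. The profile URI is the one listed in the JSON-LD
IANA registration section; what a server must do when it does not
recognise the profile is exactly the open question, so the RDF/JSON
fallback here is just one possible client strategy, not specified
behaviour:

```javascript
// Sketch: build an Accept header asking for a given JSON-LD profile,
// with application/rdf+json as a lower-preference fallback. The
// "expanded" profile URI is from the JSON-LD IANA registration.
function acceptHeaderFor(profile) {
  return 'application/ld+json;' +
         'profile="http://www.w3.org/ns/json-ld#' + profile + '", ' +
         'application/rdf+json;q=0.9';
}

console.log(acceptHeaderFor('expanded'));
```

A server that ignores the profile parameter could still answer with any
form of JSON-LD, which is the interoperability gap being pointed out.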

> What does the sentence "it isn't a very clean generic solution in terms of
> RDF" mean? That you can't "see" the triples because they are collected in a
> single object?
I wasn't referring to the triples being in a single subject. I was just
referring to the usecase of requiring minimal, or no, changes to existing
JSON documents to turn them into RDF, which is a usecase for JSON-LD but
isn't a design requirement for RDF formats that aim to produce a single
document for each transaction.

If you can always request expanded documents and have the server
consistently deliver that profile, then it is fine for dynamic requests. I
don't quite see how mandatory profile support will help with the usecase
of making minimal changes to existing JSON APIs, though, as all JSON-LD
compatible servers would need to be reimplemented to support the entire
JSON-LD stack if clients are able to request a profile and have it
honoured as a mandatory "MUST" requirement.

> > > None of the things you described is a fundamental problem IMO. I
> > > certainly don't wanna belittle the challenges you had to deal with
> > > and also understand that for certain, very specific use cases
> > > RDF/JSON is a better fit. You will find a slightly better solution
> > > for almost every use case that is specific enough. The important
> > > thing is that we have a format which is able to address all use
> > > cases. It is also critical that it feels "native" for Web
> > > developers - RDF/JSON certainly does not.
> >
> > That is one usecase. I think RDF/JSON and JSON-LD can work well for
> > RDF developers and traditional web developers respectively, without
> > being biased to one just because they use the same base language. I
> > completely disagree with your assertion that it is important that a
> > single JSON RDF serialisation needs to fit everyone's usecases, as
> > it encourages complexity, which can only be a bad thing in the long
> > term.
> This has been discussed at length half a year ago. I still think endorsing
> two "competing" standards is bad for a number of reasons, if only because
> it will confuse developers. Is it helpful for developers to have both RDFa
> and Microdata?
I don't really see how they are necessarily competing, other than that
they both use the same parsers (but not the same post-parse processors).
The fact that RDF/JSON defines itself so simply, for a single purpose,
places it in an entirely different category to RDF/XML, Turtle and
JSON-LD, which try to be everything for everyone. It is instead comparable
to SPARQL Results XML (with a mapping to RDF in the same way as
TriplesInJSON [1]), SPARQL Results JSON (TriplesInJSON) and N-Triples,
which have very simple goals and do not attempt to be as terse as
possible, and hence are very well supported for machine-to-machine (i.e.,
RDF-triples-through-to-RDF-triples) workflows. For the same reason those
formats are not well suited to human-to-machine or machine-to-human use,
as they are naturally verbose and have no way to shortcut the verbosity,
making them easy to process in simple computer workflows but difficult for
humans to work with until the document is transformed completely into RDF
triples.
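A minimal sketch of why that verbosity pays off for machine-to-machine
processing: an entire RDF/JSON document is one JSON parse followed by
three nested loops, with no context resolution and no network access.
(The document below is illustrative data, not taken from the thread.)

```javascript
// Parse an RDF/JSON document: { subject: { predicate: [objects] } }.
const doc = JSON.parse(`{
  "http://example.org/about": {
    "http://purl.org/dc/terms/title": [
      { "type": "literal", "value": "Anna's Homepage", "lang": "en" }
    ]
  }
}`);

// Flatten the nested structure into a list of triples.
const triples = [];
for (const subject of Object.keys(doc)) {
  for (const predicate of Object.keys(doc[subject])) {
    for (const object of doc[subject][predicate]) {
      triples.push({ subject, predicate, object });
    }
  }
}

console.log(triples.length);          // 1
console.log(triples[0].object.value); // Anna's Homepage
```

Every conforming document has exactly this shape, which is what makes the
format transparent to an RDF-triples-through-to-RDF-triples pipeline.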

> Everyone is free to expose data as he would like. RDF/JSON is one option
> to do so. There are many ways to define something very similar to RDF/JSON
> which is more efficient in a number of use cases. Nevertheless it makes
> sense to have as few standardized formats as possible. Interoperability
> will become much easier. The difficult decision whether to support both
> RDF/JSON and JSON-LD or just one (and which one) will go away if there's
> just one standard.
I doubt that RDF/JSON will go away just because it isn't standardised, but
JSON-LD may end up being the preferred option once enough people understand
it to explain it to everyone else. In the end, there are two distinct
content types that are in wide use already (application/rdf+json and
application/ld+json), and people can pick and choose which format they wish
with conneg preferences and the server can deliver them a document in any
of the RDF formats they request. If JSON-LD documents take more time to
deliver and process into RDF triples, compared to RDF/JSON or Turtle or
N-Triples, people may put the RDF/JSON content type higher on their
preference list, but that doesn't mean that JSON-LD is not a "success" for
its specialist usecases.

One thing that I have not understood has been why proponents of different
formats seem to think that their solution has to be the best in every
usecase, without competitors, or else it will not be a "success". Humans do
not handcode large N-Triples documents for example, and yet they are the
preferred archival format (once they are compressed using
pkzip/gzip/bzip/etc.) as they can be loaded line by line, in parallel if
necessary. In a similar way, RDF/JSON is completely transparent to me
personally, as I rely on RDF interfaces at either end of the connection to
interpret the document the same way no matter what format it is delivered
in. At least when I was first experimenting with JSON-LD, the fact that the
JSON-LD-to-RDF process could fail after successfully retrieving the main
document, because the context documents were not available, did not appeal
to me at all, as I was (and still am) only interested in the end result of
RDF statements.

For what it is worth, it is no trouble at all to support both RDF/JSON and
JSON-LD at the server side if clients are solely focused on consuming RDF
triples and they have a reasonable framework for transparently generating
RDF statements from different formats. If an RDF server or client needs
substantial changes to specifically support JSON-LD, to handle different
profiles etc., then the specification may be too complex for many people.
As long as users can transparently parse both RDF/JSON and JSON-LD to RDF
triples they will not notice though and they could continue requesting as
many RDF formats as they can parse and relying on the server to send a
result in one of the formats that it supports. The JSON-LD specific
use-cases seem to be to handle cases where people don't actually want to
either produce or consume RDF.

> > One of the issues for me is that JSON-LD can be represented in so many
> > ways that it is difficult to easily process any one of the
> > serialisations. Is there a quick and/or standard way of identifying
> > which version/profile of JSON-LD you are looking at?
> Yeah, look at the profile parameter. Otherwise just transform the document
> into the form you desire. The algorithms are there and there already exist
> implementations for most major languages.
Sounds promising, once the software support is there and people understand
the extra steps they need to take with JSON-LD documents to get them to
RDF triples.

> > Arguing that JSON-LD will be able to be parsed as RDF triples by a
> > select number of highly developed libraries kind of defeats the point
> > of the argument for it as *the* universal, easy to use, JSON
> > serialisation of RDF. An RDF/JSON parser is massively simpler than a
> > JSON-LD parser, and can be written in a few lines of javascript on the
> > fly if necessary. For that matter, all of the other JSON RDF
> > serialisations that have been proposed can be parsed in a few lines of
> > javascript, or in some cases using SPARQL Results JSON parsers.
> There's no parsing. All JSON-based formats are parsed as JSON - so it is
> exactly the same. Are you talking about transforming or querying the data?
Sorry, I meant the part between JSON parsing and creating RDF statements.
In JSON-LD, a processor may not be able to create RDF statements while
streaming, or even once the JSON parse is complete, as it may need to
request the context as a separate JSON document using an HTTP client, and
the context may not appear until the end of the document, given that JSON
object keys are unordered. The fallback seems to be that you need to be
the one requesting the document upfront, so that you can ask for the
expanded flattened profile.
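The ordering point can be illustrated with a small hypothetical document
(shown only to make the point about member ordering; the term mapping is
invented for the example):

```javascript
// JSON object members are unordered, so a producer may legally emit
// "@context" last. A streaming JSON-LD-to-RDF converter that sees
// "name" first cannot yet know which IRI it maps to, so it must buffer
// the whole object before emitting any triples.
const text = `{
  "name": "Anna",
  "@context": { "name": "http://xmlns.com/foaf/0.1/name" }
}`;

const doc = JSON.parse(text);
// Only after the full parse is the term definition available:
console.log(doc['@context']['name']); // http://xmlns.com/foaf/0.1/name
```

An RDF/JSON document has no such forward references, so triples can be
emitted as soon as each subject's entry has been read.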

In terms of querying the results of a JSON parse, it is possible to have a
simple query for a particular subject-predicate-object pattern that works
on any raw JSON object immediately after a single JSON document parse has
completed for an RDF/JSON document. A JSON-LD processor, however, may still
need to modify the resulting JSON object after parsing is complete until
the in-memory structure is normalised enough to perform direct queries on
it, which could be a curse and/or a gift depending on your viewpoint.
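To make the "few lines of javascript" claim concrete, here is a
subject-predicate pattern query that works on any parsed RDF/JSON object
with no further normalisation (`match` is a hypothetical helper written
for this message, not from any library):

```javascript
// Query a parsed RDF/JSON object for all object values matching a
// subject-predicate pattern. Works directly on the result of JSON.parse.
function match(doc, subject, predicate) {
  const objects = (doc[subject] || {})[predicate] || [];
  return objects.map((o) => o.value);
}

// Illustrative RDF/JSON data:
const doc = {
  'http://example.org/about': {
    'http://purl.org/dc/terms/title': [
      { type: 'literal', value: "Anna's Homepage", lang: 'en' }
    ]
  }
};

const titles = match(doc, 'http://example.org/about',
                     'http://purl.org/dc/terms/title');
console.log(titles[0]); // Anna's Homepage
```

The equivalent query over an arbitrary JSON-LD document would first have
to expand and flatten it, which is the extra normalisation step described
above.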



Received on Monday, 29 April 2013 01:02:59 UTC