Fwd: RDF/JSON

Did I mess up again and only send this to Markus? I am clearly not smart
enough to use the new Gmail UI.

Best wishes, Martin

---------- Forwarded message ----------
From: Martin Nally <martin.nally@gmail.com>
Date: Mon, Apr 29, 2013 at 11:22 AM
Subject: Re: RDF/JSON
To: Markus Lanthaler <markus.lanthaler@gmx.net>


Based on our experience implementing applications, this is the advice I am
giving people at IBM, and anyone else who asks me.

1) If you are writing RDF-aware clients and servers, and you are looking
for a data format for the interface between them, use RDF/JSON. This is
true whether or not you were previously interested in JSON. Think of
RDF/JSON as the natural way for RDF-aware programs to talk to each other,
regardless of other technology choices - it's the RDF format for
programmers. The only exception I can think of to this rule would be if one
of the important clients or servers is written in a specialized programming
language (like maybe BPEL) that does not have support for the standard
dictionary and array data structures.
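
To make that concrete, here is a rough sketch of what an RDF/JSON document
looks like (the URIs and values are invented): a JSON object keyed by
subject, then by predicate, with an array of value objects for each
predicate.

  {
    "http://example.org/people/1": {
      "http://xmlns.com/foaf/0.1/name": [
        { "value": "Alice", "type": "literal" }
      ],
      "http://xmlns.com/foaf/0.1/knows": [
        { "value": "http://example.org/people/2", "type": "uri" }
      ]
    }
  }

A program can walk this with nothing more than the standard dictionary and
array data structures, which is the whole point.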

2) If you have an existing JSON format, and you want to extend it so it is
more self-describing for RDF-aware clients, consider using JSON-LD. An
alternative is to offer two different media types - your current one and
RDF/JSON.
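
As a rough illustration of the JSON-LD route (the property names and
vocabulary here are invented), you keep your existing JSON and add a
context that maps its keys to URIs:

  {
    "@context": {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" }
    },
    "@id": "http://example.org/people/1",
    "name": "Alice",
    "homepage": "http://example.org/"
  }

RDF-unaware clients can keep ignoring "@context" and "@id"; RDF-aware
clients can expand the document to triples.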

3) If you are designing a new RDF-aware server and you want to expose it to
both RDF-aware and RDF-unaware clients, offer two different media types -
RDF/JSON plus "web JSON". Offering JSON-LD will not be popular with either
the web crowd or the RDF crowd, both of whom will view it as unnecessarily
complex.
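
In practice that just means honouring the Accept header; a sketch (the
resource path is made up):

  GET /people/1 HTTP/1.1
  Accept: application/rdf+json     (RDF-aware clients get RDF/JSON)

  GET /people/1 HTTP/1.1
  Accept: application/json         (web clients get your plain "web JSON")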

It is possible I will change my mind as I learn more, but this is what
makes sense to me based on my current experience.

Best wishes, Martin


On Mon, Apr 29, 2013 at 6:01 AM, Markus Lanthaler
<markus.lanthaler@gmx.net> wrote:

> On Monday, April 29, 2013 3:03 AM, Peter Ansell wrote:
> > On 29 April 2013 03:15, Markus Lanthaler wrote:
> > > On Sunday, April 28, 2013 1:43 AM, Peter Ansell wrote:
> > >
> > > > I have found it to be an advantage in dealing with arbitrary triples
> > > > using RDF/JSON. JSON-LD is great for annotating fixed structures if
> > > > you have a relatively fixed JSON API and you want to access it
> > > > directly using JSON without modifying it too much, whereas RDF/JSON can
> > > > represent any triples without relying on different methods for
> > > > slightly different use cases.
> > >
> > > And that doesn't apply to JSON-LD in expanded flattened form? See
> > > example 8 in the API spec
> > > (http://www.w3.org/TR/json-ld-api/#flattening). Is it because the array
> > > is not indexed by subject or is there another reason?
> >
> > It is only an issue for me when it changes the RDF triples that are
> > expected. As long as clients can always request documents without
> > external contexts, things should be okay; the problem arises when a
> > server receives a document from a client that relies on the server
> > reaching through a firewall, possibly outside its security domain, to
> > fetch an external context before it can produce RDF triples from the
> > document. In an RDF workflow the fact that the main JSON-LD
> > document can be parsed as JSON is moot if the step from a parsed JSON
> > object to RDF is context-sensitive and dependent on environmental
> > factors such as internet connectivity.
>
> Yeah, a client can request it using the profile parameter of
> application/ld+json. In the end that's not much different from using
> another media type.
>
>
> > Sorry, I didn't realise that the profile parameter, and all of the
> > profile options, needed to be supported by everyone.
> >
> > The only place they seem to be discussed is in the IANA section and it
> > only specifies that clients "SHOULD" use the given URIs if they want
> > to request that profile. At least in that section it doesn't say what
> > the mandatory support for client-requested profiles needs to be for a
> > compliant JSON-LD server. I.e., what is the fallback profile if they
> > don't recognise the list of profiles?
>
> They don't need to be supported by everyone. Just as not everyone needs to
> support all media types. We are specifying a data format and thus server
> behavior is out of scope. That being said, the HTTP specification defines
> quite clearly what should happen. If a server doesn't recognize the profile
> it can either ignore it or respond with a 406 Not Acceptable. That's again
> not much different from requesting a media type that isn't recognized. If,
> however, the server recognizes the profile, it will signal it in the
> Content-Type header and a client just needs to look there.
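>
> To illustrate, an exchange using the profile parameter could look like
> this (the profile URI is one of those defined in the JSON-LD spec, the
> resource path is invented):
>
>   GET /people/1 HTTP/1.1
>   Accept: application/ld+json; profile="http://www.w3.org/ns/json-ld#flattened"
>
>   HTTP/1.1 200 OK
>   Content-Type: application/ld+json; profile="http://www.w3.org/ns/json-ld#flattened"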
>
>
> > > What does the sentence "it isn't a very clean generic solution in
> > > terms of RDF" mean? That you can't "see" the triples because they are
> > > collected in a single object?
> >
> > I wasn't referring to the triples being in a single subject. I was
> > just referring to the use case that requires minimal or no additional
> > changes to existing JSON documents to turn them into RDF, which is a
> > use case for JSON-LD whereas it isn't a design requirement for RDF
> > formats that aim to produce a single document for each transaction.
> >
> > If you can always request expanded documents and have the server
> > consistently deliver that profile, then it is fine for dynamic
> > requests. I don't quite see how mandatory profile support will help
> > with the use case of having minimal changes to existing JSON APIs
> > though, as all JSON-LD-compatible servers will need to be
> > reimplemented to support the entire JSON-LD stack if the client is
> > able to request a profile and have it as a mandatory "MUST"
> > requirement for the response.
>
> As I already said above: we are not specifying server behavior because we
> are not defining a protocol but a data format/media type. You can't
> require a server to support RDF/JSON either. Either it does or it doesn't.
> All you can do is try.
>
>
> > > > That is one usecase. I think RDF/JSON and JSON-LD can work well for
> > > > RDF developers and traditional web developers respectively, without
> > > > being biased to one just because it uses the same base language. I
> > > > completely disagree with your assertion that it is important that a
> > > > single JSON RDF serialisation needs to fit with everyone's use cases,
> > > > as it encourages complexity, which can only be a bad thing in the long
> > > > term.
> > >
> > > This has been discussed at length half a year ago. I still think
> > > endorsing two "competing" standards is bad for a number of reasons, if
> > > only because it will confuse developers. Is it helpful for
> > > developers to have both RDFa and Microdata?
> >
> > I don't really see how they are necessarily competing, other than that
> > they both use the same parsers (but not the same post-parse
> > processors).
>
> They are competing because they will be released at almost the same time
> by the same group and try to achieve roughly the same thing: provide a
> serialization for RDF in JSON (although they follow quite different
> approaches).
>
> Developers that haven't been following the development closely will
> naturally be confused and have to decide which one to support. I think RDFa
> & Microdata illustrated this quite nicely.
>
>
> > The fact that RDF/JSON defines itself so simply for a
> > single purpose places it in an entirely different category from RDF/XML,
> > Turtle and JSON-LD that try to be everything for everyone. It should
> > be comparable to SPARQL Results XML (with mapping to RDF in the same
> > way as TriplesInJSON [1]) and SPARQL Results JSON (TriplesInJSON) and
> > N-Triples that have very simple goals, but do not attempt to be as
> > terse as possible, and hence are very well supported for machine-to-
> > machine (i.e., RDF-triples-through-to-RDF-triples) models. Those formats
> > are for the same reason not well supported for human-to-machine or
> > machine-to-human as they are naturally verbose and have no way to
> > shortcut the verbosity, making them easy to process using simple
> > workflows for computers, but difficult for humans to work with until
> > they transform the document completely to RDF triples.
>
> I repeat myself but I still believe that the same is true for
> flattened/expanded JSON-LD (modulo indexing by subject).
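>
> For comparison (the data is invented), a flattened and expanded JSON-LD
> document is roughly a flat array of node objects:
>
>   [
>     {
>       "@id": "http://example.org/people/1",
>       "http://xmlns.com/foaf/0.1/name": [ { "@value": "Alice" } ]
>     },
>     {
>       "@id": "http://example.org/people/2",
>       "http://xmlns.com/foaf/0.1/name": [ { "@value": "Bob" } ]
>     }
>   ]
>
> The main difference from RDF/JSON is that you scan an array for a subject
> instead of looking it up in a dictionary.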
>
>
> > > Everyone is free to expose data as he would like. RDF/JSON is one
> > > option to do so. There are many ways to define something very similar to
> > > RDF/JSON which is more efficient in a number of use cases.
> > > Nevertheless it makes sense to have as few standardized formats as
> > > possible. Interoperability will become much easier. The difficult
> > > decision whether to support both RDF/JSON and JSON-LD or just one (and
> > > which one) will go away if there's just one standard.
> >
> > I doubt that RDF/JSON will go away just because it isn't standardised,
> > but JSON-LD may end up being the preferred option once enough people
> > understand it well enough to explain it to everyone else.
>
> And that's completely fine. I just don't want this group to send out wrong
> signals. I want it to make clear that JSON-LD is the preferred solution
> for exchanging RDF over JSON.
>
>
> > In the end, there are
> > two distinct content types that are in wide use already
> > (application/rdf+json and application/ld+json), and people can pick
> > and choose which format they wish with conneg preferences and the
> > server can deliver them a document in any of the RDF formats they
> > request.
>
> Again, that's completely fine. People can even go and invent their own
> types or profiles. It is impossible (and also not desirable) to prevent
> that.
>
>
> > If JSON-LD processing takes more time to deliver and process
> > to RDF triples, compared to RDF/JSON or Turtle or N-Triples, people
> > may put the RDF/JSON content type higher on their preference list, but
> > that doesn't mean that JSON-LD is not a "success" for its specialist
> > use cases.
>
> People already familiar with RDF and RDF/JSON are not the target group
> of the JSON task force. People that aren't familiar at all with RDF
> are. If we present them with two options, they will have to decide which
> one to use, which also means understanding both.
>
>
> > One thing that I have not understood has been why proponents of
> > different formats seem to think that their solution has to be the best
> > in every use case, without competitors, or else it will not be a
> > "success".
>
> That's definitely not the case for me. I did acknowledge several times in
> this thread that RDF/JSON is much simpler and in some use cases more
> efficient. I'm arguing that you can always build a format which is more
> efficient for a given use case. What about binary formats for example?
>
> The thing that makes me nervous is that the group will send the wrong
> signals and confuse web developers if it endorses two competing formats.
> Competing in
> the sense that both are RDF serializations in JSON.
>
>
> > Humans do not handcode large N-Triples documents for
> > example, and yet they are the preferred archival format (once they are
> > compressed using pkzip/gzip/bzip/etc.) as they can be loaded line by
> > line, in parallel if necessary. In a similar way, RDF/JSON is
> > completely transparent to me personally as I rely on RDF interfaces at
> > either end of the connection to interpret the document the same way no
> > matter what format it is delivered in. At least when I was first
> > experimenting with JSON-LD, the fact that the JSON-LD-to-RDF process
> > could fail after successfully retrieving the main document, because
> > the context documents were not available, did not appeal to me at all,
> > as I was (and still am) only interested in the end result of RDF
> > statements.
>
> Then I'm wondering why you need RDF/JSON if you already have a number of
> formats to choose from!? Doesn't it simplify your life if there are fewer
> formats to support?
>
>
> > For what it is worth, it is no trouble at all to support both RDF/JSON
> > and JSON-LD at the server side if clients are solely focused on
> > consuming RDF triples and they have a reasonable framework for
> > transparently generating RDF statements from different formats.
>
> This sentence contradicts itself. If you have a "reasonable framework
> for transparently generating RDF statements from different formats" it's
> obvious that you can support multiple formats. But building and setting up
> such a framework is by no means simple, and with every format you add,
> your framework will become more complex.
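>
> Just to make explicit what such a framework boils down to, here is a
> rough sketch in Python using rdflib; it assumes parser plugins for
> JSON-LD and RDF/JSON are installed and registered under these format
> names:
>
>   from rdflib import Graph
>
>   # Map incoming Content-Type values to rdflib parser format names.
>   PARSERS = {
>       "application/ld+json": "json-ld",    # rdflib-jsonld plugin (assumed)
>       "application/rdf+json": "rdf-json",  # RDF/JSON plugin (assumed)
>       "text/turtle": "turtle",
>       "application/rdf+xml": "xml",
>   }
>
>   def to_triples(payload, content_type):
>       fmt = PARSERS.get(content_type)
>       if fmt is None:
>           raise ValueError("unsupported media type: " + content_type)
>       graph = Graph()
>       graph.parse(data=payload, format=fmt)
>       return graph
>
> Every entry in that table is another parser to install, test and keep up
> to date, which is exactly the complexity I mean.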
>
> Apart from Martin's very valuable feedback that JSON-LD is sometimes
> difficult to process because it isn't indexed by subject, I haven't heard
> of any missing features or problems of JSON-LD so far. The ability to use a
> different media type instead of a profile doesn't sound like a compelling
> argument to justify the standardization of a second format to me.
>
>
> Cheers,
> Markus
>
>
>
> --
> Markus Lanthaler
> @markuslanthaler
>
>

Received on Tuesday, 30 April 2013 17:37:29 UTC