Re: Fwd: RDF/JSON

I probably got a bit carried away in trying to explain why I'm not
interested in JSON-LD and am interested in RDF/JSON instead. I'm not
motivated to argue that JSON-LD is bad - if you guys say it's good then
fine, it's good. But it's not interesting to me, for the reasons I tried to
explain, and RDF/JSON is interesting to me, for reasons I also tried to
explain. The popular vote will decide the fate of JSON-LD over time. What I
do want to avoid is having us prematurely declare that JSON-LD is the only
specification we need and that uses of RDF/JSON should be converted to some
profile of JSON-LD. I think this thread provides evidence that there is a
group of people who believe that would be wrong and I'd like to see the
popular vote decide that one too.

Regards, Martin


On Tue, Apr 30, 2013 at 11:28 PM, Martin Nally <martin.nally@gmail.com> wrote:

>
> Rats, I forgot to copy the mailing list again
>
> ---------- Forwarded message ----------
> From: Martin Nally <martin.nally@gmail.com>
> Date: Tue, Apr 30, 2013 at 11:25 PM
> Subject: Re: Fwd: RDF/JSON
> To: Dave Longley <dlongley@digitalbazaar.com>
>
>
> >> You'll need some validation code somewhere.
>
> Yes, good point, Dave. As I said in a previous email, we have about 200
> lines of helper functions that we use with RDF/JSON. That doesn't quite
> meet my zero time/size criteria, which I admit were exaggerated, but still
> the heavy lifting is done by JSON and the language run-times, and what we
> have to do is much less.
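>
> To give a flavour of the kind of helper I mean, here is a sketch (not our
> actual code - the function name, variable names, and URIs are invented):
>
>     // pull a predicate's values out of parsed RDF/JSON
>     function getValues(graph, subject, predicate) {
>       var objects = (graph[subject] || {})[predicate] || [];
>       return objects.map(function(o) { return o.value; });
>     }
>
>     var graph = JSON.parse(responseBody);  // responseBody: hypothetical HTTP body text
>     var names = getValues(graph, 'http://example.org/s',
>                           'http://xmlns.com/foaf/0.1/name');
>
> The parsing itself is JSON.parse; the helpers are thin wrappers like this.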
>
>
> >> For instance, if you did use JSON-LD, maybe you'd only accept flattened
> form.
>
> Yes, indeed, and as I said in my previous note, we started by using what
> JSON-LD calls "expanded" format (more exactly a kind of hybrid between
> expanded and framed). This worked OK, although RDF/JSON works even better.
> Whether we were really using JSON-LD is arguable, since we only supported a
> very specific and limited JSON-LD format. What we accepted was valid
> JSON-LD, but it is not obvious to me what the value is in being able to
> make that claim, since the amount of JSON-LD we supported is much less than
> the amount we did not. With RDF/JSON, we didn't have to make any such
> restrictions.
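>
> To make the comparison concrete, here is the same single triple in both
> forms (an invented example, not data from our system):
>
>     RDF/JSON:
>       { "http://example.org/s":
>           { "http://xmlns.com/foaf/0.1/name":
>               [ { "value": "Anna", "type": "literal" } ] } }
>
>     JSON-LD, expanded form:
>       [ { "@id": "http://example.org/s",
>           "http://xmlns.com/foaf/0.1/name": [ { "@value": "Anna" } ] } ]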
>
>
> >> Plus, when I wanted to, I got to use all of the JSON-LD features that
> make programming against generic in-memory objects *far* more natural
>
> I assume then that you implemented a much more substantial amount of
> JSON-LD than we have. I'm not sure this really has much to do with JSON-LD
> - there are lots of programmer-friendly functions we could wrap around
> RDF/JSON too. We haven't done this, partly for lack of time and effort, but
> also because I'm a passionate minimalist. The only good code is the code
> that isn't there (another hyperbole, but again with a core of truth, IMO).
>
> Regards, Martin
>
>
> On Tue, Apr 30, 2013 at 9:36 PM, Dave Longley <dlongley@digitalbazaar.com> wrote:
>
>> On 04/30/2013 07:12 PM, Martin Nally wrote:
>>
>>> For RDF-aware people, JSON-LD is also annoying - it is much more
>>> difficult to parse than simple RDF/JSON. If every programming language had
>>> a JSON-LD library that had no bugs, loaded in zero time, took zero space
>>> and had an API everyone loved, this might not be an issue, but those things
>>> are not true.
>>>
>>
>> That sounds like an unreasonable list of requirements for any technology.
>>
>> Also, keep in mind that parsing RDF/JSON is not as simple as
>> JSON.parse(). That doesn't necessarily yield valid RDF/JSON, for instance:
>> JSON.parse('{"i": {"am": "invalid RDF/JSON"}}'). You'll need some validation
>> code somewhere. Does every programming language have an RDF/JSON validation
>> library that has no bugs, loads in zero time, takes zero space, and has an
>> API that everyone loves? Did you check Sartre? :)
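>>
>> Sketching what such a check might look like (hypothetical code, not taken
>> from any real library):
>>
>>     function isObject(x) {
>>       return x !== null && typeof x === 'object' && !Array.isArray(x);
>>     }
>>     function looksLikeRdfJson(graph) {
>>       return isObject(graph) && Object.keys(graph).every(function(subject) {
>>         var props = graph[subject];
>>         return isObject(props) && Object.keys(props).every(function(pred) {
>>           return Array.isArray(props[pred]) && props[pred].every(function(o) {
>>             return isObject(o) && typeof o.value === 'string' &&
>>               ['uri', 'literal', 'bnode'].indexOf(o.type) !== -1;
>>           });
>>         });
>>       });
>>     }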
>>
>> Instead, maybe all you need is a decent RDF/JSON validation library for
>> every programming language that your system is using or intends to
>> (reasonably) interoperate with. Even that might not be necessary, if you
>> have clients that only transmit data and deal with HTTP response status
>> codes. Of course, if you were in this position, then you could use any
>> serialization format that met these same requirements. The only question is
>> what you get for free and what you don't (and how important those things
>> are) -- and that may just be determined by how you decide to model your
>> data.
>>
>> Since this is your system, you can also decide what restrictions you want
>> to place on the data. For instance, if you did use JSON-LD, maybe you'd
>> only accept flattened form. Then you could use a validator for that instead
>> of RDF/JSON. Any data you exported would still be fully interoperable with
>> anyone who could accept JSON-LD. You'd have the same restrictions your
>> system has right now with RDF/JSON -- in that it would be the only thing you could
>> accept (JSON formatted in a specific way). You could also model your data
>> using JSON-LD's @index feature or create simple subject maps for your data
>> when it's received, if that's something you want.
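>>
>> For instance, a subject map over flattened JSON-LD can be as small as this
>> (a sketch; it assumes the flattened document is either a plain array of
>> node objects or carries a top-level @graph array):
>>
>>     function subjectMap(flattened) {
>>       var nodes = flattened['@graph'] || flattened;
>>       var map = {};
>>       nodes.forEach(function(node) { map[node['@id']] = node; });
>>       return map;
>>     }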
>>
>> Anyway, my point is that some of what you said came across as a bit
>> hyperbolic; I don't think using JSON-LD instead of RDF/JSON is actually as
>> annoying as you make it out to be. From my perspective it seems more like
>> this sort of feeling: "Bummer, I wanted the data keyed by subject. Now I'll
>> have to write a function or use a commonplace tool or a feature of JSON-LD
>> to do that for me."
>>
>> I've had that same thought and have had to do it in practice. It wasn't
>> that bad. Plus, when I wanted to, I got to use all of the JSON-LD features
>> that make programming against generic in-memory objects *far* more natural,
>> like using dot-notation, short keys, and arrays instead of bracket-syntax,
>> IRIs, and linked lists.
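>>
>> To illustrate with made-up data (person and graph here are hypothetical
>> variables, not anyone's real API): with a context that maps "name" to the
>> FOAF name IRI, compacted JSON-LD lets you write
>>
>>     var name = person.name;
>>
>> where the same lookup against data keyed by full IRIs looks like
>>
>>     var name = graph['http://example.org/s']
>>                     ['http://xmlns.com/foaf/0.1/name'][0].value;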
>>
>> --
>> Dave Longley
>> CTO
>> Digital Bazaar, Inc.
>>
>>
>
>

Received on Wednesday, 1 May 2013 03:40:15 UTC