
Re: [JSON] PROPOSAL: Syntax structure should be object-based

From: Manu Sporny <msporny@digitalbazaar.com>
Date: Thu, 17 Mar 2011 12:18:56 -0400
Message-ID: <4D823470.2050004@digitalbazaar.com>
To: RDF WG <public-rdf-wg@w3.org>
On 03/17/11 05:00, Andy Seaborne wrote:
>>> This matters when writing RDF back to the web, not just JSON-emitted
>>> data viewed as RDF.
>>
>> It does, very much so - but I still don't understand the problem that
>> you keep alluding to. Do you have an example?
> 
> JS app goes:
> 
> {
>    "name": "Manu" ,
>    "location" :  [ -122.51368188, 37.70813196 ]
> }
> 
> because that is what the app typically deals in.

Ahh, ok, now I understand what you're driving at.

Yes, this is a good example of JSON that cannot map to RDF cleanly
without some sort of programmatic post-processing. Note that there is
meaning inherent in the structure and that the meaning is specific to
the application. That is, "location" could be describing lat/long, or it
could be describing points along a one-dimensional line. I don't think
that this is a problem that can be solved simply - at least, we tried
and failed several times.

We attempted to tackle this problem with JSON-LD, but the problem space
became huge and complicated the syntax far too much. In the end, we
decided not to support use cases like this because it required a complex
transformation language. I can go through all of the things that we
tried, but it really became far too complicated for something that
should be usable by ordinary "Web developers".

Rather than support this use case, we thought it would be best to just
not support converting "location" to something that made sense in RDF
via JSON-LD. That is, the parser would silently ignore any term that
wasn't in the prefix map. I know there might be some grumbling about
this, but the decision is between three fairly nasty choices:

1. Define some sort of complicated transformation language for JSON.
2. Throw an error when a key cannot be mapped.
3. Silently ignore keys that cannot be mapped.
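To make option 3 concrete, here is a minimal sketch (not the actual JSON-LD algorithm; the term map, subject IRI, and function name are illustrative) of a converter that turns JSON key/value pairs into triples and silently drops any key, like "location" above, that has no mapping:

```javascript
// Sketch of option 3: keys found in the term map become predicate IRIs;
// unmapped keys are silently ignored rather than raising an error.
function toTriples(subject, obj, termMap) {
  const triples = [];
  for (const [key, value] of Object.entries(obj)) {
    const predicate = termMap[key];
    if (predicate === undefined) continue; // silently ignore unmapped keys
    triples.push([subject, predicate, value]);
  }
  return triples;
}

// Hypothetical subject IRI and term map, for illustration only.
const termMap = { "name": "http://xmlns.com/foaf/0.1/name" };
const triples = toTriples(
  "http://example.com/people/manu",
  { "name": "Manu", "location": [-122.51368188, 37.70813196] },
  termMap
);
// "location" is not in the term map, so only the "name" triple survives.
```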

> How does the reverse process get applied so it is understandable as
> RDF? What requirements and expectations on the receiver (graph store)
> are there?

I think there are some assumptions being made here that should not be
made. Why does the process need to be reversible? Why are we assuming
that the storage mechanism is a graph store? Why aren't the requirements
and expectations application/standard specific?

I'm assuming that the main thing you want is for the output format to be
the same as the input format, but I don't see why that necessarily has
to be the case. For example, Twitter supports the following output
formats: JSON, XML, RSS, ATOM. However, it only supports XML and JSON as
input.

Rarely do the objects that are sent to the service (typically query
objects) map directly to the objects that are sent out from the service
(typically tweets, users, timeline, search output, etc.). That doesn't
mean that it can't happen - just that most of these more popular JSON
REST systems aren't designed like that.

> Has the app write[r] now got to get involved in all that RDF
> stuff after all?

It depends entirely on the Web Service. If the web service is written
such that pure JSON is published and received and a mapping is provided
via something like a JSON-LD context, then no - the App writer can
continue using JSON just like they have been. If the App writer would
like to extract RDF from the JSON, they can use the JSON-LD context to
do so.
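As a sketch of that arrangement (the term names and IRIs here are illustrative, not taken from any published context), the service publishes a JSON-LD-style context alongside its plain JSON; the App writer never looks at it, while an RDF consumer resolves keys through it:

```javascript
// Illustrative only: a context the Web service could publish separately.
// App writers consume "doc" as ordinary JSON and ignore the context;
// RDF-aware consumers use it to map keys onto full predicate IRIs.
const context = {
  "name": "http://xmlns.com/foaf/0.1/name",
  "homepage": "http://xmlns.com/foaf/0.1/homepage"
};

// The document itself stays pure JSON:
const doc = { "name": "Manu", "homepage": "http://digitalbazaar.com/" };

// An RDF-aware consumer resolves each key through the context.
const predicates = Object.keys(doc).map((key) => context[key]);
```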

If the Web service is written such that it takes advantage of CURIEs and
type coercion and all of the other goodies that JSON-LD has in it, then
the app writer would need to at least use the jsonld.parse() API to
convert the output of the web service to an object with which they're
more familiar.
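One of the steps such a parse performs is CURIE expansion. The following is a hedged sketch of just that step (the prefix map is illustrative, and this is not the jsonld.parse() implementation): a key like "foaf:name" is expanded against the declared prefixes, and anything without a declared prefix passes through unchanged:

```javascript
// Sketch of CURIE expansion: "prefix:suffix" becomes the prefix's IRI
// plus the suffix; unknown prefixes are left as-is.
function expandCurie(curie, prefixes) {
  const idx = curie.indexOf(":");
  if (idx === -1) return curie; // not a CURIE at all
  const prefix = curie.slice(0, idx);
  const iri = prefixes[prefix];
  return iri !== undefined ? iri + curie.slice(idx + 1) : curie;
}

// Hypothetical prefix map, for illustration only.
const prefixes = { "foaf": "http://xmlns.com/foaf/0.1/" };
const expanded = expandCurie("foaf:name", prefixes);
// → "http://xmlns.com/foaf/0.1/name"
```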

-- manu

-- 
Manu Sporny (skype: msporny, twitter: manusporny)
President/CEO - Digital Bazaar, Inc.
blog: Payment Standards and Competition
http://digitalbazaar.com/2011/02/28/payment-standards/
Received on Thursday, 17 March 2011 16:19:25 UTC
