
Re: [JSON] user segments, version 2

From: Andy Seaborne <andy.seaborne@epimorphics.com>
Date: Mon, 11 Apr 2011 22:49:09 +0100
Message-ID: <4DA37755.90304@epimorphics.com>
To: public-rdf-wg <public-rdf-wg@w3.org>


On 08/04/11 08:46, Ivan Herman wrote:
>
> On Apr 8, 2011, at 05:35 , Sandro Hawke wrote:
>
>> After some more thought, I think my user segments matrix [1] is a poor
>> fit for the issues we need to understand.   I suggest a somewhat
>> different approach here.
>>
>> To start, we can simplify the space by ignoring the rows, ignoring the
>> different kinds of producers.  This is reasonable if we think that data
>> producers will use whatever formats are demanded by their intended
>> consumers.   Yes, some may be recalcitrant in some way, but I think if
>> they have an option that makes all their users happy, they will adopt
>> it.  So we can just focus on what will make the users happy.
>>
>> So that leaves us with the same three groups of users, which I'll
>> reframe slightly, and one more I neglected earlier:
>>
>> Group A -- Developers who want a simple JSON view of their application
>>            data.  These are the folks using all the popular json APIs,
>> 	   such as twitter, facebook, flickr, etc, etc.
>>
>> Group B -- Developers who want a simple JSON view of arbitrary RDF
>>       	   triples.  These are the folks happy to use things like
>>            Talis' RDF JSON, JTriples, etc.
>>
>> Group C -- Developers who want RDF triples, but are willing to use a
>>       	   library or API to get it.  This group would be satisfied by
>>       	   something that worked for Group B, since a library could
>>       	   easily extract the triples.
>>
>> Group D -- Developers who want a simple JSON view of a limited subset
>>            of RDF triples, such as tree-shaped data without any
>>            language tags or dates.  This group would also be
>>            satisfied by something that worked for Group B; they just
>>            don't need as complete a format.
>>
>> Groups A, B, and C are the same as in the matrix.  Group D is new.
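
As a concrete illustration (my own sketch; the record, names, and IRIs
below are made up, not from Sandro's note), here is the same data as a
Group-A-style plain JSON view versus a Group-B-style, Talis-RDF/JSON-like
triple view:

```python
import json

# Group A: the plain JSON view developers get from typical web APIs.
group_a = {"name": "Alice", "homepage": "http://example.org/alice"}

# Group B: the same data in a triple-oriented, Talis-RDF/JSON-like shape:
# {subject: {predicate: [{"type": ..., "value": ...}]}}.
group_b = {
    "http://example.org/alice": {
        "http://xmlns.com/foaf/0.1/name": [
            {"type": "literal", "value": "Alice"}
        ],
        "http://xmlns.com/foaf/0.1/homepage": [
            {"type": "uri", "value": "http://example.org/alice"}
        ],
    }
}

# Both forms parse with a plain JSON parser, but the Group-B form asks
# the consumer to navigate full predicate IRIs rather than short keys.
print(json.dumps(group_b, indent=2))
```

The Group-B form carries full RDF triples at the cost of the short,
friendly keys that Group A expects.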
>
> I am not sure this is what you meant, but 'D' probably also means that users in this category are not really willing to use a specialized API. I guess they want to be able to make use of the data through a JSON.parse() call right away, and they are not willing (at first) to take the extra step of using an extra library.
>
>
>>
>> Now, with this simpler view, we can see where the real complication is:
>> we'd like to address several of these groups at once.  Ideally, we'd
>> like a single JSON format that works for everyone.  If it worked for
>> Group A, it would keep the current users happy (and twitter, facebook,
>> etc wouldn't mind adopting it). If it also worked for Group B (call
>> this an "AB" solution), it follows that it would also work for C and D
>> and everyone would be happy.  But I don't think there are any AB
>> solutions.
>>
>> It's somewhat easier to imagine "AC" and "AD" solutions.  I think
>> the debates are mostly about which of B, AC, and AD we can or should do.
>> Maybe ACD is possible, too.     I *think* Manu is pushing for an AC
>> solution
>
> I believe JSON-LD's goal (ie, Manu's:-) would be more something like ACD (not sure it reaches it, though). Ie, a format that, at least in simple data, *can* be used by users in group D.
>
>
>> and Andy and Steve are pushing back, since their current users
>> are mostly in Group B.

I am thinking approximately BC, and also of data publishers.
>
> I think you are right.
>
> Others have said it before: I think we have, potentially, two JSON+RDF things here, and that is what we may have to do.

Don't skip the data publishers' issues completely.  Sandro's 
analysis said that the data publishers will do whatever is necessary.  I 
think that's only partly true.  If it's augmenting a current JSON output 
process, I can believe that (evidence would be good, though).

An RDF data publisher, who is already publishing to RDF clients 
(e.g. Tabulator) via conneg, just wants to add an application/rdf+json 
output.

In particular the serialization should be automatic, not custom for 
different data consumers, in the same way that Turtle is.  All the 
serialization can assume is RDF triples and some namespace prefixes (in 
practice, I think all, or nearly all systems, provide some sort of 
prefix management).  Prefixes for some IRIs may not already exist; if 
necessary, as in RDF/XML, they can be generated, though the results are 
not pretty.
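
A minimal sketch of that kind of automatic serialization, assuming only
a set of (subject, predicate, object) triples and a prefix map, with
ugly generated prefixes (ns1, ns2, ...) as a fallback in the RDF/XML
manner (the function and all names are mine, purely illustrative, and
literals/bnodes are glossed over):

```python
from itertools import count

def serialize(triples, prefixes):
    """Automatically serialize (s, p, o) triples to a nested dict,
    shortening IRIs with the given prefix map and inventing ns1, ns2,
    ... for namespaces with no registered prefix (as RDF/XML does)."""
    prefixes = dict(prefixes)   # don't mutate the caller's map
    gen = count(1)

    def shorten(iri):
        for pfx, ns in prefixes.items():
            if iri.startswith(ns):
                return pfx + ":" + iri[len(ns):]
        # No prefix known: split at the last '/' or '#' and generate one.
        cut = max(iri.rfind("/"), iri.rfind("#")) + 1
        pfx = "ns%d" % next(gen)
        prefixes[pfx] = iri[:cut]
        return pfx + ":" + iri[cut:]

    out = {}
    for s, p, o in triples:
        out.setdefault(shorten(s), {}).setdefault(shorten(p), []).append(o)
    return out
```

The point is that nothing here is custom per data consumer: the
serializer sees only triples and a prefix map, exactly what a Turtle
writer sees.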

Doing something special for JSON, like needing JSON-specific or 
data-specific customization, would be a big hindrance.  The speed of 
processing JSON is the attraction.  Direct, yet RDF-centric, navigation 
is a bonus.
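
For instance (a sketch with made-up data, assuming a Talis-style
{subject: {predicate: [objects]}} shape), direct but RDF-centric
navigation looks like this:

```python
import json

# Assumed Talis-RDF/JSON-style payload; the book IRI and title are made up.
payload = """{
  "http://example.org/book/1": {
    "http://purl.org/dc/elements/1.1/title": [
      {"type": "literal", "value": "RDF Primer"}
    ]
  }
}"""

data = json.loads(payload)  # plain JSON parsing, no RDF library needed

# Direct navigation, but the keys are full subject/predicate IRIs.
title = data["http://example.org/book/1"][
    "http://purl.org/dc/elements/1.1/title"][0]["value"]
print(title)  # -> RDF Primer
```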

So I can see:

1/ JSON-as-RDF is the on-ramp for JSON data.
2/ Exchange between an RDF app and an RDF publisher is this case, and 
extends to the RDF API work.
3/ The off-ramp, RDF to "normal" JSON, includes the Linked Data API (see 
the gov WG).

Different design focuses lead to different answers.  Trying to cover 
too much in one design means it is good for one case but compromises on 
another, or even on all of them, and becomes complicated.


Being split across WGs is inconvenient but not an insurmountable 
barrier.  In the pre-charter blog commentary I saw, case 2 didn't get 
much support as necessary, but it did appear in the survey.

	Andy

>
>
> Ivan
>
>>
>> FWIW JRON aimed for AB, but I don't think it quite hits the mark.  In
>> particular, its JSON is perhaps too cluttered to be a real Group-A
>> solution (although for normal JSON data it's pretty clean), and for
>> some things (like namespaces) the consumer needs too much code to
>> work for Group B.
>>
>>     -- Sandro
>>
>> [1] http://www.w3.org/2011/rdf-wg/wiki/JSON_User_Segments
>>
>>
>
>
> ----
> Ivan Herman, W3C Semantic Web Activity Lead
> Home: http://www.w3.org/People/Ivan/
> mobile: +31-641044153
> PGP Key: http://www.ivan-herman.net/pgpkey.html
> FOAF: http://www.ivan-herman.net/foaf.rdf
>
>
>
>
>
Received on Monday, 11 April 2011 21:49:39 GMT
