
JSON-LD Telecon Minutes for 2012-08-14

From: Manu Sporny <msporny@digitalbazaar.com>
Date: Tue, 14 Aug 2012 13:18:30 -0400
Message-ID: <502A8866.1040802@digitalbazaar.com>
To: Linked JSON <public-linked-json@w3.org>
CC: RDF WG <public-rdf-wg@w3.org>
The minutes from today's call are now available here:


Full text of the discussion follows, including a link to the audio.

JSON-LD Community Group Telecon Minutes for 2012-08-14

Topics:
   1. JSON-LD at NoSQL conference
   2. In-memory JSON-LD object representation
   3. @language / @vocab Intransitivity
   4. ISSUE-80: Remove initial context from API spec
   5. ISSUE-150: Use of native types in from/to RDF
   6. ISSUE-151: Context Processing Algorithm Dependency Resolution
Resolutions:
   1. Do not support an initial context in JSON-LD 1.0.
   2. Continue to support 'useNativeDatatypes' in .fromRDF(),
      specifying how the native type conversion happens. Do not support
      options for overriding each native datatype with a different
      value.
Action Items:
   1. Dave Longley to send suggested spec text to modify the
      algorithm to public-rdf-comments@w3.org
   Chair: Manu Sporny
   Scribe: Manu Sporny
   Present: Manu Sporny, Gregg Kellogg, Niklas Lindström, Markus Lanthaler,
   Dave Longley

Manu Sporny is scribing.
Manu Sporny:  Anything else that needs to be on the agenda?

Topic: JSON-LD at NoSQL conference

Gregg Kellogg:  I'm talking at the NoSQL conference, please
   review my slides.
Manu Sporny:  What's the strategy with this talk?
Gregg Kellogg:  Stay away from the algorithms, introduce the
   concepts - expanded form / compacted form - discussion on how to
   get to compacted form, and the benefits of expanded vs. compacted
   forms.
Gregg Kellogg:  Integration of JSON-LD with databases like MongoDB
   or CouchDB - use references, use object-level granularity,
   graphify() vs. other approaches to working with the data.
Gregg Kellogg:  Tying into backbone.js - trying to retrieve from
   DB, map to model (backbone), and display. Mostly practical
   experience and how to use JSON-LD in production code.
Gregg Kellogg:  I've built a JSON-LD serialization of schema.org
   and other vocabularies... which feeds a Backbone application that
   gives cool documentation for all of the API points - used
   constraints quite a bit to give more structure. Also built an
   object editor - object model defined by schema... helps
   create/edit/validate all object instances, which are then
   stored in a MongoDB collection.
Gregg Kellogg:  Any app built on any schema, very useful.
Manu Sporny:  Would you be able to put it out as open source at
   some point?
Gregg Kellogg:  I think so.
Gregg Kellogg:  Did lots of the backend in CoffeeScript.
Niklas Lindström:  Yeah, CoffeeScript is nice... also use
   JavaScript. Nicer to work in CoffeeScript.

Topic: In-memory JSON-LD object representation

Gregg Kellogg:  This type of application shows the advantages of
   Linked Data models... I think I'd like .link() instead of
   .graphify() because that's what you're really doing here.
Gregg Kellogg:  When I edit a specific object, I can do a query
   to get back any object that has a relationship to a particular
   object... then tie them together using .link() - pretty useful
   term for what I'm doing.
Niklas Lindström:  I use .connect() in my RDFa experiment.
Niklas Lindström:  In my RDF lab - simple idea - a schema
   vocabulary presentation and then some kind of editor. Did this
   years ago, but would like to try it out w/ JSON-LD approach.
Niklas Lindström:
Niklas Lindström:  I use an HTML5 data-* attribute to store the
   expanded version of the RDFa data...
Niklas Lindström:  I go from RDFa to JSON-LD directly...
Niklas Lindström:  I'll eventually put all of this up on the wiki
   - real goal of this experiment is to put more of the parser logic
   into the DOM API to eventually be able to produce JSON-LD from
   the DOM API.
Niklas Lindström:  I think it'll be pretty small amount of
Niklas Lindström:  Trying to find a balance between logic needed
   to extract @context from DOM and what the current term
Niklas Lindström:  If it has a subject that comes from a subject
   element, these things can be put together simply w/o extracting
   triples... conversion to JSON-LD should be easy.
Niklas Lindström:  The goal of the DOM API is to navigate through
   the DOM using RDF concepts.
Manu Sporny:  Yes, that's good - extracting JSON-LD is very good.
Niklas Lindström:  The Microdata API actually gives you back
   elements carrying the properties, then if you do .toJSON() you
   get the JSON representation at that point. It dawned on me that
   that may be the way forward for the DOM API. You want to be able
   to navigate the HTML elements using RDF subjects/objects, etc.
Manu Sporny:  Ok, anything else to add to the Agenda before we move on?

Topic: @language / @vocab Intransitivity

Niklas Lindström:  It might make sense to make @vocab and
   @language intransitive? If you inherit a context from another
   context, you don't necessarily inherit the @language. If you have
   a @context which sets the language, you don't see that. Maybe the
   closest context should affect the language.
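
The question Niklas raises can be sketched with a plain JSON-LD document. This is a hypothetical example (the vocabulary URL and term names are made up), using Python dicts to stand in for the JSON:

```python
import json

# Hypothetical illustration of the intransitivity question: if a document
# combines an inherited context (which sets @language) with a local one,
# which @language wins? Under the "closest context wins" reading, the
# local "en" applies to "title"; the debate is whether the inherited "sv"
# should apply at all when no local override exists.
inherited_context = {
    "@language": "sv",
    "title": "http://example.org/vocab#title"  # made-up vocabulary URL
}

doc = {
    "@context": [
        inherited_context,     # pulled in from another context
        {"@language": "en"}    # set locally, closest to the data
    ],
    "title": "Hello"
}

print(json.dumps(doc, indent=2))
```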
Manu Sporny:  my concern is that it complicates things (@language
   acts differently in different places) and prevents certain things
   from working (like being able to set @language in @context).
Niklas Lindström:  Is @vocab in there now?
Gregg Kellogg:  Yes, it's in there now, Markus added it.
Gregg Kellogg:  Also, there is now spec text for a more formal
   description of JSON-LD...
Manu Sporny:  Yes, haven't had a chance to look at it yet, hope
   to do so soon.
Gregg Kellogg:  I haven't pinged Andy Seaborne on it yet...
Manu Sporny:  Yes, we need to publish a new Working Draft via RDF
   WG as well - we should do it soon.
Niklas Lindström: https://github.com/RDFLib/rdflib-jsonld
Niklas Lindström:  I updated the JSON-LD implementation for RDFLib.

Topic: ISSUE-80: Remove initial context from API spec

Manu Sporny:  I think we left it at: Gregg and me wanting to put
   it in the spec, Markus and Niklas not wanting to put it into the
   spec.
Niklas Lindström:  It might be nice to have xsd and rdf prefixes
   pre-defined.
Gregg Kellogg:  Maybe schema as well?
Gregg Kellogg:  I end up setting @vocab to "http://schema.org/"
   for most of what we're doing.
Niklas Lindström:  I don't think we should do that (set @vocab to
   schema.org by default).
Manu Sporny:  It might be helpful to have pre-defined terms, but
   it might also make it more difficult to understand JSON-LD.
Niklas Lindström:  We might want to "level the ground" to not
   have implicit prefixes... if we decide to have an initial
   context, you should be able to completely cancel it by setting
   "@context": [null, x, y, z]
Gregg Kellogg:  I rarely use "rdf" namespace in JSON-LD. One
   advantage of JSON-LD is that there is support for datatypes. If
   you do any support for datatypes, you need "xsd". I continue to
   forget to include it and I have to go and look up the URI.
Gregg Kellogg:  I think that having "xsd" as a pre-defined term
   in the initial context would be very useful, and would avoid many
   easy mistakes.
Markus Lanthaler:  You don't really use "xsd" that often...
   mostly for dates.
Gregg Kellogg:  I also use it for owl serialization stuff - most
   of the cardinality values are unsigned integers.
Gregg Kellogg:  You have to declare something - not a typical
   usage pattern.
Niklas Lindström:  schema.org has defined its own datatypes,
   which are basically string-derived.
Gregg Kellogg:  In the OWL definition they have, they end up being
   object properties - it's not very useful. The rdfs.org version is
   much better in that regard; it doesn't contain as much range
   information.
Gregg Kellogg:  Interesting to note that you can't really do this
   modeling round-tripping in Microdata.
Niklas Lindström:  Take a look at the RDFa representation, they
   don't use rdfs:range and rdfs:domain... they use schema:range and
   schema:domain.
Markus Lanthaler:  This is convenient - but I don't think
   prefixes are one of the "good" features of JSON-LD.
Markus Lanthaler:  I think that w/o prefixes, they don't have
   value for you - and they may clash.
Niklas Lindström:  I think that if we add prefixes, anything
   used as a prefix cannot be used as a default term unless you
   reset it; otherwise they will collide.
Niklas Lindström:  Maybe we discussed this before - possibility
   of defining a term w/ a trailing colon to mean "only use this as
   a prefix, never a term".
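
Niklas's trailing-colon idea (a proposal discussed here, not a JSON-LD feature; the URLs and term names are made up) would look something like this:

```python
# Sketch of the proposed microsyntax: a context key ending in ":" would
# declare a prefix usable only in CURIEs ("sch:name"), never as a
# standalone term, avoiding prefix/term collisions.
ctx = {
    "sch:": "http://schema.org/",             # prefix-only declaration
    "name": "http://example.org/vocab#name"   # ordinary term; no clash
}

# a processor could distinguish the two kinds of keys like so:
prefixes = {k.rstrip(":"): v for k, v in ctx.items() if k.endswith(":")}
terms = {k: v for k, v in ctx.items() if not k.endswith(":")}
```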
Markus Lanthaler:  Argument against that is "no microsyntaxes"
Niklas Lindström:  There was a collision which would be
   preventable w/o the term. I defined a thing to use as a prefix
   locally, to make term declarations simpler/more compact.
Niklas Lindström:  The problem was that I did not want the prefix
   to be used later on in general, it ended up somewhere I wanted to
   use it.
Gregg Kellogg:  I do use prefixes a lot, mostly in context
   definitions.
Gregg Kellogg:  Within data, I rely on terms for things I
   explicitly use in my code. When I do things to work w/ more
   generic data, terms/curies don't impact me at all. When working
   with multiple vocabs, better to keep things in a CURIE syntax.
Gregg Kellogg:  Translation from TURTLE is more natural when
   using CURIEs as long as you have access to prefix definitions.
Markus Lanthaler:  In that case, there is no danger of
   forgetting a declaration - it comes from TURTLE.
Gregg Kellogg:  I was responding to "CURIEs not very useful"
Gregg Kellogg:  I think the only one that needs to be in there is
   xsd.
Manu Sporny:  I think I don't want an initial context in there
   now... you only need "xsd" when you are declaring contexts, and
   you don't do that very often.
Manu Sporny:  If we really need to put this feature in, then we
   can do so in the implementations later on.
Gregg Kellogg:  At this point, I don't feel strongly enough to
   object to not having it. My feeling is that it does more good
   than harm... but I'll just do a -0 on a resolution.

PROPOSAL:  Do not support an initial context in JSON-LD 1.0.

Manu Sporny: +1
Niklas Lindström: +0.75
Markus Lanthaler: +1
Gregg Kellogg: -0

RESOLUTION: Do not support an initial context in JSON-LD 1.0.

Topic: ISSUE-150: Use of native types in from/to RDF

Markus Lanthaler:  Before, we decided to have a "useNativeTypes"
   flag in the fromRDF() algorithm; it automatically assumes we're
   using "xsd:boolean", "xsd:double", etc.
Markus Lanthaler:  Dave Longley proposed to split that flag into
   4 different flags that specify the datatype for each of the
   native types... one flag for integers (maps by default
   to xsd:integer), and so on.
Markus Lanthaler:  The question is whether or not we want to do
   that.
Gregg Kellogg:  It's been several months since this came up - but
   adding flags to the RDF serialization makes it a little bit
   crufty and unwieldy. I'd rather come up with a set of practices
   that are just constrained to the RDF serialization. Let's just
   fix these things: if you're transforming a native integer to
   RDF, you can represent it fairly easily in TURTLE (as an
   integer).
Markus Lanthaler:  The question is in the other direction - if
   you're doing fromRDF() and you want to convert "unsigned
   integers" to "integers", then you would want to do this.
Gregg Kellogg:  I don't see the value in representing a
   non-negative integer as something else.
Manu Sporny:  My concern is that we're closing off some use case
   by not allowing this to happen.
Gregg Kellogg:  If everything I was doing was using the JSON-LD
   API, then everything I was doing through the API calls...
Gregg Kellogg:  I have an RDF distiller, it hasn't needed any
   special attributes for any particular serialization.
Gregg Kellogg:  I think that my serializers for TURTLE do not make
   use of the native representation for these things by default.
   There might be some tools, but I use string representations for
   the most part.
Markus Lanthaler:  I think the question is if we want to decouple
   JSON-LD from XSD. For example, schema.org uses their own
   datatypes.
Manu Sporny:  Two arguments here - complexity vs. flexibility
Gregg Kellogg:  Going from RDF to JSON-LD, that's a different
   issue - we don't make use of native datatypes when transforming
   from RDF to JSON-LD.
Manu Sporny:  My concern is that JavaScript developers expect
   stuff to be converted to native types.
Niklas Lindström:  I see cases for both sides of this argument. I
   mainly agree with Gregg - there is a standard way to say what
   types are in RDF.
Gregg Kellogg:  I might provide some interfaces to provide
   greater fidelity. I'd put a flag in there that says "use native
   datatype representations".
Gregg Kellogg:  We could say that "implementations may provide a
   mechanism that allows these representations to be overridden."
Manu Sporny:  Okay - so we could just keep 'useNativeDatatypes',
   specify how to do the conversion, and add this feature in later.
Markus Lanthaler:

PROPOSAL:  Continue to support 'useNativeDatatypes' in
   .fromRDF(), specifying how the native type conversion happens. Do
   not support options for overriding each native datatype with a
   different value.

Gregg Kellogg: +1
Manu Sporny: +1
Niklas Lindström: +1
Markus Lanthaler: +1

RESOLUTION: Continue to support 'useNativeDatatypes' in
   .fromRDF(), specifying how the native type conversion happens. Do
   not support options for overriding each native datatype with a
   different value.

NOTE: If this feature needs to be added in the future, it can be
   done without creating any backwards-compatibility issues.
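
The resolved behavior might look roughly like this sketch (illustrative only, not spec text; the function name is made up):

```python
# Illustrative sketch of the kind of conversion 'useNativeDatatypes'
# implies in .fromRDF(): typed RDF literals with well-known XSD datatypes
# become native JSON values; everything else stays a string. The datatype
# IRIs are the standard XSD ones.
XSD = "http://www.w3.org/2001/XMLSchema#"

def to_native(value, datatype):
    """Convert an RDF literal's lexical value to a native JSON type."""
    if datatype == XSD + "integer":
        return int(value)
    if datatype == XSD + "double":
        return float(value)
    if datatype == XSD + "boolean":
        return value == "true"
    # per the resolution, no per-datatype override options: anything else
    # (e.g. xsd:unsignedInt, schema.org datatypes) is left as a string,
    # with its @type preserved by the caller
    return value

assert to_native("42", XSD + "integer") == 42
assert to_native("true", XSD + "boolean") is True
```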

Topic: ISSUE-151: Context Processing Algorithm Dependency Resolution

Manu Sporny:  The main issue is this, if you define something
   like so in the @context: "schema": "http://schema.org/"
Manu Sporny:  and then later you do this: "name": "schema:name"
Manu Sporny:  Dave Longley is concerned that there are some
   corner cases that could lead to implementations doing two
   different things.
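
A minimal sketch (not the spec algorithm; function and variable names are made up) of the dependency question: expanding "schema:name" requires the "schema" prefix to have been resolved first, regardless of key order in the @context, and cyclic definitions must be detected:

```python
def expand_term(term, context, seen=None):
    """Expand a term or CURIE against a context, erroring on cycles."""
    seen = seen or set()
    if term in seen:
        raise ValueError("cyclic term definition: " + term)
    if ":" in term:
        prefix, suffix = term.split(":", 1)
        if prefix in context:
            # resolve the prefix (possibly recursively) before appending
            return expand_term(prefix, context, seen | {term}) + suffix
        return term  # absolute IRI or unknown prefix: leave as-is
    if term in context:
        definition = context[term]
        if definition != term and (":" in definition or definition in context):
            return expand_term(definition, context, seen | {term})
        return definition
    return term

# the example from the discussion: order of keys doesn't matter
ctx = {"schema": "http://schema.org/", "name": "schema:name"}
assert expand_term("name", ctx) == "http://schema.org/name"
```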
Gregg Kellogg:  I'm concerned that we don't need to spec
   everything down to this level of detail. Oddly construed
   examples could be placed into the test suite - that could catch
   these implementation issues.
Gregg Kellogg:  This sounds like an error condition that we want
   to signal, not something that everyone can resolve in the same
   way.
Markus Lanthaler:  It's difficult to understand what the
   algorithms are supposed to do w/o implementing them... if you do
   implement them, every implementation does exactly the same
   thing.
Gregg Kellogg:  The risk is that we don't get good analysis from
   people that don't implement it.
Niklas Lindström:  I'm leaning towards Gregg's notion - a bit
   more lenience in how to implement an algorithm, while ensuring
   that test cases cover all the corner cases that might crop up, is
   the best approach. We may want a comment in the spec about edge
   cases - state the intent in the algorithm.
Gregg Kellogg:  Putting more detail in an algorithm in the
   appendix would be better. We don't want to subject readers to all
   the corner cases that we're trying to document.
Dave Longley: i think that if you follow the algorithm in the
   spec you should pass the test suite
Markus Lanthaler: I agree
Niklas Lindström: me too...
Niklas Lindström:  I'd probably be a +0 on this, leaving too much
   to interpretation is dangerous.
Dave Longley: one sec
Dave Longley: was talking ... not working.
Dave Longley: software issues over here.
Gregg Kellogg:  I'd like to see some spec text, if it's simple
   then fine... if it's complex, then not okay.
Dave Longley: i was going to say that other specifications, in
   my experience, generally spell out how to write a fully
   conformant processor
Manu Sporny:  We'll ask Dave Longley to send some spec text to
   RDF comments.
Dave Longley: and that, at least when i'm reading a spec to try
   and implement it, i'd prefer it to be that way
Dave Longley: i think one should be able to just read the spec,
   following the language --- and write code as you go, resulting in
   something that passes the test suite.
Dave Longley: if you want to go off on your own and change the
   algorithm or try and improve upon it, that's fine (and
   encouraged, i'd think)
Dave Longley: but, at a minimum, i don't think implementors
   should have to figure out their own algorithms if they don't want
   to

ACTION: Dave Longley to send suggested spec text to modify the algorithm
to public-rdf-comments@w3.org

Dave Longley: i do think it's somewhat annoying to write the spec
   language :), however, i don't think that that is a good reason to
   not do it
Dave Longley: yes
Gregg Kellogg:  To respond to Dave Longley, there has been some
   evidence that spec algorithm complexity has prevented folks from
   getting into the specs a bit further.
Gregg Kellogg:  I'd like them to be able to participate w/o being
   buried in details.
Gregg Kellogg:  I have an action to annotate RDF algorithms to
   make it more clear about what's going on.
Dave Longley: prevented implementors from getting into the specs
   or users of the APIs?
Dave Longley: because i think that might be a presentation issue
   if it's just the users of the APIs
Dave Longley: i can see it being frustrating, when writing an
   implementation, that you don't pass the test suite ...
Dave Longley: and the spec document doesn't spell out why
Dave Longley: i don't think that's something that an implementor
   wants to spend their time on
Dave Longley: i guess i'm ok with the spec spelling it out in
   another section ... but i don't know what that really buys us
Dave Longley: and that it might just make it more complex to
   implement (having to jump around the spec to find caveats, etc)
Manu Sporny:  I think there are good arguments from both sides -
   if the algorithms are too high-level, you get interoperability
   problems; if they're too low-level, it's hard for people to
   contribute.
Gregg Kellogg:  I do agree that test cases shouldn't come out of
   nowhere... there should be spec text backing them up. Having test
   cases should protect against some of these corner cases. It's a
   balancing act.
Niklas Lindström:  I wonder if there is room for internal
   re-factoring? For example, in the RDFa algorithm there are
   references from high-level text (in the algorithm) to low-level
   text elsewhere in the spec.
Dave Longley: if we simply spell out that prefix dependencies
   have to be resolved and cycles must generate errors -- and then
   we craft test cases that expose the corner cases, then i can
   compromise and agree to that.
Manu Sporny:  Dave, yeah, I think that's where we are right now.
Dave Longley: i worry that we don't currently have those corner
   cases -- and that it's difficult to generate them, however.
   (doesn't mean they won't show up in the real world)
Gregg Kellogg:  I think the term selection algorithm is an
   example of this right now, it's very complicated. We could
   simplify it, but the result might not be that desirable when
   faced with complicated input.
Niklas Lindström:  I think the important part is that when the
   data is very complex, you want to make each part distinct.
   created/date and created/datetime are important to differentiate
   in complex data.
Gregg Kellogg:  I'm concerned about the geometric complexity that
   is added when we support all of these corner cases, we may just
   want to punt on some of this stuff.
Dave Longley: another thing to be concerned with is that if we
   don't spec some of this stuff out, then the algorithms that are
   in the spec might be fundamentally incompatible with a working
   implementation
Dave Longley: as in, if we have a very detailed way of looping
   over keys in a context during processing ... but in order to
   properly resolve dependencies we must not loop (not saying this
   is the case)
Dave Longley: then we've spec'd out something that doesn't
   actually work when you take into consideration the other parts of
   the spec that aren't spec'd in detail.
Manu Sporny:  Okay, that's the call for today, we'll meet again
   next week.

-- manu

Manu Sporny (skype: msporny, twitter: manusporny)
President/CEO - Digital Bazaar, Inc.
blog: Which is better - RDFa Lite or Microdata?
Received on Tuesday, 14 August 2012 17:19:02 UTC
