
JSON-LD Telecon Minutes for 2013-04-02

From: Manu Sporny <msporny@digitalbazaar.com>
Date: Thu, 04 Apr 2013 10:07:58 -0400
Message-ID: <515D893E.5040305@digitalbazaar.com>
To: Linked JSON <public-linked-json@w3.org>
CC: RDF WG <public-rdf-wg@w3.org>
Thanks to Dave Longley for scribing! The minutes from Tuesday's telecon
are now available.


Full text of the discussion follows, including a link to the audio.

JSON-LD Community Group Telecon Minutes for 2013-04-02

Topics:
   1. Web Payments Launch (PaySwarm / Meritora)
   2. ISSUE-235: Let @vocab take precedence over compact IRIs in
      compaction
   3. Rename '@type': '@vocab' to '@type': '@context'
   4. ISSUE-224: Sandro Hawke's Feedback
   5. ISSUE-234: Sandro Hawke's JSON-LD API spec review
   6. ISSUE-236: Zhe Wu's JSON-LD API spec review
Resolutions:
   1. When compacting IRIs @vocab should take precedence over
      Compact IRIs. This reverses the previous order of precedence.
   2. Specify what canonical lexical form is for xsd:integer and
      xsd:double by referencing the XML Schema 1.1 Datatypes
      specification. When processors are generating output, they are
      required to use this form.
Chair: Manu Sporny
Scribe: Dave Longley
Present: Dave Longley, Manu Sporny, Niklas Lindström, Gregg Kellogg,
   Markus Lanthaler, Paul Kuykendall, David I. Lehn

Dave Longley is scribing.
Manu Sporny:  on the agenda today we have going over sandro's
   syntax feedback, the @vocab precedence issue, sandro's api spec
   review, Zhe Wu's review, and roundtripping concerns
Manu Sporny:  any changes?
Niklas Lindström:  let's do the @vocab issue first. i'm not sure
   if it's feasible to consider; we can't change the syntax, i
   believe, but we say that a "term has a type of @vocab" (because
   @id doesn't support terms as values), so i was wondering if we
   could consider using "@type": "@context" instead
Manu Sporny:  we should discuss it

Topic: Web Payments Launch (PaySwarm / Meritora)

Manu Sporny: http://blog.meritora.com/launch/
Manu Sporny:  our company launched Meritora, which is built on
   top of JSON-LD
Manu Sporny:  the whole protocol is powered by JSON-LD, real
   money is flowing through Meritora, so hopefully JSON-LD is
   well-designed where it matters!
Niklas Lindström:  btw, i'm now an employee of the national
   library of sweden so i'm representing them, not just myself
Manu Sporny:  you'll want to change your status in the CG, btw.

Topic: ISSUE-235: Let @vocab take precedence over compact IRIs in compaction

Manu Sporny: https://github.com/json-ld/json-ld.org/issues/235
Niklas Lindström:  basically, the issue is that, when compacting,
   even if @vocab is defined in the context, if there is a prefix
   defined with the same value, the prefix will take precedence over
   @vocab
Niklas Lindström:  if you, for instance, inherit a context with
   that prefix defined, even if you set @vocab, you won't get vocab
   terms when compacting
Niklas Lindström:  i think that other serializations work
   differently (@vocab having precedence instead)
Niklas Lindström:  i think that compaction should make things as
   small as possible, using @vocab over CURIEs does that
Niklas Lindström:  i think it would be sometimes unexpected if
   the behavior is to give precedence to CURIEs over @vocab, if you
   have a lot of prefixes defined w/various dependencies you might
   get undesirable compaction results
Niklas Lindström:  if the precedence order went the other way, it
   wouldn't be so bad
trj (via IRC):  Hi, does anyone know of any examples of JSON-LD
   being used with IoT sensors? Perhaps with
   http://purl.oclc.org/NET/ssnx/ssn
Dave Longley:  I tried to break out some of the reasons in the
   issue tracker, the issue that is going to be the strangest is
   when you have prefixes in a previous chained context. [scribe
   assist by Manu Sporny]
Dave Longley:  That might be an issue - @vocab trumps everything.
   [scribe assist by Manu Sporny]
Dave Longley:  So, this is a question of what we think the
   default case is... if you are using @vocab to catch terms that
   aren't there, then it's going to override too many of the
   prefixes that are defined. [scribe assist by Manu Sporny]
Dave Longley:  If you want CURIEs in the output, you probably
   don't use @vocab. [scribe assist by Manu Sporny]
Dave Longley:  If you want to use @vocab in compaction, and you
   don't want CURIEs in the output, it's been suggested by niklas,
   and I think he's right, that you're much more likely to have used
   prefixes in the context, not the data. [scribe assist by Manu
   Sporny]
Dave Longley:  It's going to be a lot more complicated to get the
   output you want, you have to undefine all the prefixes, that's
   not good for developers. [scribe assist by Manu Sporny]
Dave Longley:  As far as I can tell, that is going to be the more
   common case, and we should probably support it (unless there is a
   reason that someone can see why we should prefer CURIEs over
   @vocab) [scribe assist by Manu Sporny]
Gregg Kellogg:  i can easily imagine a number of cases where you
   get an initial context that does define schema as a prefix but
   the document does say @vocab is schema
Gregg Kellogg:  i can see how you'd end up in precisely this case
   here so preferring @vocab makes sense
Gregg Kellogg:  if a CURIE is more like a term then we have to
   continue to use that (it's defined explicitly on the LHS w/types
   and containers on the right)
Gregg Kellogg:  that's more like using a term in that case
Dave Longley:  i agree, it's a term at that point
Niklas Lindström:  yes, i don't think this change would affect that
Manu Sporny:  do you disagree with changing this?
Markus Lanthaler:  no, i tried implementing it and it was easy
Markus Lanthaler:  as long as it's just for compaction
Niklas Lindström:  if someone changed expansion that would be bad

PROPOSAL: When compacting IRIs @vocab should take precedence over
   Compact IRIs. This reverses the previous order of precedence.

Niklas Lindström: +1
Dave Longley: +1
Manu Sporny: +1
Gregg Kellogg: +1
Paul Kuykendall: +1
Markus Lanthaler: +1

RESOLUTION: When compacting IRIs @vocab should take precedence
   over Compact IRIs. This reverses the previous order of
   precedence.
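
The resolved precedence rule can be sketched in a few lines. This is
an illustrative Python sketch, not a conformant JSON-LD processor;
the context values and the helper name are invented for the example:

```python
# Illustrative sketch of the resolved IRI-compaction precedence:
# @vocab wins over compact IRIs (CURIEs). Context values are invented.
context = {
    "@vocab": "http://schema.org/",
    "schema": "http://schema.org/",   # a prefix with the same value
}

def compact_iri(iri, ctx):
    """Shorten an absolute IRI, preferring @vocab over CURIEs."""
    vocab = ctx.get("@vocab")
    if vocab and iri.startswith(vocab) and len(iri) > len(vocab):
        return iri[len(vocab):]        # bare vocab-relative term
    for term, value in ctx.items():    # fall back to a CURIE
        if term.startswith("@") or not isinstance(value, str):
            continue
        if iri.startswith(value) and len(iri) > len(value):
            return term + ":" + iri[len(value):]
    return iri                         # nothing applies: keep the IRI

print(compact_iri("http://schema.org/name", context))
# -> "name" under the new rule, where it used to be "schema:name"
```

With the old precedence the loop over prefixes would have run first
and produced "schema:name" even though @vocab covers the IRI.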

Topic: Rename '@type': '@vocab' to '@type': '@context'

Niklas Lindström: .. "category": "Film"
Niklas Lindström: .. "category": {"@type": "@vocab"}
Niklas Lindström:  right now, if you want to have a syntax
   expression in your document like what's in IRC, the value there
   is a term, and you want it to be looked up in the context: either
   you will find a defined term for it and expand it, or you want to
   resolve it against @vocab
Niklas Lindström: .. "category": {"@type": "@context"}
Niklas Lindström:  a recent addition was to use "@type": "@vocab"
Niklas Lindström:  given that it only resolves against @vocab in
   one of the cases (it can use terms otherwise)
Manu Sporny:  i don't think that @vocab is ideal, but i also don't
   think @context is better
Niklas Lindström: .. "category": {"@type": "@symbol"}
Manu Sporny:  but adding another keyword fights against our
   desire to keep the number of keywords limited
Gregg Kellogg:  i think they are both unfortunate, but it's one
   minute to midnight w/the spec, so we need to raise the bar for
   making changes
Dave Longley:  I think part of the confusion is that the meaning
   of @vocab is being overloaded. I think that looking at @context
   would be more confusing. [scribe assist by Manu Sporny]
Dave Longley:  One way to look at @type: @vocab is to think "Oh,
   the type of the term is also a vocabulary term" [scribe assist by
   Manu Sporny]
Dave Longley:  The type of data that goes with this term is a
   'vocabulary thing' - which means anything in the @context.
   [scribe assist by Manu Sporny]
Dave Longley:  if we had @context there, I'd be more confused - I
   would expect a context to be there. [scribe assist by Manu
   Sporny]
Markus Lanthaler:  i agree with dave on this
Niklas Lindström:  i thought about this, if we see @context there
   we get confused
Niklas Lindström:  when i did the explanation to myself, i
   reasoned like dave, like dave said, the value is looked up in the
   @context, and i said, in my head, @context more than @vocab, so i
   thought maybe it would better explain it
Dave Longley:  We're either going to overload what @type means,
   or we're going to overload the value of @type to get the
   explanation right. We already overload @type for datatype and
   regular type. [scribe assist by Manu Sporny]
Niklas Lindström:  Yeah, maybe we need to document this
   clearly... [scribe assist by Manu Sporny]
Dave Longley:  Yes, explaining that @vocab has a different
   meaning is probably easier than putting @context there and
   explaining it. [scribe assist by Manu Sporny]
Paul Kuykendall:  i just wanted to chime in, i'm more of an
   outsider so maybe i can help
Paul Kuykendall:  i do think that @vocab is easier to understand,
   just from listening to the discussions
Paul Kuykendall:  using JSON-LD you're already used to a little
   bit of overloading
Niklas Lindström:  ok, that's what i needed to hear, i withdraw
   any proposal here
Manu Sporny:  yes that was helpful, thank you, paul
Manu Sporny:  ok, then no change, let's just close the issue
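
For reference, the behavior the group decided to keep can be
sketched roughly as follows: a value whose term is coerced with
"@type": "@vocab" is first looked up as a term in the context, and
only then resolved against @vocab. This is a hand-rolled Python
sketch; the context entries are invented, not taken from any spec
example:

```python
# Sketch of "@type": "@vocab" value expansion: defined terms win,
# otherwise the value is resolved against @vocab. Context is invented.
context = {
    "@vocab": "http://schema.org/",
    "category": {"@id": "http://schema.org/category", "@type": "@vocab"},
    "Film": "http://schema.org/Movie",   # an explicitly defined term
}

def expand_vocab_value(value, ctx):
    term_def = ctx.get(value)
    if isinstance(term_def, str):              # a defined term wins
        return term_def
    if isinstance(term_def, dict) and "@id" in term_def:
        return term_def["@id"]
    return ctx["@vocab"] + value               # fall back to @vocab

print(expand_vocab_value("Film", context))   # "http://schema.org/Movie"
print(expand_vocab_value("Book", context))   # "http://schema.org/Book"
```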

Topic: ISSUE-224: Sandro Hawke's Feedback

Manu Sporny: https://github.com/json-ld/json-ld.org/issues/224
Manu Sporny:  sandro had even more feedback for markus, can we
   get through this quickly, is there much to be dealt with, it
   seemed like you mostly had it under control
Markus Lanthaler:  not really much to discuss, the most critical
   thing is whether we normatively reference RDF-CONCEPTS
Markus Lanthaler:  we'll be discussing that tomorrow in the RDF
   WG too
Markus Lanthaler:  sandro is happy with the other changes i made
Markus Lanthaler:  the only thing left is data roundtripping
Manu Sporny:  ok, that's good, let's focus on just the syntax
Manu Sporny:  it looks like it was mostly editorial
Markus Lanthaler:  i'll double check that... yeah, it was just
   mostly editorial
Markus Lanthaler:  i'll send an email to sandro later to ensure
   everything has been addressed
Gregg Kellogg:  there are some check boxes to check in the issue
Markus Lanthaler:  those are some things that i think we
   shouldn't change, which i explained in emails, i just need
   sandro's feedback on us not changing it.
Markus Lanthaler:  i think there's no change required for those
Paul Kuykendall:  the only ones that i see that might be of
   interest are where the overloading of @vocab is mentioned as an
   issue
Markus Lanthaler:  yeah, that's one we already discussed earlier
   in the call
Paul Kuykendall:  yeah, just want to make sure we're in agreement
Manu Sporny:  yes, i think we're sure about the keywords and the
   meaning we overload and we made tradeoffs to do that
Manu Sporny:  and it isn't perfect but we aren't going to change
   it without some really good feedback/lots of people making big
   mistakes with it for us to change it

Topic: ISSUE-234: Sandro Hawke's JSON-LD API spec review

Manu Sporny: https://github.com/json-ld/json-ld.org/issues/234
Markus Lanthaler:  mostly editorial, sandro didn't review the
   algorithms himself, but had a number of other comments, which i
   already addressed, and sandro agrees we can close the issue
Markus Lanthaler:  the only thing remaining is data roundtripping
   which i split out into a separate issue so we can better focus

Topic: ISSUE-236: Zhe Wu's JSON-LD API spec review

Manu Sporny: https://github.com/json-ld/json-ld.org/issues/236
Manu Sporny:  Zhe basically didn't review the algorithms either
   because he found them too complicated to read through
Manu Sporny:  he asked us to reorganize the document fairly
   substantially
Manu Sporny:  i think it's a bad idea, but we'll see what the
   group thinks
Manu Sporny:  i think it would take a lot of time but not buy us
   much, we attempted to do what he wanted earlier
Manu Sporny:  we are where we are now with the spec because that
   previous attempt didn't work as well, we made changes that we
   thought made it easier to read the spec today than that
   alternative approach
Markus Lanthaler:  he didn't say that the algorithms were too
   complex, but rather that they were too long
Manu Sporny:  well, we're doing that for a reason, we want to be
   very explicit about what happens
Paul Kuykendall:  the comments i've gotten from my colleague who
   has been implementing the various algorithms, starting before the
   algorithm split, he said the new ones are much easier to follow
   and easier to understand what's going on
Gregg Kellogg:  there's a certain stylistic issue with what you
   see in programming (eg: taking large algorithms and breaking them
   into smaller sections so that each sub step fits on a screen)
Gregg Kellogg:  i took some of Zhe's comments in that light, and
   some of the algorithms would perhaps be more useful if they were
   broken out into smaller subsections
Gregg Kellogg:  i take markus' point that it might be difficult
   with going back and forth (jumping around) to figure out what's
   going on then
Gregg Kellogg:  i'll use the same razor i did before though,
   we're getting close to the end, i don't think we need to change
   this, it's stylistic change, that is left to the purview of the
   editors, we should leave it alone
Manu Sporny:  i agree, the rest seemed like editorial changes, is
   that true?
Markus Lanthaler:  yes, i got back to him, he didn't get back to
   me yet to let me know if the changes were enough or if he really
   really wanted the algorithms split up
Paul Kuykendall:  i do agree that shorter algorithms are easier
   to read, but you don't want to lose the context of where you are
   and how the algorithms work, i think we struck a good balance
   where we mostly broke the algorithms into sub parts where they
   could be reused, and implementations might break them up more,
   but that's an implementation detail, not for the algorithms
Markus Lanthaler:  he also raised the point that he would prefer
   numbers for errors not strings
Dave Longley:  I don't think we need to convert the string values
   to numbers, we're not programming on a Commodore 64 - we have
   much more modern programming environments available to us [scribe
   assist by Manu Sporny]
Paul Kuykendall:  string processing does suck in some languages,
   but what we've done internally is use numbers
Gregg Kellogg:  i don't think the purpose of the spec is to
   explain how to implement this in every way
Manu Sporny: ISSUE-237: Sandro's Data Round Tripping Concerns
Manu Sporny: https://github.com/json-ld/json-ld.org/issues/237
Markus Lanthaler:  he doesn't understand most of this section
Markus Lanthaler:  if we keep this section, why don't we only
   convert canonical lexical form values to native types, leaving
   the rest alone
Markus Lanthaler:  the other thing we didn't discuss here is the
   potential precision loss you experience when converting to
   native types
Markus Lanthaler:  so i think the question here is whether we
   keep the requirement that implementations must use canonical
   lexical form
Markus Lanthaler:  and if we do keep it, if we only convert to
   native types when the values are in canonical lexical form
Manu Sporny:  so the reason that we say this sort of thing in the
   spec, is because if we don't say anything about it, people will
   be surprised when their numbers start having rounding errors ...
   they take their space probe and crash it into mars because they
   weren't expecting the behavior
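
The precision loss being referred to is easy to demonstrate: IEEE
754 doubles (the native number type most JSON parsers hand you)
cannot represent every integer above 2^53, so a round trip through a
native type can silently change a value. A small Python
illustration:

```python
# xsd:integer values above 2**53 can't all survive a trip through an
# IEEE 754 double, which is what most JSON parsers produce natively.
big = 2**53 + 1            # 9007199254740993
as_double = float(big)     # nearest representable double
print(int(as_double))      # 9007199254740992 -- off by one
print(int(as_double) == big)   # False: the round trip lost precision
```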
Manu Sporny:  we also want to be very clear about what they do
   with those numbers to round trip
Dave Longley:  testing becomes far more difficult if we don't
   specify this
Markus Lanthaler:  well, rounding errors don't have to do with
   canonical lexical form
Dave Longley:  but testing does
Manu Sporny:  it seems like an interoperability issue if we don't
   specify this
Manu Sporny:  if we specify it, it's very clear how to
Gregg Kellogg:  i guess the alternative would be to defer to XSD
   where possible for the definition of canonical lexical form
Markus Lanthaler: sandro's response to my argument "we do it to
   simplify testing" was "I don't think simplifying testing merits a
   MUST..... Or, if it does, then say that, instead of saying it's
   because of round-tripping...."
Gregg Kellogg:  or at least to say "this is the same as" or "as
   defined by"
Markus Lanthaler:  i think he's saying it might be enough to say
   something is a certain type (xsd:integer/xsd:double)
Markus Lanthaler:  and not care about the lexical value
Dave Longley:  There was a specification where something was
   specified in canonical lexical form; we changed it from what it
   originally was, a lowercase 'e', to what it is now, based on some
   specification. Maybe we should've been referencing that
   specification. [scribe assist by Manu Sporny]
Markus Lanthaler:  there are differences between what
   languages/JSON serializers use, 'e' vs 'E', for the canonical
   form
Gregg Kellogg:  i do recall that we eliminated some rounding
   issues w/ruby w/decimal precision length
Markus Lanthaler:  i think the string format we used previously
   also ensured that a value was 64-bit, since JSON itself doesn't
   define the value space
Markus Lanthaler: That was the issue about precision
Manu Sporny:  we should reference the XSD spec to be very clear
   about this or reference which spec we based this off of
(missed scribing a bit here)
Paul Kuykendall:  i'm looking at the xml-schema second edition
   under data types and it does define quite a bit there
Markus Lanthaler: Here's our current data round tripping section:
Paul Kuykendall:  someone should look there
Manu Sporny:  the spec was xml-schema part 2 data types
Manu Sporny:  let's make sure it matches what we have in the spec
   and let's just refer to the spec
Manu Sporny:  instead of paraphrasing
Gregg Kellogg: [XMLSCHEMA11-2]
Manu Sporny:  but we still make very clear what the canonical
   lexical form is
Gregg Kellogg:  yes, RDF-CONCEPTS references this and we should
   sync up
Markus Lanthaler:  do we require canonical lexical form then?
Gregg Kellogg:  we shouldn't restrict the input
Gregg Kellogg:  but we should be able to transform into that form
   to allow for lexical comparison
Gregg Kellogg:  in my serializers i have an option for
   canonicalization, but if that were always supplied i would fail
   some specs
Gregg Kellogg:  JSON of course has its own restrictions when
   dealing with numbers because it's a native representation, not a
   strictly lexical one
Gregg Kellogg:  you can't ensure that the input looks exactly the
   same as the output
Paul Kuykendall:  are we talking about maintaining the mapping
   between JSON native types and xsd types?
Dave Longley:  To be clear, we're not talking about changing the
   mapping - for a number in JSON, it's either going to be an
   xsd:integer or an xsd:double. We should tell implementations what
   these lexical forms should look like if you convert the number to
   a string. [scribe assist by Manu Sporny]
Markus Lanthaler: all of these are valid: 1.4 = 14E-1 = 14e-1, but
   there's only one canonical lexical form: 1.4E0
Paul Kuykendall:  i just wanted to make clear that we weren't
   changing the mappings here, it sounds good to me that we're just
   talking about mapping things to an external spec
Manu Sporny:  it's really important that we have the flag in the
   algorithm for converting/not converting to native types
Markus Lanthaler:  we aren't defining an API so we don't need to
   define the flag because there's no operation there
Dave Longley:  All we have to do is revert a change we made - if
   a flag is set, change to native types, if it is not set, don't
   change to native types. [scribe assist by Manu Sporny]
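
Dave's suggested flag behavior might look like the following rough
sketch; the function and flag names are hypothetical, and the flag
is assumed to default to off:

```python
# Sketch of the proposed "convert to native types" flag used when
# transforming RDF literals back into JSON-LD values. Names invented.
XSD = "http://www.w3.org/2001/XMLSchema#"

def literal_to_json(value, datatype, use_native_types=False):
    if use_native_types:
        if datatype == XSD + "integer":
            return int(value)
        if datatype == XSD + "double":
            return float(value)
    # flag off (the default): keep an expanded @value object, so the
    # original lexical form survives the round trip untouched
    return {"@value": value, "@type": datatype}

print(literal_to_json("42", XSD + "integer"))
# {'@value': '42', '@type': 'http://www.w3.org/2001/XMLSchema#integer'}
print(literal_to_json("42", XSD + "integer", use_native_types=True))
# 42
```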
(more missed scribing)
Manu Sporny:  so i think we have one solid proposal then
Manu Sporny:  to put the flag for convert to/from native types
   when doing RDF conversion back into the algorithm
Markus Lanthaler:  and deciding if we should require canonical
   lexical form
Niklas Lindström:  i think the rdf type flag is important because
   it's just a relation like anything else
Niklas Lindström:  i'm a bit wary about removing that flag
Gregg Kellogg:  as i recall the reason we had it was so that we
   could do other mapping during compaction/expansion
Gregg Kellogg:  there's nothing to prevent you from navigating

Discussion about whether the exact lexical form for literals in
   RDF (converted from JSON-LD) must be specified at all.
Manu Sporny:  the only way to compare literals in the abstract
   model is to jump into lexical space to do the comparison, if we
   don't specify the lexical form for this data, you can't do a
   comparison, and we don't have interoperability
Markus Lanthaler:  i don't think we need to be specifying this,
   it should be in an RDF spec
Gregg Kellogg:  we're dealing with native representations so we
   are losing the lexical form, so we need to be able to convert
Discussion about deferring the current issue to the RDF WG
Manu Sporny:  if we take a position in this group we can take
   that position to the group, instead of not taking one and making
   it an open-ended discussion
Markus Lanthaler:  the whole reason we're having this discussion
   is because sandro, from the RDF WG, has an issue with it

PROPOSAL: Specify what canonical lexical form is for xsd:integer
   and xsd:double by referencing the XML Schema 1.1 Datatypes
   specification. When processors are generating output, they are
   required to use this form.

Manu Sporny: +1
Paul Kuykendall: +1
Dave Longley: +1
Gregg Kellogg: +1
Markus Lanthaler: +0
Niklas Lindström: +1
David I. Lehn: +0

RESOLUTION: Specify what canonical lexical form is for
   xsd:integer and xsd:double by referencing the XML Schema 1.1
   Datatypes specification. When processors are generating output,
   they are required to use this form.
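
The canonical forms the resolution refers to can be illustrated
with a small sketch. This is a best-effort reading of the XML
Schema 1.1 canonical mapping, not a drop-in implementation; edge
cases such as NaN, INF, and negative zero are ignored:

```python
# Sketch of XSD canonical lexical forms for xsd:double / xsd:integer:
# one non-zero digit before the decimal point, capital 'E', and no
# superfluous signs or zeros. Special values (NaN, INF) are ignored.
from decimal import Decimal

def canonical_double(value):
    if value == 0:
        return "0.0E0"
    sign, digits, exp = Decimal(repr(value)).as_tuple()
    e = exp + len(digits) - 1          # exponent for one leading digit
    mantissa_digits = "".join(map(str, digits)).rstrip("0") or "0"
    mantissa = mantissa_digits[0] + "." + (mantissa_digits[1:] or "0")
    return ("-" if sign else "") + mantissa + "E" + str(e)

def canonical_integer(value):
    return str(int(value))   # str() already omits '+' and leading zeros

print(canonical_double(1.4))    # "1.4E0", the example from the call
print(canonical_double(0.14))   # "1.4E-1"
print(canonical_integer(42))    # "42"
```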

Gregg Kellogg:  the last issue we can talk about is changing the
   RDF-CONCEPTS reference to be normative
Gregg Kellogg:  David Wood said that if we don't normatively
   reference our own documents (this is an RDF WG doc) that's a
   problem
Manu Sporny:  why don't we normatively reference other W3C
   documents?
Gregg Kellogg:  JSON-LD is an RDF serialization format, so every
   other RDF serialization format has a normative reference to
   RDF-CONCEPTS
Gregg Kellogg:  it's not clear that we're being an RDF syntax if
   we don't normatively reference RDF-CONCEPTS

-- manu

Manu Sporny (skype: msporny, twitter: manusporny, G+: +Manu Sporny)
Founder/CEO - Digital Bazaar, Inc.
blog: Meritora - Web payments commercial launch
Received on Thursday, 4 April 2013 14:08:24 UTC
