Re: Another alternate version of the JSON-LD API spec

I still don't think I'm too comfortable using an Inverse Context, as it seems like we're embedding too much implementation-specific detail into the algorithm. However, if two different implementors think it is important, I won't stand in the way.

In general, I think this re-write is a big improvement, setting out the motivations for the algorithms and giving a high-level description of each algorithm. As I note below, I think the wording is a bit folksy and should use more spec-like language in describing what is to be done, but this is not critical for this version of the document.

The RDF algorithms indicate that more work is necessary, but I see that issue 125, requiring that the to/from RDF algorithms be defined in terms of standard RDF [1], has been closed, so it might be okay after all. The issue "This algorithm needs some clarification on its details" itself needs some clarification, as I don't see what's not clear. However, I think we can add some conversion examples to 5.18 that go through some of the cases.
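
For example, a 5.18 conversion example might walk through a case like the following (a sketch only, written as TypeScript for concreteness; the IRIs are made up, not proposed spec text):

    // An expanded JSON-LD node object and the single RDF triple it
    // should produce. Illustrative only.
    const input = {
      "@id": "http://example.com/people#gregg",
      "http://xmlns.com/foaf/0.1/knows": [
        { "@id": "http://example.com/people#dave" }
      ]
    };
    // Expected RDF output (one N-Triples statement):
    // <http://example.com/people#gregg> <http://xmlns.com/foaf/0.1/knows> <http://example.com/people#dave> .

Stepping through a few such cases (plain node references, typed literals, lists) would make the RDF conversion section much easier to check against an implementation.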

It will be some time before I update my own implementation based on these algorithms; doing so represents a substantial amount of work, and my implementation currently passes almost all of the expansion/compaction tests as it exists, so it's difficult to justify the fairly large amount of work necessary to change it.

Specific comments:

The section uses "we" quite a bit, which I don't think is quite right. I'd rather see it written using "After searching is complete ..." rather than "When we have finished searching ...". It should also probably have an Algorithm section, even if it is brief. This is also a good place to note that relative IRIs are resolved against the enclosing document, so that remote contexts are opened properly relative to either the referencing JSON-LD document or to another remote context.
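
For example, the resolution behavior I have in mind is roughly this (a TypeScript sketch for illustration; the URLs and the resolveContextUrl helper are hypothetical, not spec text):

    // Resolve a relative @context reference against the URL of the
    // document (or remote context) that contains it.
    function resolveContextUrl(reference: string, baseUrl: string): string {
      // The WHATWG URL constructor handles both absolute and relative references.
      return new URL(reference, baseUrl).href;
    }

    // A document at https://example.com/data/doc.jsonld that uses
    // "@context": "context.jsonld" should fetch the context from:
    resolveContextUrl("context.jsonld", "https://example.com/data/doc.jsonld");
    // -> "https://example.com/data/context.jsonld"

    // And if that remote context itself references "../shared/common.jsonld",
    // the reference is resolved against the remote context's URL:
    resolveContextUrl("../shared/common.jsonld", "https://example.com/data/context.jsonld");
    // -> "https://example.com/shared/common.jsonld"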

Rather than calling sub-sections "Problem", we should consider something like "Purpose".

I like having a "General Solution" sub-section to give a high-level overview of the basic operation of the algorithm. Both of these sections should be marked as "informative".

I find saying "xxx equals null" problematic, as this is often not how it is expressed in many contexts, or equivalence has a different meaning. Also, there is really only one "null" value. I prefer to say "xxx is null". I think this wording is better for other cases as well, such as "If iri does not equal @type" => "If iri is not @type".

In the Create Term Definition Subalgorithm, what is the purpose of the special "@preserve" keyword? It is not defined or used anywhere else AFAICT.

Is it reasonable for a property generator to have @type as one of the @id values? Perhaps not (5.4.3 step 12.2.2.2).
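
For reference, the sort of property generator I mean looks roughly like this (a made-up context, written as a TypeScript literal):

    // "label" is a property generator: it expands to two predicate IRIs.
    // The question above is whether @type should be allowed to appear
    // in that @id array alongside ordinary IRIs.
    const context = {
      "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
      "skos": "http://www.w3.org/2004/02/skos/core#",
      "label": { "@id": ["rdfs:label", "skos:prefLabel"] }
    };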

The "Otherwise ... otherwise" phrasing is awkward; we can always smooth this out post-LC.

Normative statements in the algorithm may duplicate such statements in the Syntax grammar, for example "Otherwise id must be a string" (5.4.3 step 12.3).

We should probably add something to Context Expansion to raise an error if "key" is a keyword (section 5.3.3 step 3.6), because the grammar says "A term MUST NOT equal any of the JSON-LD keywords".
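
Something along these lines, perhaps (an illustrative TypeScript sketch; the keyword list and the error wording are mine, not the spec's):

    // Keywords that a term must not redefine (illustrative list).
    const KEYWORDS = new Set([
      "@context", "@id", "@value", "@language", "@type",
      "@container", "@list", "@set", "@graph", "@vocab", "@index"
    ]);

    // Check performed while processing each key in a local context.
    function assertValidTerm(term: string): void {
      if (KEYWORDS.has(term)) {
        throw new Error(`keyword redefinition: "${term}" must not be used as a term`);
      }
    }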

Gregg Kellogg
gregg@greggkellogg.net

[1] https://github.com/json-ld/json-ld.org/issues/125

On Feb 4, 2013, at 9:55 AM, Dave Longley <dlongley@digitalbazaar.com> wrote:

> I've added a rough draft of the alternate version of the JSON-LD API spec I've been working on to the json-ld.org repository on github.
> 
> It can be viewed here: http://json-ld.org/spec/latest/json-ld-api/alternate2.html
> 
> The main purpose of this spec is to provide more explanation for how each algorithm works, integrate some of Markus' ideas (in particular, inverse context) with the original algorithms, and make appropriate changes to reflect an actual implementation of the algorithms (the playground's processor uses the algorithms in this version).
> 
> This spec version does not include some of the issues that I believe are still being debated, including the issue related to relative IRIs and terms appearing in @ids.
> 
> In addition to changes to the algorithms, the introduction was rewritten, some sections were reordered, and some of the examples were shortened or changed slightly for simplicity. I think there may still be room for improvement with simplifying or cleaning up the examples. Some of the now unused terminology was also removed. I think we may want to revisit some of the terminology like "active <variable>" -- perhaps not renaming these things, but instead considering them simply variable names instead of part of a heavy processor state that isn't needed to implement the spec or describe the algorithms.
> 
> While working on the spec, I noticed a number of inconsistencies that we should resolve. I'm sure many of these simply arose out of different people working on the spec. These inconsistencies include but are not limited to: variable naming (camel-case or multiple words?), the use of "an" or "a" preceding keywords, the use of "the" or "a/an" prior to variable names in algorithm prose, various capitalization differences for headings, etc., the use of very specific data-structure information within algorithms (see the new inverse context algorithm for example) vs. loose instructions to just "store X in Y" or "retrieve X from Y", whether or not the new "Problem" and "General Solution" subsections should appear with every algorithm, what is considered a "subalgorithm" and what isn't (perhaps there's a better way to help the reader understand how these algorithms fit together), the use of "equals null/true/false" or "is null/true/false", the use of "if foo is not a key" vs. "if foo does not equal a key in".
> 
> I also noticed that we do a lot of the same semi-simple operations throughout the algorithms, such as normalizing values to arrays (creating an array with a single item in it if the item isn't already an array). There may be a nicer technique for describing this (or linking to it) than either being overly verbose or too vague.
> 
> I didn't make changes to the flattening, node map generation, or convert to RDF algorithms.
> 
> Hopefully this new text can be a basis for moving the spec forward.
> 
> -dave
> 
> -- 
> Dave Longley
> CTO
> Digital Bazaar, Inc.
> 
> http://digitalbazaar.com
