- From: Dave Longley <dlongley@digitalbazaar.com>
- Date: Mon, 04 Feb 2013 12:55:48 -0500
- To: public-linked-json@w3.org
- Message-ID: <510FF624.9000804@digitalbazaar.com>
I've added a rough draft of the alternate version of the JSON-LD API spec I've been working on to the json-ld.org repository on github. It can be viewed here:

http://json-ld.org/spec/latest/json-ld-api/alternate2.html

The main purpose of this spec is to provide more explanation of how each algorithm works, to integrate some of Markus' ideas (in particular, the inverse context) with the original algorithms, and to make appropriate changes to reflect an actual implementation of the algorithms (the playground's processor uses the algorithms in this version). This spec version does not include some of the issues that I believe are still being debated, including the issue related to relative IRIs and terms appearing in @ids.

In addition to the changes to the algorithms, the introduction was rewritten, some sections were reordered, and some of the examples were shortened or changed slightly for simplicity. I think there may still be room for improvement in simplifying or cleaning up the examples. Some of the now-unused terminology was also removed. I think we may want to revisit some of the terminology like "active <variable>" -- perhaps not renaming these things, but treating them simply as variable names rather than as part of a heavy processor state that isn't needed to implement the spec or describe the algorithms.

While working on the spec, I noticed a number of inconsistencies that we should resolve. I'm sure many of these simply arose out of different people working on the spec. These inconsistencies include, but are not limited to:

* variable naming (camel-case or multiple words?)
* the use of "an" or "a" preceding keywords
* the use of "the" or "a/an" before variable names in algorithm prose
* various capitalization differences in headings, etc.
* the use of very specific data-structure information within algorithms (see the new inverse context algorithm, for example) vs. loose instructions to just "store X in Y" or "retrieve X from Y"
* whether or not the new "Problem" and "General Solution" subsections should appear with every algorithm
* what is considered a "subalgorithm" and what isn't (perhaps there's a better way to help the reader understand how these algorithms fit together)
* the use of "equals null/true/false" or "is null/true/false"
* the use of "if foo is not a key" vs. "if foo does not equal a key in"

I also noticed that we do a lot of the same semi-simple operations throughout the algorithms, such as normalizing values to arrays (creating an array with a single item in it if the item isn't already an array). There may be a nicer technique for describing this (or linking to it) than being either overly verbose or too vague. (A quick sketch of what I mean is included at the end of this message.)

I didn't make changes to the flattening, node map generation, or convert to RDF algorithms.

Hopefully this new text can be a basis for moving the spec forward.

-dave

--
Dave Longley
CTO
Digital Bazaar, Inc.
http://digitalbazaar.com
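
P.S. Here's a rough sketch (in TypeScript) of the array-normalization helper I mentioned above. The name asArray is just for illustration -- it isn't from the spec or the playground code:

    // Normalize a value to an array: wrap a single item, pass arrays through.
    function asArray(value: unknown): unknown[] {
      return Array.isArray(value) ? value : [value];
    }

    // Example: "@type" may hold a single string or an array of strings,
    // so the algorithms normalize it to an array before iterating.
    const node = { "@type": "http://xmlns.com/foaf/0.1/Person" };
    for (const type of asArray(node["@type"])) {
      console.log(type); // http://xmlns.com/foaf/0.1/Person
    }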
Received on Monday, 4 February 2013 17:55:41 UTC