Re: JSON-LD & nested structure

good afternoon;

> On 2016-08-19, at 19:36, Gregg Kellogg <gregg@greggkellogg.net> wrote:
> 
>> On Aug 17, 2016, at 5:20 PM, james anderson <james@dydra.com> wrote:
>> 
>>> 
>>> On 2016-08-18, at 01:52, Gregg Kellogg <gregg@greggkellogg.net> wrote:
>>> 
>>>> On Aug 17, 2016, at 4:33 PM, james anderson <james@dydra.com> wrote:
>>>> 
>>>> good morning;
>>>> 
>>>>> On 2016-08-17, at 22:43, Gregg Kellogg <gregg@greggkellogg.net> wrote:
>>>>> 
>>>>> […]
>>>>>> 
>>>>>> I don't know if the Ruby implementation supports these features yet.
>>>>> 
>>>>> I believe I support all of the embedding options that Dave’s does.
>>>>> 
>>>>> BTW, on my short-term list is to try to update the Framing spec based on this common behavior.
>>>> 
>>>> 
>>>> if you should get to that, please distinguish between behaviour which concerns or presumes a json data model and that which concerns just the encoding itself.
>>>> as the document stands, there are aspects which one ignores - with bad conscience, but to advantage, when one has no json data model and there are others which, when they are implemented because they are stated so explicitly, but without a model, are just not a good idea.
>>> 
>>> James, I imagine the algorithm to be defined in a manner similar to both the existing, and other JSON-LD algorithms. As framing always involves expansion, the structure of both the frame, and the source document is well defined in terms of expanded JSON-LD.
>>> 
>>> Could you provide an example of where the existing text confuses the JSON data model and the encoding? If this is confusing in existing algorithms, can you suggest how that wording might be improved?
>> 
>> while the potential for confusion among an abstract model, a concrete model, and the concrete encoding applies to other issues with the documents, with respect to this issue, the concern is that the framing document presumes a concrete model.
>> in detail, that it is a “json” model is incidental.
> 
> Algorithms are described as working on “language-native” data structures, not JSON.
> All algorithms described in this section are intended to operate on language-native data structures. That is, the serialization to a text-based JSON document isn't required as input or output to any of these algorithms and language-native data structures must be used where applicable.
> 
> A processor parses JSON into a local data structure and the algorithms work across that deserialized data.

the cited passage reiterates the issue quite directly.
the “json-ld framing 1.0” document describes itself, within the first “page”, as a “detailed specification for a serialization”, as concerning the “layout of a [tree]” which is the “end result” of a mapping from a graph, and as concerning “a [] document which is a representation of a directed graph”.

as indicated by the cited passage, the algorithm descriptions would appear to be transliterations of implementations based on particular concrete “language-native data structures” which represent a model for the directed graph.
some of the features of the algorithms appear neither essential to the nature of directed graphs or their serializations, nor to offer any advantage to either the generation or the consumption of such serializations.
two aspects are noted in my earlier message.

in this situation, if one considers revising the document, there could be advantage to a clearer separation between those aspects of the algorithms which are necessary to the purpose stated at the document's outset and those which are incidental to the initial implementations.

if the conclusion is that the specification is intended just exactly to describe methods to transform javascript data models, that would be ok, but it should then be more explicit as to its scope.
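
for concreteness, a minimal sketch of what the stated purpose amounts to - a frame applied to a graph to yield a tree - using the jsonld.js frame call; the vocabulary and data are invented for illustration, and the exact output shape should be taken from the specification rather than from this sketch.

import * as jsonld from 'jsonld';

// an invented graph: two node objects, one referencing the other.
const doc = {
  "@context": { "ex": "http://example.org/" },
  "@graph": [
    { "@id": "ex:book1", "ex:author": { "@id": "ex:alice" } },
    { "@id": "ex:alice", "ex:name": "alice" }
  ]
};

// a frame matching nodes which carry ex:author, asking that the author be embedded.
const frame = {
  "@context": { "ex": "http://example.org/" },
  "ex:author": {}
};

jsonld.frame(doc, frame, (err, framed) => {
  // "framed" is a language-native object whose tree layout is the "end result"
  // the framing document speaks of: ex:alice embedded beneath ex:book1.
  console.log(JSON.stringify(framed, null, 2));
});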

> 
>> the examples which spring to mind:
>> 
>> - if i have understood the document correctly, it stipulates that as part of the process the data be ordered by id.
>> there are situations in which it is possible to arrange for that without materializing a model, but that is not always the case.
>> in other situations, this requirement makes it difficult to stream a response encoded as json-ld.
> 
> Indeed, most JSON-LD algorithms require that all data be deserialized and properly ordered. Most notably, this is a part of Expansion, Compaction, Node Map Generation, To RDF and Context processing. Materializing is certainly required for Compaction and Framing.

this acts to the detriment of any use case which intends to serialize from anything other than a “native data structure”, such as a process which emits a json-ld document on-the-fly.
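
to make that use case concrete, a sketch of such an on-the-fly producer; the writer interface and the context url are hypothetical, and nothing in it comes from the framing document:

// emit node objects as they arrive from the store, without materializing
// the graph or sorting it by @id.
async function emitGraph(
  nodes: AsyncIterable<object>,
  write: (chunk: string) => void
): Promise<void> {
  write('{"@context": "http://example.org/context.jsonld", "@graph": [\n');
  let first = true;
  for await (const node of nodes) {
    if (!first) write(',\n');
    write(JSON.stringify(node));   // one node object at a time
    first = false;
  }
  write('\n]}\n');
}
// an ordering requirement on @id forces the entire graph to be collected
// and sorted before the first byte can be written.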

> 
> In practice, ordering can be relaxed for Expansion when used for RDF generation.

this is an example of the distinction which is described above.
the specification would be improved if these practical options were to be described in more explicit terms.
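
as an illustration of why the relaxation is safe in that case - the output of rdf generation is a set of quads, so the order in which node objects are encountered is immaterial - a sketch, assuming the jsonld.js toRDF call with n-quads output; the data and the comparison helper are invented:

import * as jsonld from 'jsonld';

// the same two node objects, in two different orders.
const a = { "@context": { "ex": "http://example.org/" },
            "@graph": [ { "@id": "ex:1", "ex:p": "x" }, { "@id": "ex:2", "ex:p": "y" } ] };
const b = { "@context": { "ex": "http://example.org/" },
            "@graph": [ { "@id": "ex:2", "ex:p": "y" }, { "@id": "ex:1", "ex:p": "x" } ] };

// compare the two serializations as sets of quads.
const asSet = (nq: string) => nq.split('\n').filter(l => l.length > 0).sort().join('\n');

jsonld.toRDF(a, { format: 'application/nquads' }, (errA, nqA) => {
  jsonld.toRDF(b, { format: 'application/nquads' }, (errB, nqB) => {
    console.log(asSet(nqA as string) === asSet(nqB as string));   // true
  });
});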

> 
> Ordering, and other considerations for streaming profiles, isn’t something we can address in the Framing document, as it depends on the underlying 1.0 algorithms, which do manifest and order. Some consideration for streaming profiles for JSON-LD might be good to address in a next version, and we’re collecting feature requests at http://github.com/json-ld/json-ld.org/issues. However, removing ordering requirements for Compaction and Node Map Generation is likely not feasible if textual reproducibility is needed (which it is now).

there is no reason that the one should necessarily undermine the other.
if json-ld serializations must be able to fulfil a constraint on “textual reproducibility”, it is not clear why that would not be captured in some explicit profile parameter - just as embedding is - and left to the larger implementation to satisfy, rather than entraining it as an implementation feature of the generation process.
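
a sketch of what such a parameter could look like, alongside the existing embed option of jsonld.js; the "ordered" flag below is purely hypothetical - it is neither in jsonld.js nor in the framing document:

import * as jsonld from 'jsonld';

const doc = { "@context": { "ex": "http://example.org/" }, "ex:p": "value" };
const frame = { "@context": { "ex": "http://example.org/" } };

// embed is an existing frame option; ordered is the hypothetical profile parameter.
jsonld.frame(doc, frame, { embed: '@last', ordered: false }, (err, framed) => {
  // with ordered: false a producer would be free to emit nodes as they arrive;
  // a consumer needing textual reproducibility would request ordered: true.
  console.log(JSON.stringify(framed));
});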


> 
>> - the @link mode:
>>> One more thing -- I forgot there's also an "@link" embed option, which
>>> will cause the output to use direct object references (in-memory links)
>>> when embedding. This kind of output can't necessarily be serialized due
>>> to potential circular references, but it is often useful for applications.
>> 
>> - the @last mode presumes there is some state within which a reference is known to be last. @first would be as unfortunate. @never is the only one which makes sense for streamed data.
>> 
> 
> @last is well-defined due to ordering requirements.

yes, while it is not clear to me how the ordering requirement is, in itself, sufficient to determine the “last” reference, that requirement is an aspect of the “state within which a reference is known to be last”, which is part of the issue.
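
for reference, a sketch of the situation in question, assuming jsonld.js semantics: two node objects reference ex:alice, and with "@embed": "@last" only the reference which comes last - last once the data has been collected and ordered - carries the embedded node, the earlier one being reduced to a bare { "@id": ... }; a consumer of streamed data cannot know which reference that is until the end of the input. the data is invented for illustration.

import * as jsonld from 'jsonld';

const doc = {
  "@context": { "ex": "http://example.org/" },
  "@graph": [
    { "@id": "ex:book1", "ex:author": { "@id": "ex:alice" } },
    { "@id": "ex:book2", "ex:author": { "@id": "ex:alice" } },
    { "@id": "ex:alice", "ex:name": "alice" }
  ]
};

const frame = {
  "@context": { "ex": "http://example.org/" },
  "@embed": "@last",
  "ex:author": {}
};

jsonld.frame(doc, frame, (err, framed) => {
  // only one of ex:book1 / ex:book2 embeds ex:alice; the other holds a reference.
  console.log(JSON.stringify(framed, null, 2));
});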

best regards, from berlin,
---
james anderson | james@dydra.com | http://dydra.com

Received on Thursday, 25 August 2016 15:28:11 UTC