Re: Spec review/pull request

On 08/30/2011 12:49 PM, Markus Lanthaler wrote:
> In section 3.11 Framing: is the output always automatically compacted? If
> so, we should mention that.

The output is always compacted according to the context given in the 
frame. The entire framing section is a little wanting. We need to better 
explain framing's purpose and necessity in general as well as its 
specifics. Some of that might come from the email I sent out last week 
about how it works [1].
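To make that concrete, here is a toy sketch (not the spec algorithm; every term, IRI, and name below is made up for illustration) of the key idea: the frame both selects matching nodes and supplies the @context that the output is compacted with:

```python
# Toy illustration only: the frame provides the matching criteria AND the
# @context used to compact the framed output. Real framing is far richer.

def frame_sketch(expanded_nodes, frame):
    ctx = frame["@context"]  # simple term -> IRI map
    iri_to_term = {iri: term for term, iri in ctx.items()}
    # Select nodes whose @type matches the frame's @type.
    matched = [n for n in expanded_nodes
               if frame["@type"] in n.get("@type", [])]
    node = matched[0]
    # Compact property IRIs back to the frame's terms.
    compacted = {iri_to_term.get(k, k): v for k, v in node.items()}
    return {"@context": ctx, **compacted}

nodes = [{"@type": ["http://schema.org/Person"],
          "http://xmlns.com/foaf/0.1/name": "Alice"}]
frame = {"@context": {"name": "http://xmlns.com/foaf/0.1/name"},
         "@type": "http://schema.org/Person"}

framed = frame_sketch(nodes, frame)
# framed["name"] == "Alice"; framed["@context"] is the frame's context
```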

> I've changed the API interface in section 5. The Application Programming
> Interface so that all methods have a nullable context parameter. In
> compact() it wasn't nullable before but nothing prevents a user to have the
> context specified inline (in input).

Well, I think the json-ld.org playground might be a bit misleading about 
where the context comes from during compaction. Any context found in the 
input is stripped (the input is expanded) before the provided external 
context is applied. The purpose of making a compact API call is to apply 
a context different from the one given with the input -- and, optionally, 
to shorten object IRIs through type coercion, but we haven't added that 
feature just yet.
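A toy sketch of that strip-then-apply behavior (assuming simple term-to-IRI contexts; this is an illustration, not the real algorithm):

```python
# Sketch of why an inline @context doesn't survive compact(): the input is
# first expanded (inline context consumed and removed), and then only the
# external context is applied. All names here are illustrative.

def compact_sketch(doc, external_ctx):
    inline_ctx = doc.get("@context", {})
    # Step 1: expand -- replace the input's terms with full IRIs and drop
    # the inline context entirely.
    expanded = {inline_ctx.get(k, k): v
                for k, v in doc.items() if k != "@context"}
    # Step 2: compact -- apply only the external context.
    iri_to_term = {iri: term for term, iri in external_ctx.items()}
    compacted = {iri_to_term.get(k, k): v for k, v in expanded.items()}
    return {"@context": external_ctx, **compacted}

doc = {"@context": {"fullName": "http://xmlns.com/foaf/0.1/name"},
       "fullName": "Alice"}
result = compact_sketch(doc, {"name": "http://xmlns.com/foaf/0.1/name"})
# result uses "name", not "fullName": the inline context was stripped
```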

>   On the other hand it might be useful to
> be able to pass a separate (potentially additional) context to the expand(),
> frame(), normalize(), and triples() functions.

I didn't understand the utility of this right away, but after thinking 
about it I suppose the purpose is to just add context to inputs that do 
not have any without making other API calls? For example, in the 
expand() case, you might provide an external context that defines (or 
redefines) the terms in your input -- so that the final expanded output 
looks different from how it would otherwise. Is this the behavior you 
intended to support by adding the additional parameter? If so, I believe 
the same behavior can be accomplished by first calling expand(), then 
setting @context to the separate context, and then calling the other 
method(s).

Perhaps it is worth the convenience of doing what you want in a single 
API call (or perhaps not). I'm a bit ambivalent about it at the moment. 
If you think we need to support merging contexts (that is how I read 
your "potentially additional"), then maybe we should expose a function 
in the API to do just that. However, I suspect that 
such a method would only be used to combine contexts where you knew what 
the resulting context would look like; otherwise you wouldn't know how 
to work with your output.
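Such a helper might look something like this (the name and semantics are hypothetical -- plain last-definition-wins merging of simple term maps):

```python
def merge_contexts(*contexts):
    """Hypothetical context-merging helper: combine contexts left to
    right, with later term definitions overriding earlier ones. As noted
    above, this is only safe when you already know what the merged
    context will look like."""
    merged = {}
    for ctx in contexts:
        merged.update(ctx)
    return merged

merged = merge_contexts(
    {"name": "http://xmlns.com/foaf/0.1/name"},
    {"name": "http://schema.org/name", "age": "http://schema.org/age"})
# the second definition of "name" wins
```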

> In section 6.9.1 Compaction Algorithm: Is the first step really to perform
> the Expansion Algorithm? If that's really what's intended (it somehow makes
> sense) we should describe in one or two sentences why.

Yes, it is -- I added some language to the spec that hopefully better 
explains it. The compaction algorithm begins by "cleaning the slate" and 
removing any existing context so that it can then apply the new one that 
has been provided.

> In section 6.12 Data Round Tripping. Why is a double normalized to "%1.6e"?
> Is there a special reason or was this just arbitrarily defined?

The reason is that JSON parsers will read in doubles and store them in 
native form. To round-trip them in a normalized way, we define a single 
output format, which avoids the wide variety of ways different systems 
serialize doubles. We picked the printf format "%1.6e" because it is 
easily implemented in many different languages. We should be clearer 
about this in the spec.
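For example, in printf-style formatting, "%1.6e" means scientific notation with one digit before the decimal point and six after:

```python
# "%1.6e": exponent notation with six digits after the decimal point;
# supported by printf-style formatting in most languages.
print("%1.6e" % 2.5)        # 2.500000e+00
print("%1.6e" % 0.1)        # 1.000000e-01
print("%1.6e" % 12345.678)  # 1.234568e+04
```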


[1] http://lists.w3.org/Archives/Public/public-linked-json/2011Aug/0078.html

-- 
Dave Longley
CTO
Digital Bazaar, Inc.

Received on Wednesday, 31 August 2011 05:23:34 UTC