Re: Spec review/pull request

Markus, I think your edits are great and should be merged. Note that you're listed as an owner and can do the merge yourself.

As for the other issues, I think they're mostly on point and we should discuss further.

Gregg

On Aug 30, 2011, at 9:49 AM, Markus Lanthaler wrote:

> Hi all,
> 
> Over the weekend I've reviewed the spec and made a number of changes to it.
> I've already sent a pull request, and I've also annotated most of the changes
> in one of the commits on GitHub, since it's too much to explain in a mail:
> 
> https://github.com/lanthaler/json-ld.org/commit/84b291cc306c27973f1b6817fd25326030ef5312
> 
> 
> Nevertheless I would like to explain some of the changes here and add a few
> further possible enhancements.
> 
> In section 2.1 Goals and Rationale, "streaming" is mentioned. How is that
> supposed to work? What's the motivation? What are the use cases? If we leave
> it in the spec we should elaborate on it a bit.
> Also, what are the *three* keywords a developer needs to know for JSON-LD
> (2.1 Simplicity)? Should we list them there?
> 
> 
> In section 2.2 Linked Data we start talking about
> subjects/properties/objects without mentioning triples. This might be a bit
> confusing, as there is no need to distinguish subjects from objects if you
> don't mention that the data is serialized in the form of SPO triples.
> I'm also not 100% convinced that we should talk about
> subjects/properties/objects at all. Properties are normally referred to as
> predicates, and there is some ambiguity when talking about "objects" since
> JSON objects also exist. Talking about entities/attributes/values
> might avoid some of this confusion.
> I would also propose to add a little figure showing a graph to this section
> to illustrate the underlying data model.
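> 
> To make the subject/property/object parallel concrete (my own
> illustration, not taken from the spec; the foaf URL is just an example
> vocabulary), a snippet like
> 
> ```json
> {
>   "@subject": "http://example.com/people#john",
>   "http://xmlns.com/foaf/0.1/name": "John Lennon"
> }
> ```
> 
> expresses a single triple: <http://example.com/people#john> as the
> subject/entity, the foaf name IRI as the predicate/attribute, and
> "John Lennon" as the object/value. The key plays the role people
> usually call "predicate", which is exactly where the terminology gets
> muddled.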
> 
> 
> In section 2.3 Linking Data the phrase "good Linked Data" is used. What's
> good Linked Data? What's bad Linked Data?
> 
> 
> I've merged sections 2.4 The Context and 2.4.1 Inside a Context, as I found
> them rather difficult to understand the way they were. There was no logical
> thread and quite a few repetitions.
> I would also propose to change, e.g., the "avatar" key to "image" so that
> not all JSON terms map directly to a term in a Web Vocabulary, as this might
> generate wrong assumptions.
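> 
> Something along these lines (illustrative only; I picked
> foaf:depiction just as an example mapping target):
> 
> ```json
> {
>   "@context": {
>     "image": "http://xmlns.com/foaf/0.1/depiction"
>   }
> }
> ```
> 
> Here the JSON key ("image") and the vocabulary term ("depiction")
> differ, which makes it obvious that the mapping is a deliberate choice
> rather than an automatic one-to-one correspondence.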
> 
> 
> In Section 2.5 From JSON to JSON-LD: shouldn't we mention that the subject
> is missing and that this example will result in a blank node?
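> 
> I.e., for a document like this (made-up example):
> 
> ```json
> {
>   "http://xmlns.com/foaf/0.1/name": "Mary Smith"
> }
> ```
> 
> there is no subject at all, so a processor would have to mint a blank
> node identifier (e.g. _:bnode0; the exact label is
> implementation-defined) to attach the resulting triples to.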
> 
> 
> I think Section 3.1 IRIs should be elaborated. The first list, especially
> point 1, is not really clear to me. How do we distinguish between IRIs used
> as keys and terms that are not IRIs?
> 
> 
> In section 3.11 Framing: is the output always automatically compacted? If
> so, we should mention that.
> 
> 
> In section 3.1 CURIEs it says "the @vocab mechanisms is useful to
> easily associate types and properties with a spec. vocabulary". Shouldn't
> that be @context?
> 
> 
> I've changed the API interface in section 5. The Application Programming
> Interface so that all methods have a nullable context parameter. In
> compact() it wasn't nullable before, but nothing prevents a user from
> specifying the context inline (in input). On the other hand, it might be
> useful to be able to pass a separate (potentially additional) context to
> the expand(), frame(), normalize(), and triples() functions.
> I've also changed the capitalization from JSONLDProcessor to JsonLDProcessor
> to make it more readable. Feel free to revert this.
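> 
> In WebIDL terms the change amounts to something like this (a sketch of
> the parameter shape only; return types and exact method signatures are
> as in the pull request, so please treat that as the actual version):
> 
> ```
> object expand(object input, optional object? context = null);
> object compact(object input, optional object? context = null);
> object frame(object input, object frame, optional object? context = null);
> ```
> 
> i.e. the context parameter becomes nullable/optional everywhere, and a
> null context simply means "use whatever is inline in the input".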
> 
> 
> In section 6.3 Context, list item 3.2: if the @vocab value MUST be an
> absolute IRI, why do we need to perform IRI expansion then?
> 
> 
> In section 6.3.2 Initial Context, "Processors MAY provide means of setting
> the base IRI programmatically" - shouldn't that MAY be a SHOULD?
> 
> 
> In section 6.9.1 Compaction Algorithm: is the first step really to perform
> the Expansion Algorithm? If that's really what's intended (it somehow makes
> sense), we should explain why in one or two sentences.
> 
> 
> In section 6.12 Data Round Tripping. Why is a double normalized to "%1.6e"?
> Is there a special reason or was this just arbitrarily defined?
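> 
> For reference, assuming "%1.6e" means the C printf conversion
> specifier, a value like 5.3 would come out as
> 
> ```
> 5.300000e+00
> ```
> 
> i.e. always six digits after the decimal point (seven significant
> digits), which would lose precision for doubles that need more.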
> 
> 
> Is Appendix C Mashing Up Vocabularies really needed? That's already
> described in the spec. If we really decide to leave it in the spec we should
> at least include the context in the first example.
> 
> 
> 
> After reading the spec I've also come up with a number of questions and
> proposals, which I will send in separate mails. This mail is already too
> long, so I fear no one will actually read it anyway :-P
> 
> 
> Cheers,
> Markus
> 
> 
> 
> --
> Markus Lanthaler
> @markuslanthaler
> 

Received on Tuesday, 30 August 2011 17:45:12 UTC