RE: Timeline and priorities

> > I think we should decouple the progress of the syntax and the API a
> > bit more.
> 
> The short answer is - we can't. The syntax specification by itself
> doesn't provide enough detail to implement conformant, interoperable
> JSON-LD processors.
> 
> That is, many of the implementation details for JSON-LD are in the
> JSON-LD API spec. This was done on purpose in order to give readers a
> lighter introduction to JSON-LD. The down-side is that we can't just
> take the JSON-LD Syntax document to REC by itself. One alternative is
> to
> split the API spec into two parts - the part that defines the API and
> the part that explains how to process the context... but even that is a
> difficult line to draw and results in yet another spec that
> implementers
> must go and look at.

I was thinking more of including all the information necessary to parse a
JSON-LD document in the syntax document itself. Compaction, framing, and
normalization would not be required; they could remain in the API doc,
where, after all, the API is described. I often find it confusing myself to
have to dig through the API spec to understand how to interpret certain
aspects of a JSON-LD document. I do not expect anybody except API developers
to look into the API spec, so all the information required to understand a
JSON-LD document should really be in the syntax spec.
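To make the point concrete: the heart of "interpreting" a JSON-LD document is expanding context terms to full IRIs. A minimal, illustrative sketch (the document and helper are invented for illustration; a real processor also handles @id, type coercion, nesting, and much more):

```python
# Hypothetical sketch: expand top-level terms of a flat JSON-LD document
# using its @context. This only covers the simplest case, to show what
# kind of processing the syntax spec would need to describe.
def expand_terms(doc):
    ctx = doc.get("@context", {})
    return {
        ctx.get(key, key): value  # replace a term with its IRI if mapped
        for key, value in doc.items()
        if key != "@context"
    }

doc = {
    "@context": {"name": "http://xmlns.com/foaf/0.1/name"},
    "name": "Markus Lanthaler",
}
# expand_terms(doc) yields
# {"http://xmlns.com/foaf/0.1/name": "Markus Lanthaler"}
```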


> Ideally, authors would look at the JSON-LD syntax specification while
> implementers would look at the JSON-LD syntax specification, JSON-LD API
> specification, and RDF Normalization spec. So, all of the specs are
> coupled together.

Exactly! I think it wouldn't be that difficult to make that happen.


> > I have to confess I haven't yet had a deep look at framing and
> > normalization but since it's a brand new spec which implements a
> > thing that is quite different from existing approaches I wouldn't
> > rush too much on specifying the API.
> 
> I understand your hesitation, but let's look at the flip-side. To you,
> JSON-LD is a brand new spec... but to us (Digital Bazaar), it's many
> years old at this point. We've been using variations of it to build our
> products for that time period. I think a better approach would be for
> those in the group to come up to speed on framing and normalization and
> provide technical input on the algorithms presented. While it's good to
> be cautious about standardizing APIs too soon, it's also bad to put off
> standardizing APIs based on unfamiliarity with the solution.

What I meant was that framing isn't widely used yet, so I'm not sure it's
the right approach to take without looking at alternatives; the first that
comes to mind is JSONPath (an XPath analogue for JSON). Normalization is
different, as it is clearly the only choice for some use cases (digital
signatures being the most prominent one).
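The digital-signature case is easy to illustrate: two serializations of the same data hash differently unless they are canonicalized first. A toy sketch, approximating canonicalization with sorted-key JSON serialization (real RDF graph normalization is considerably more involved, e.g. it must also handle blank node labels):

```python
import hashlib
import json

def naive_hash(doc):
    # Hash the serialization as-is: key order changes the digest.
    return hashlib.sha256(json.dumps(doc).encode()).hexdigest()

def canonical_hash(doc):
    # Canonicalize first (sorted keys, fixed separators), then hash,
    # so equivalent documents produce the same digest.
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

a = {"name": "Alice", "age": 30}
b = {"age": 30, "name": "Alice"}  # same data, different key order
# naive_hash(a) != naive_hash(b), but
# canonical_hash(a) == canonical_hash(b)
```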


> To come at it from another direction, this has been the argument to
> /not/ standardize RDF normalization for over 7 years now and the result
> is... no standard way to normalize RDF. At some point we have to accept
> that the solution that we create is non-optimial and do our best to
> make
> sure that what we create is the best that we can do.

This is true for normalization but not for framing. There are dozens of
possible approaches, already widely used in practice, to achieve the same
result as framing.
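For illustration, much of what framing is used for - selecting nodes of a given shape out of a graph - can be expressed as a plain query. A hypothetical sketch (the node data and matcher are invented, not any spec's algorithm):

```python
# Hypothetical sketch: select nodes by @type, the kind of task framing
# performs, written as a plain filter -- the sort of thing a
# JSONPath-style query language could express just as well.
def match_type(nodes, rdf_type):
    return [n for n in nodes if n.get("@type") == rdf_type]

graph = [
    {"@id": "http://example.org/a", "@type": "Person", "name": "Alice"},
    {"@id": "http://example.org/b", "@type": "Event", "name": "Meetup"},
]
# match_type(graph, "Person") selects only the first node
```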


> > We don't even know yet how people end up using JSON-LD.
> 
> ... and we won't until it's at REC. :)

True. But that just applies to the syntax spec.


> This is the case with all Internet technologies... you don't truly
> understand how people use the technology until it is in broad adoption
> and to get broad adoption, you need to get to REC. HTML5 is a rare
> exception, but remember that it's the 5th iteration of the language...
> you have to say "we're done with this version" and hit REC at some point
> to find out how people use the technology you developed.

And yet, AFAIK, very few of them specified the API at the very
beginning...


> > So especially framing (or other query mechanisms) might be something
> > where we might need to spend much more time on.
> 
> I hope not... the algorithm is there, several implementations are
> available for people to test drive... what we really need is for people
> to familiarize themselves with the API and start writing a few
> applications to take it for a test drive. That will hopefully educate
> people enough such that they either start feeling comfortable with the
> API or provide technical feedback on how to change the API.

I promise I will try to "come up to speed" on this as soon as possible :-)


> >> The RDF Graph Normalization document is the document that needs
> >> the most work.
> >
> > The question here is if we completely separate it from JSON-LD or
> > not, see also ISSUE-53.
> 
> I would, personally, be very much opposed to this. We (the semantic web
> / linked data community) need to address this 7+ year old issue... we
> have a chance with JSON-LD, a proposal w/ multiple implementations has
> been put forward... the question should be: What are the technical
> issues?

This is a different issue, let's discuss that separately.


> > Why can't we do that immediately after the syntax spec is complete?
> 
> Hopefully, I answered this question above... Syntax spec is dependent
> on at least the API spec.

That doesn't have to be the case as I've argued above.


--
Markus Lanthaler
@markuslanthaler

Received on Thursday, 12 January 2012 15:06:11 UTC