JSON-LD 1.1 Design Principles - brainstorm

Dear all,

As part of our work over the next couple of years, we will have to
determine whether some feature should be added to the specifications or
not.  The earlier that we can come to some degree of consensus on how to
make those subjective calls, the easier our discussions will be.  I have no
illusions that we will be able to come to a deterministic process, but at
least we can discuss some guiding principles to apply.

It would be great if we could try to brainstorm on the list some principles
from our own experiences of using and implementing JSON-LD (or similar
technologies), and then discuss them on the call on Friday.

Not as chair, just to start the ball rolling, some of the patterns that I
have found very useful in other efforts:


* We follow our overall mission of making production and consumption of
linked data as easy as possible for the widest variety of web developers,
with or without any experience of the underlying graph models.

What it says on the tin :)

* Require real use cases, with actual instance data.

If decisions are tied to real-world use cases, with actual data that you
can point at, that keeps entirely theoretical, never-useful-in-practice
features at bay.

* Require two organizations [not necessarily WG participants, or even W3C
members] to have the use case.

Just because one organization has a use case doesn't make it good for
interoperability.  A minimum of two organizations should support every new
feature, though that support doesn't have to come via membership in the WG.

* "As simple as possible, but no simpler."

A simpler solution is better than a more complicated one that achieves the
same ends.

* Consistency is simpler than exceptions.

20 inconsistent but individually easy-to-understand solutions are worse
than 2 more complicated but all-encompassing solutions.

* Given the option, optimize for data producers and consumers before
library implementers.

Or: simplicity / usability is determined by the audience (data producers
and consumers), not by the specification text. There will be MANY more data
producers than consumers, and MANY more consumers than library
implementers. Thus we should make it as easy as possible to create and
consume data, even at the expense of a more complicated specification.

* Provide on-ramps.

A solution that can be implemented in incremental stages is better than a
solution that is all or nothing, as not everyone needs every feature but
many people need various parts.

* Define success, not failure.

We should define things in terms of what it means to be conformant, rather
than what is not conformant.  The fewer constraints we require, the easier
it is to make non-breaking changes in the future, and the easier it is to
experiment.

* The underlying data model is RDF.

If a feature comes up that can't be modeled with RDF as the underlying
abstract data model, then we refer the feature to a future RDF WG for
potential inclusion at that time. Similarly, we should ensure that the
features of RDF are expressed in JSON-LD, to ensure that data can be
round-tripped with confidence through different serializations.
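To make the round-tripping idea concrete, here is a toy sketch (my
illustration, not anything from the specs) of how a JSON-LD context maps
short terms to full IRIs so a document can be read as RDF triples. The
document, IRIs, and names are invented examples, and the dictionary
comprehension stands in for the real, much more involved JSON-LD expansion
algorithm:

```python
# A minimal JSON-LD document (hypothetical example data).
doc = {
    "@context": {"name": "http://schema.org/name"},
    "@id": "http://example.org/alice",
    "name": "Alice",
}

# Toy "expansion": replace context terms with their full IRIs.
# (This ignores almost all of the real JSON-LD expansion algorithm;
# it only illustrates that terms are shorthand for IRIs.)
ctx = doc["@context"]
expanded = {ctx.get(k, k): v for k, v in doc.items() if k != "@context"}

print(expanded)
# From the expanded form, the RDF abstract model sees one triple:
# <http://example.org/alice> <http://schema.org/name> "Alice" .
```

If every JSON-LD feature can be flattened to triples like this, and every
triple can be expressed back in JSON-LD, the round trip through other
serializations is lossless.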

* Follow existing standards and best practices, where possible and where
they do not conflict with other principles.

Between invention and reuse, pick reuse... unless that reuse would
demonstrably harm adoption by being more complicated than necessary.


Thoughts? Further ideas for principles to discuss?  How did the 1.0 WG make
scoping decisions, and was that process seen as effective and fair?


Rob

-- 
Rob Sanderson
Semantic Architect
The Getty Trust
Los Angeles, CA 90049

Received on Tuesday, 3 July 2018 22:08:42 UTC