RE: JSON-LD Telecon Minutes for 2013-05-14 / RDF-ISSUE-128 and RDF-ISSUE-129

On Sunday, May 19, 2013 5:21 PM, Gregg Kellogg wrote:
> > We did that quite some time ago but decided to move the conversion
> > between native types and strings to the to/from-RDF algorithms
> > because that's where the complexity has to live. If you are staying
> > within JSON-LD you shouldn't have to worry (or even know) about that.
> 
> Yes, we did discuss it before, but I think that the RDF round-tripping
> loss, and general need for developers to be able to use native types,
> have come into perspective. By always doing a full value-object based
> transformation of typed literals, we eliminate the data loss issue, but
> make it less convenient for developers to just use the native types.

You don't eliminate it, you just move it somewhere else. 
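
To make that concrete, take a made-up value: a typed literal coming from
RDF keeps its exact lexical form as long as it stays a value object,

    { "@value": "1.10", "@type": "http://www.w3.org/2001/XMLSchema#double" }

but as soon as some algorithm (fromRdf today, expansion under the
proposal) turns it into the native number 1.1, serializing it back yields
the canonical form "1.1E0" rather than the original "1.10". The loss is
the same; only the place where it happens changes.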


> The problem is that, as a publisher, you really don't know what the
> intention of the consumer is, so imposing some behavior, through
> defaults in the core transformation algorithms, can lead to a bad
> experience. If it's really for the convenience of the developers, then
> doing it through algorithms run on the consumer side seems to really
> fit this target, and allows us to be more comprehensive.

Not sure I buy that argument. You seem to assume that consumers will always
expand/compact but never convert from or to RDF; I think that
oversimplifies things. I do assume (and hope) that users of JSON-LD will
use native types in 99.9% of the cases. Only for very special use cases
(money) might they use typed strings instead.
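
In the common case that just means plain JSON (made-up properties,
context omitted):

    { "name": "Markus", "age": 30, "verified": true, "score": 7.5 }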


> My thought was that the fromRdf algorithm would always use full value
> objects for typed literals (e.g., like _use native types_=false). An
> added flag to expansion, passed through from compaction, flatten and
> framing, would allow value objects with numeric or boolean XSD types,
> to be transformed to the native JSON representation, or vice versa. This
> means that there could be data loss, but this would happen only in the
> client application, where the application is in full control of the use
> of the data.

Why remove it from fromRdf? A "client application" could call that algorithm
just as well. 
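As I understand the proposal, a made-up literal like "2.5"^^xsd:double
would always leave fromRdf as a full value object,

    { "@value": "2.5", "@type": "http://www.w3.org/2001/XMLSchema#double" }

and only a later expansion with the flag set would turn it into the native
number 2.5. An application calling fromRdf directly faces exactly the same
conversion decision, so deferring it to expansion doesn't make it go away.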


> More specifically, in the Expansion Algorithm after step 8.4:
> 
> [[[
> If the _use native types_ flag is true, the value of result's @value
> member is a string, and the value of result's @type string is
> xsd:boolean, xsd:float, xsd:decimal, xsd:double or a type derived from
> any of these datatypes (as defined in section 3 of [XSD]), transform
> the string into a native representation (using language currently in
> section 10.6).
> 
> Otherwise, if the _use native types_ flag is false and the value of
> result's @value member is a native value, transform it to a string using
> the canonical representation for the datatype in result's @type member,
> defaulting to xsd:boolean or xsd:double, depending on whether the value
> is a boolean or a number.
> ]]]
> 
> At one time, all this was encapsulated in the Value Expansion
> Algorithm, but now it doesn't seem to be. Does this step capture all
> the places where a value might need to be transformed? If not, then we
> should consider re-writing Value Expansion and using that in the
> different locations.

It does, except if your value is already expanded, i.e., if it is already
using @value.
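
For example, a document may already contain the value object form directly
(made-up property):

    { "http://example.com/weight":
      { "@value": "2.5", "@type": "http://www.w3.org/2001/XMLSchema#double" } }

Such a value arrives pre-expanded and would bypass the proposed step, so a
re-written Value Expansion would have to cover this case too.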


> Alternatively, we could consider doing this transformation as part of
> the Compaction or Value Compaction Algorithms, but Expansion seems
> better to me.

That's exactly the problem I see. If we move this kind of transformation
to compaction/expansion, you will probably lose even more information.
Currently, I can mix native types and typed strings in JSON-LD, e.g., I
could use native booleans but typed-string doubles. I don't want that to
be messed up when expanding/compacting because I'm staying within JSON-LD.
Since RDF does not have any native types, I obviously have to accept that
everything will become a typed literal when converting to RDF and that I
may not be able to round-trip it cleanly in such cases.
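
A sketch of such a mixed document (made-up properties):

    {
      "@context": {
        "active": "http://example.com/active",
        "weight": "http://example.com/weight"
      },
      "active": true,
      "weight": { "@value": "2.5000",
                  "@type": "http://www.w3.org/2001/XMLSchema#double" }
    }

If expansion or compaction converted between the two forms by default, the
deliberate distinction between the native boolean and the typed-string
double would be silently erased.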

I just don't think it makes sense to move this complexity away from the RDF
conversion algorithms. The "problem" lies there and should thus be handled
there. JSON itself has no range or precision restrictions at all; if this
turns out to be a problem in the future, parsers will handle it more
smartly. It just happens that JSON works just fine for 99% of the use
cases, where such problems never arise. We already have an elegant solution
for the other cases as well: you express your data as a string and add a
type to it.
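
For instance, an exact monetary amount (made-up property):

    { "price": { "@value": "19.99",
                 "@type": "http://www.w3.org/2001/XMLSchema#decimal" } }

The string form survives any number of processing steps unchanged, which
is exactly what you want for such values.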

As it currently stands, I would -1 the proposal to move the transformation
to expansion/compaction.


> >> Sandro Hawke:  we may need to refer to a different spec
> >>   regarding futures - the DOM WHATWG one might change.
> >> Sandro Hawke:  hard requirement is to refer to stable things. it
> >>   is hard to argue that the "living spec" is stable
> >>   ... not saying we change the reference, but change how we use
> >>   the reference
> >> Manu Sporny:  We don't actually reference the Futures spec
> >>   directly. We only use the Future concept in our spec, not the API
> >>   itself.
> >> Sandro Hawke:  if they change Futures, then every piece of
> >>   software using futures would be broken and have to change
> >> Manu Sporny:  Being pedantic, but the spec wouldn't change, just
> >>   the implementation.
> >> Sandro Hawke:  The director probably won't be okay with that. You
> >>   shouldn't build on specs that are not stable
> >>   ... We have to hard-code it with the current view of futures
> >>   so that if it changes, we use the old version of futures
> >
> > Hmm... that kind of surprises me. JSON itself, e.g., is not an IETF
> > standard but just an informational note. HTML5 is referencing a large
> > number of living standards:
> >
> > http://www.w3.org/html/wg/drafts/html/master/iana.html#references
> >
> > In this case I think it makes no sense to hardcode the reference to a
> > specific version because, as Manu says, we just use the concept of a
> > Future. The JSON-LD API should be based on what browser vendors
> > implement - and that will be the WHATWG living standard.
> 
> The difference between DOM and JSON is that JSON is quite stable,
> whereas who knows how much change might come to DOM?

Fair enough, but does that really affect us?

Promises/Futures have been known since the 1970s, so the concept is
definitely not new. Our dependency is so loose that only a name change
("Future" to something else, e.g., "Promise") would affect us. It is true
that implementations may have to be updated if the DOM spec changes, but
the idea behind that spec is to describe what browser vendors implement,
so it is actually the other way round.



--
Markus Lanthaler
@markuslanthaler
