- From: Karol Szczepański <karol.szczepanski@gmail.com>
- Date: Sat, 18 Jun 2016 21:00:47 +0200
- To: "Dietrich Schulten" <ds@escalon.de>, "Tomasz Pluskiewicz" <tomasz@t-code.pl>
- Cc: "Hydra" <public-hydra@w3.org>
Hi all,

>>> I mean that there is little value in implementing the client so that
>>> it only ever processes JSON-LD. Internally too
>>
>> how does the client use rdf internally, as opposed to just using a
>> serialization of rdf? Does it *need* to read incoming responses into a
>> serialization-agnostic internal rdf model?
>
> As I wrote, I find it quite cumbersome to handle JSON-LD in some scenarios.
> And it is RDF already so why not work with triples to process the
> ApiDocumentation and representations?
> As a side effect support for RDF/XML, Turtle etc. comes out of the box.

Agree on that - pure JSON-LD, regardless of the context, is sometimes cumbersome.

Following the thread...

>> It might be sufficient to operate on the message
>> directly to find hydra information. E.g. there could be a pluggable
>> json-ld and turtle OperationFinder and a pluggable json-ld and turtle
>> LinkFinder - that should be sufficient for "browser engine" operations.
>
> Why separate OperationFinder/LinkFinder for JSON-LD and Turtle? The data
> model is the same, just serialized differently. That's why I prefer to
> normalize to in-memory RDF representation.

... and I agree on that as well. Hydra is expressed in RDF, thus we should treat it as such. Raw JSON[-LD] is not an answer to all evil in the world.

Personally I see these possibilities:

- transforming received payloads into an in-memory triples model - generic, but cumbersome when analysing what's actually inside
- transforming received payloads into an in-memory ORM-like model - some precomputation in the translation stage will make the model easier to use programmatically (i.e. flattened class hierarchies, resolved constraints, etc.)
- a hybrid of both - translate to in-memory triples that some RDF library can use to discover indirect statements, and then transform to a flattened ORM-like data model that can be easily used internally and by the client.

>> Actually working with the non-hydra data the client receives is out of
>> scope for the client, don't you think? The component which *uses* the
>> hydra client would read a client result into a triple store and reason
>> over the data if that is what it needs - not the hydra client.
>
> Ah, see that is where we differ it seems. For me the consumer should be
> RDF-agnostic by default. So despite the fact that the client library does
> some RDF processing, I'm returning plain JavaScript objects so that it's
> easier to handle with JavaScript (my personal requirement is declarative
> data binding with Polymer, which doesn't like URI keys in JSON for one).

I disagree with both of you in this case. While I'd love to see a hydra browser handle both RDF and old-school JSON payloads (which is what I'm trying to achieve with my URSA project), I wouldn't translate RDF payloads into plain JSON structures, as I believe it would go against the RESTful approach.

In my opinion, a hydra browser should be transparent to the client. The server sends an RDF payload, the hydra browser receives it, extracts the hypermedia controls and data contracts, and leaves the pure data untouched for the client (or with the hypermedia controls left in as well - to be discussed). The client may want to receive the original payload, or at least the RDF data cleaned of the meta-data, as the resource it is expected to modify and send back.
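Something along these lines is what I have in mind - a rough sketch only, assuming the response has already been expanded to plain JSON-LD, and with made-up names (this is not any existing client's API):

var HYDRA = 'http://www.w3.org/ns/hydra/core#';

// Splits a single expanded JSON-LD node into hypermedia controls and pure data,
// leaving the data values exactly as they were received.
function separateHypermediaControls(expandedNode) {
  var controls = {};
  var data = {};
  Object.keys(expandedNode).forEach(function (predicate) {
    if (predicate.indexOf(HYDRA) === 0) {
      controls[predicate] = expandedNode[predicate]; // e.g. hydra:operation
    } else {
      data[predicate] = expandedNode[predicate];
    }
  });
  return { controls: controls, data: data };
}

The point is only the separation of concerns: the controls feed the browser engine, the data goes to the client untouched.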
In general - if I am to make the idea popular among the corporate customers I work with, I need the advantage of being "compatible with the old ways". Most businesses don't want to hear about experimental approaches unless they are global buzz-words, which is not the case for Hydra.

> Requiring developers to be familiar with RDF etc. will not make Hydra
> popular. We should model it with RDF in mind, yes, but proper tools for
> both client and server side should IMO try to hide the RDF nature of Hydra.
> I think this is the only way for Hydra to gain wider adoption.

As I wrote - if the hypermedia controls are embedded in RDF as RDF statements, I don't see any point in hiding this fact from the client. Another matter is how to smuggle hypermedia controls into non-RDF payloads - I see a few possibilities:

- multipart/* content responses with one part carrying the data in any format and another part carrying the RDF hypermedia controls - not a very common way of communicating, but doable and compliant with HTTP in general
- hypermedia controls over headers - quite limiting, but it will work for a few cases (a rough sketch follows at the end of this message) - I try to utilize HTTP 206 Partial Content with a custom entities range for partial collection views
- some Frankenstein injections into the data payload - I don't see any reasonable way of implementing this across various formats

>> What I have in mind are small devices or SPA browser applications which
>> use the client for API access. Hence the desire to stay lightweight.
>> Requiring a javascript rdf library doesn't sound lightweight to me.
>
> This may be the only practical reason. On the other hand I don't want to be
> overly optimizing just yet. rdf-ext gets me where I need quickest. If it
> proves too big or too slow then will be the time to a better solution. For
> now I prefer a feature-complete solution at the cost of an external
> dependency.

Agree on that. Seeing "front-end" applications that carry around 8 MB of minified JavaScript code, with the application and all of its dependencies bundled, makes me rather indifferent to concerns about being "lightweight". A hydra browser can be lightweight, but I don't expect it to be used in an environment that can say the same about itself.
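As for the headers option mentioned above, here is a minimal sketch of a client that reads the Hydra apiDocumentation link relation from the Link header and leaves a plain JSON body untouched (the URL and the function name are made up, and the Link-header parsing is deliberately naive):

var API_DOC_REL = 'http://www.w3.org/ns/hydra/core#apiDocumentation';

function getWithControls(url) {
  return fetch(url).then(function (response) {
    var link = response.headers.get('Link') || '';
    // Naive parsing - only looks at the first link and its rel attribute.
    var match = link.match(/<([^>]+)>;\s*rel="([^"]+)"/);
    var apiDocumentationUrl = (match && match[2] === API_DOC_REL) ? match[1] : null;
    return response.json().then(function (body) {
      // The (possibly non-RDF) payload stays as-is; the control travels out of band.
      return { apiDocumentation: apiDocumentationUrl, data: body };
    });
  });
}

// getWithControls('http://example.com/api/people/1').then(console.log);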
Best,
Karol

Received on Saturday, 18 June 2016 18:58:44 UTC