
RE: Reinventing Web applications

From: Markus Lanthaler <markus.lanthaler@gmx.net>
Date: Sun, 22 Jun 2014 23:26:12 +0200
To: <public-hydra@w3.org>
Cc: <public-declarative-apps@w3.org>
Message-ID: <06d001cf8e60$982ca120$c885e360$@gmx.net>
On 21 Jun 2014 at 15:36, Martynas Jusevičius wrote:
> On Sat, Jun 21, 2014 at 2:41 PM, Ruben Verborgh wrote:
>> I'm afraid I don't fully understand yet.
>>> You are right, our work is mostly about server side.
>> "server side": does it mean
>> internal things that are not visible from the outside,
>> or (an) external interface(s) offered by a server?
> About how a Linked Data server should translate requests and responses
> to and from a SPARQL server.
> The exact mapping is application-defined in RDF form, so it can be
> easily published alongside the main data.

Speaking from a developer's perspective: you describe the behavior of simple controllers in RDF. Since your backend is assumed to be a quad store and you serve RDF, you also mostly eliminate the Model in MVC (and thus ORMs). Is that correct?
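To check my understanding, a declarative "controller" description of this kind might map a URI template and an HTTP method to a SPARQL template, which a generic processor then expands with bindings from the request URI. A minimal Python sketch — the description format, property names, and example URIs here are my own invention, not your actual vocabulary:

```python
from string import Template

# Hypothetical declarative description, as an application might publish
# it in RDF: a URI pattern mapped to a SPARQL query template.
description = {
    "uriTemplate": "/products/{id}",
    "method": "GET",
    "queryTemplate": "DESCRIBE <http://example.org/products/${id}>",
}

def handle(description, bindings):
    """Generic processor step: expand the SPARQL template with URI bindings."""
    return Template(description["queryTemplate"]).substitute(bindings)

query = handle(description, {"id": "42"})
print(query)  # DESCRIBE <http://example.org/products/42>
```

The point being: the processor itself stays generic, and only the published description changes per application.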

>>> We want to empower developers, so that they would be able to build
>>> full-featured Linked Data (and Web in general) applications by writing
>>> much less code, ideally none, and managing them as data instead.
>>> are not so much
>>> concerned about the so far hypothetical Web where every client can
>>> interact with every server.
>> But those goals are somehow related right?
>> Not having to write much code to interact with any server X,
>> seems close to having every client interact with every server.
> First we want to specify the standard way triplestore-based Linked
> Data servers work, as this goal is much more realistic in our view.

How can you possibly standardize that if you don't limit yourself to very simple models such as CRUD applications? Creating a resource in a triple store and ordering a product are, IMO, two completely different things. The client acting on behalf of the user doesn't care at all about the former but cares a lot about the latter.

> To achieve better interoperability between servers and clients, the
> software agents need to become smart, i.e. much better at semantics.
> Because currently there is largely only syntactic difference between
> OPTIONS result and Hydra's allowed Operations, and between
> SupportedProperties and SPIN constraints. Regardless of the syntax, my
> agent still would not *automatically* know how to interact with the
> service based on these constraints.

It depends on what exactly you mean by "automatically". Sure, it doesn't start to do things by itself. But I would find it very valuable if I could program my client on a higher level of abstraction (go and see if you find product X for a price less than Y, if you do and it is available, order it) and independently of the API I'm interacting with (now do the same with this and that API).
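To make that "higher level of abstraction" concrete, here is a rough Python sketch of what such a client program could look like. The API object and its methods are purely hypothetical stand-ins for whatever a hypermedia client would discover at runtime; nothing here is a real Hydra client API:

```python
def order_if_cheap_enough(api, product_name, max_price):
    """Hypothetical goal-driven client: find a product, check price and
    availability, and order it -- without hard-coding any API specifics."""
    product = api.find_product(product_name)  # discovered via hypermedia
    if product and product["price"] < max_price and product["available"]:
        return api.order(product)
    return None

# A stub standing in for any Hydra-described service; swapping in a
# different service should not require changing the client logic above.
class StubApi:
    def find_product(self, name):
        return {"name": name, "price": 8.0, "available": True}

    def order(self, product):
        return {"status": "ordered", "product": product["name"]}

result = order_if_cheap_enough(StubApi(), "X", 10.0)
print(result["status"])  # ordered
```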

>>> What we have is a generic read-write processor implementation [1], on
>>> which declarative application descriptions can be run in RDF form.
>> So if I understand this right, it is about server-side components
>> that have similar observable behavior on the other side of the HTTP connection,
>> but different inner workings? 
> Hmm.. yes and no :) It is about server-side components that have
> similar behavior such as support for pagination, accepting RDF input,
> ACL etc. The inner workings are not different as the base processor is
> finite and generic, but it is the per-application descriptions
> (containing URI templates, SPARQL templates etc.) that make those
> components respond differently.

Pagination is an interesting example. I had a look at the spec at


You define how to describe it to the server but don't specify at all how the data is being exposed. All you say is

    Page description SHOULD include descriptions of the container
    and previous/next pages, as required by HATEOS [REST].

Now, this is very vague and wouldn't be enough to build interoperable systems. Would this be something where Hydra could help?
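This is the kind of thing Hydra's paging vocabulary pins down: the previous/next links are exposed as data a generic client can follow. A sketch using plain dicts in place of JSON-LD — the link-following logic is my own illustration, and the property names are simplified, not Hydra's exact terms:

```python
# Pages represented as parsed JSON-LD-ish dicts with explicit next links.
pages = {
    "/items?page=1": {"member": [1, 2], "nextPage": "/items?page=2"},
    "/items?page=2": {"member": [3, 4], "nextPage": None},
}

def collect_all(start):
    """Follow nextPage links until exhausted -- what any interoperable
    client could do if the page description were standardized."""
    items, url = [], start
    while url is not None:
        page = pages[url]
        items.extend(page["member"])
        url = page["nextPage"]
    return items

print(collect_all("/items?page=1"))  # [1, 2, 3, 4]
```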

>> But why does this need standardization then?
>> Who needs to talk to the inner system of the server?
> It needs standardization to achieve interoperability between the
> processor and the declarative applications and their descriptions.

That's interesting. So your hope is that other people will build processors and that you can move application descriptions seamlessly between those processors, right?

> Try to look at it as an analogy to XSLT processing. The processor can
> be written in different languages, for different platforms and
> implemented in different ways, but the standard ensures that
> processing data on any of the processors would give you the same
> result. In this case, running the same declarative description, your
> Linked Data server would respond in the same way, regardless of the
> processor. Does that make more sense?

Yeah, that makes sense, but it also makes this more of a software specification than a Web specification. Why did you decide to start the effort to standardize this now? Are other people implementing processors which do not fully interoperate with yours?
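The XSLT analogy can be restated in code: two independently implemented processors interpreting the same declarative description must produce the same observable result. A toy illustration — the description format is invented purely for the example:

```python
# A trivially simple "declarative description".
description = {"greet": "Hello, ${name}!"}

def processor_a(desc, name):
    # Implementation strategy A: direct string replacement.
    return desc["greet"].replace("${name}", name)

def processor_b(desc, name):
    # Implementation strategy B: split and rejoin -- different inner
    # workings, same observable behavior.
    prefix, suffix = desc["greet"].split("${name}")
    return prefix + name + suffix

# The standard's job is to guarantee this equality for any description.
assert processor_a(description, "Ruben") == processor_b(description, "Ruben")
```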

> Does that answer some of your questions? :)

Yes, thanks for helping me understand what you are doing, Martynas.

Markus Lanthaler
Received on Sunday, 22 June 2014 21:26:43 UTC
