Re: Reinventing Web applications

On Sat, Jun 21, 2014 at 2:41 PM, Ruben Verborgh <ruben.verborgh@ugent.be> wrote:
> Hi Martynas,
>
> I'm afraid I don't fully understand yet.
>
>> You are right, our work is mostly about server side.
>
> "server side": does it mean
> internal things that are not visible from the outside,
> or (an) external interface(s) offered by a server?

It's about how a Linked Data server should translate requests and
responses to and from a SPARQL server.
The exact mapping is application-defined in RDF form, so it can easily
be published alongside the main data.
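
To make that concrete, here is a rough sketch in Python (purely
illustrative, not our actual implementation; the endpoint URL, resource
URI and query are made-up examples) of the kind of translation I mean,
a GET request becoming a SPARQL DESCRIBE against the backing store:

# Sketch of the request -> SPARQL -> response translation.
# Endpoint URL, resource URI and query template are hypothetical.
from SPARQLWrapper import SPARQLWrapper

ENDPOINT = "http://localhost:3030/ds/sparql"  # assumed backing SPARQL server

def handle_get(resource_uri):
    """Translate an HTTP GET for resource_uri into a SPARQL DESCRIBE."""
    sparql = SPARQLWrapper(ENDPOINT)
    # In the real setup the query template comes from the application
    # description in RDF, not from a hard-coded string.
    sparql.setQuery("DESCRIBE <%s>" % resource_uri)
    graph = sparql.query().convert()          # rdflib Graph for DESCRIBE
    return graph.serialize(format="turtle")   # body returned to the client

# e.g. handle_get("http://example.org/people/alice")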

>> We want to empower developers, so that they would be able to build
>> full-featured Linked Data (and Web in general) applications by writing
>> much less code, ideally none, and managing them as data instead.
>>
>> We are not so much
>> concerned about the so far hypothetical Web where every client can
>> interact with every server.
>
> But those goals are somehow related, right?
> Not having to write much code to interact with any server X
> seems close to having every client interact with every server.
>

First, we want to specify a standard way for triplestore-based Linked
Data servers to work, as this goal is much more realistic in our view.

To achieve better interoperability between servers and clients, the
software agents need to become smarter, i.e. much better at semantics.
Currently there is largely only a syntactic difference between an
OPTIONS result and Hydra's allowed Operations, and between
SupportedProperties and SPIN constraints. Regardless of the syntax, my
agent still would not *automatically* know how to interact with the
service based on these constraints.
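
To illustrate with a rough sketch (the service URL and the Hydra
description are made-up examples): a client can obtain essentially the
same information, which methods are allowed, either from a plain
OPTIONS response or from a Hydra description, but in neither case does
it learn what the operation actually means:

# Same "what can I do here?" information, in two syntaxes.
# The service URL and the Hydra description are made-up examples.
import requests
from rdflib import Graph, Namespace

HYDRA = Namespace("http://www.w3.org/ns/hydra/core#")

# 1) Plain HTTP: the Allow header of an OPTIONS response.
resp = requests.options("http://example.org/people/alice")
allowed_http = resp.headers.get("Allow", "")   # e.g. "GET, PUT, DELETE"

# 2) Hydra: supported operations carry the same method names as RDF.
description = """
@prefix hydra: <http://www.w3.org/ns/hydra/core#> .
@prefix ex: <http://example.org/ns#> .
ex:Person a hydra:Class ;
    hydra:supportedOperation [ a hydra:Operation ; hydra:method "GET" ],
                             [ a hydra:Operation ; hydra:method "PUT" ] .
"""
g = Graph().parse(data=description, format="turtle")
allowed_hydra = {str(m) for m in g.objects(None, HYDRA.method)}

# Either way the agent knows *that* PUT is allowed,
# but not *what* PUTting a representation would actually do.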

>> It boils down to replacing obsolete Web architecture components such as RDBMSs
>
> In what sense are RDBMSs Web architecture components?
> In what sense are they obsolete?
>

In the sense that relational DBs are still used by the majority of Web
applications and frameworks despite being defined decades before the
Web. Object-relational mapping is also a popular component, despite the
obvious impedance mismatch [1].
RDF is a much more natural data model for the Web, given its built-in
URIs, effortless merge operation, genericity, etc.
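
A tiny sketch of the merge point (made-up data, using rdflib): two
graphs produced independently combine into one simply because they talk
about the same URIs, with no schema alignment or join-key mapping step:

# RDF's "effortless merge": independently produced data about the same
# URI combines by just adding triples together. Made-up example data.
from rdflib import Graph

source_a = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/people/alice> foaf:name "Alice" .
"""

source_b = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/people/alice> foaf:knows <http://example.org/people/bob> .
"""

merged = Graph()
merged.parse(data=source_a, format="turtle")
merged.parse(data=source_b, format="turtle")   # the merge is just "add more triples"

for s, p, o in merged:
    print(s, p, o)   # both statements now describe the same resource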

>> What we have is a generic read-write processor implementation [1], on
>> which declarative application descriptions can be run in RDF form.
>
> So if I understand this right, it is about server-side components
> that have similar observable behavior on the other side of the HTTP connection,
> but different inner workings?
>

Hmm... yes and no :) It is about server-side components that have
similar behavior, such as support for pagination, accepting RDF input,
ACL, etc. The inner workings are not different, as the base processor
is finite and generic; it is the per-application descriptions
(containing URI templates, SPARQL templates, etc.) that make those
components respond differently.
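
Here is a rough sketch of what I mean (in Python, with plain dicts
standing in for the RDF descriptions; the URI templates and queries are
made-up): one generic, application-agnostic processor, where the
per-application description alone decides how it responds:

# One generic processor, different declarative descriptions.
# In our system the descriptions are RDF (URI templates, SPARQL templates,
# pagination settings, ...); plain dicts are used here for brevity.
import re

def process(description, request_path):
    """Generic, application-agnostic request handling."""
    for pattern, query_template in description["templates"].items():
        match = re.fullmatch(pattern, request_path)
        if match:
            # Fill the SPARQL template from the matched URI variables.
            return query_template.format(**match.groupdict())
    return None   # would be a 404 in a real server

# Two different applications, the same processor code:
people_app = {"templates": {
    r"/people/(?P<id>\w+)": "DESCRIBE <http://example.org/people/{id}>"}}
books_app = {"templates": {
    r"/books/(?P<isbn>[0-9X-]+)": "DESCRIBE <http://example.org/books/{isbn}>"}}

print(process(people_app, "/people/alice"))       # people query
print(process(books_app, "/books/0-123-45678-9")) # books query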

> But why does this need standardization then?
> Who needs to talk to the inner system of the server?

It needs standardization to achieve interoperability between the
processor and the declarative applications and their descriptions.

Try to look at it as an analogy to XSLT processing. The processor can
be written in different languages, for different platforms, and in
different ways, but the standard ensures that processing data on any
of the processors gives you the same result. In this case, running
the same declarative description, your Linked Data server would
respond in the same way regardless of the processor. Does that make
more sense?
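
For comparison, here is what the XSLT side of the analogy looks like in
code (a trivial made-up stylesheet and document, run here through lxml,
but xsltproc, Saxon or any other conformant processor would produce the
same output):

# The XSLT analogy: the same declarative stylesheet yields the same
# result on any conformant processor. Trivial example stylesheet/input.
from lxml import etree

stylesheet = etree.XSLT(etree.fromstring("""
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/greeting">
    <html><body><h1><xsl:value-of select="."/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>"""))

document = etree.fromstring("<greeting>Hello, world</greeting>")
print(str(stylesheet(document)))   # identical output, whichever processor runs it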


[1] http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch

Received on Saturday, 21 June 2014 13:36:50 UTC