Re: Reinventing Web applications

On Sun, Jun 22, 2014 at 11:26 PM, Markus Lanthaler
<markus.lanthaler@gmx.net> wrote:
> On 21 Jun 2014 at 15:36, Martynas Jusevičius wrote:

>> About how a Linked Data server should translate requests and responses
>> to and from a SPARQL server.
>> The exact mapping is application-defined in RDF form, so it can be
>> easily published alongside the main data.
>
> Speaking from a developer's perspective, you describe the behavior of simple controllers in RDF. Since your backend is assumed to be a quad store and you serve RDF, you also mostly eliminate the Model in MVC (and thus ORMs). Is that correct?

Correct. Or you could say the triplestore *is* the Model. In either
case, the object-oriented Model is fully eliminated, and that makes a
huge difference.
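
To illustrate what "the triplestore is the Model" means in practice
(the URI and vocabulary below are made up for the example): the triples
stored in the quad store are also the representation the server sends,
so there is no object class to map the data in and out of.

    @prefix schema: <http://schema.org/> .

    # Stored as triples in the quad store, served as the same triples --
    # no ORM entity sits in between.
    <http://example.org/products/42>
        a schema:Product ;
        schema:name "Example product" .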

>> First we want to specify the standard way triplestore-based Linked
>> Data servers work, as this goal is much more realistic in our view.
>
> How can you possibly standardize that if you don't limit yourself to very simple models such as CRUD applications? Creating a resource in a triple store and ordering a product are, IMO, two completely different things. The client acting on behalf of the user doesn't care at all about the former but cares a lot about the latter.

We do limit ourselves to simple container/item models and CRUD
operations; that uniform simplicity is exactly why HTTP and REST have
been successful.
What do you think about LDP? They are trying to do a similar thing,
but in the wrong way.

And how exactly is creating a resource different from ordering a
product? Don't our actions on Web services eventually translate to
state changes?

If an end-user clicks a button on a form, and that action creates an
order in RDF, is that simply creating a resource or ordering a product?
In my world it is both.
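
For illustration (the URIs are hypothetical, and I'm using schema.org
terms here only as an example), the representation the client creates
could be as simple as:

    @prefix schema: <http://schema.org/> .

    # To the processor: a new resource in the orders container.
    # To the application: a product being ordered.
    <http://example.org/orders/1>
        a schema:Order ;
        schema:customer <http://example.org/people/alice> ;
        schema:orderedItem <http://example.org/products/42> .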

>> To achieve better interoperability between servers and clients, the
>> software agents need to become smart, i.e. much better at semantics.
>> Because currently there is largely only a syntactic difference between
>> an OPTIONS result and Hydra's allowed Operations, and between
>> SupportedProperties and SPIN constraints. Regardless of the syntax, my
>> agent still would not *automatically* know how to interact with the
>> service based on these constraints.
>
> It depends on what exactly you mean by "automatically". Sure, it doesn't start to do things by itself. But I would find it very valuable if I could program my client on a higher level of abstraction (go and see if you find product X for a price less than Y, if you do and it is available, order it) and independently of the API I'm interacting with (now do the same with this and that API).

So far the smartest agents are humans. They interact with Web
applications as end-users via browsers, or as developers via APIs.
I know I can translate user input in the browser into an RDF state
change on the triplestore (Graphity is read-write). In the same way, a
software agent can act on a user's behalf to achieve the same goal, if
it follows the specification and understands the application
description.

>
> Pagination is an interesting example. I had a look at the spec at
>
>    https://github.com/Graphity/graphity-browser/wiki/Linked-Data-Processor-specification
>
> You define how to describe it to the server but don't specify at all how the data is being exposed. All you say is
>
>     Page description SHOULD include descriptions of the container
>     and previous/next pages, as required by HATEOAS [REST].
>
> Now, this is very vague and wouldn't be enough to build interoperable systems. Would this be something where Hydra could help?

The implementation is currently ahead of the specification. That's why
I wanted to bounce some ideas around first and then develop the spec
further.
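
To make the pagination point concrete, here is roughly the kind of page
description I have in mind; the SIOC and XHTML vocabulary terms below
are only an illustration, not something the spec mandates:

    @prefix sioc: <http://rdfs.org/sioc/ns#> .
    @prefix xhv: <http://www.w3.org/1999/xhtml/vocab#> .

    # A page links to its container and to the previous/next pages.
    <http://example.org/products/?page=2>
        sioc:has_container <http://example.org/products/> ;
        xhv:prev <http://example.org/products/?page=1> ;
        xhv:next <http://example.org/products/?page=3> .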

>>> But why does this need standardization then?
>>> Who needs to talk to the inner system of the server?
>>
>> It needs standardization to achieve interoperability between the
>> processor and the declarative applications and their descriptions.
>
> That's interesting. So your hope is that other people will build processors and that you can move application descriptions seamlessly between those processors, right?

In the long term, yes. In the short term, we want to be able to manage
our applications as data (plus some UI templates) instead of having to
write imperative code. That brings substantial time and cost savings.
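
To give a rough idea of what "applications as data" means (the app:
vocabulary below is completely made up, just to sketch the shape of
such a description), the processor would read something like this at
runtime instead of us writing controller code:

    @prefix app: <http://example.org/vocab#> .   # hypothetical vocabulary

    # Requests matching the URI template are read from / written to the
    # named graph, and rendered with the given UI template.
    <#products> a app:Container ;
        app:uriTemplate "/products/{id}" ;
        app:graph <http://example.org/graphs/products> ;
        app:stylesheet <templates/product.xsl> .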

> Yeah, makes sense but it also makes it more a software specification than a Web specification. Why did you decide to start the effort to standardize this now? Are other people implementing processors which do not fully interoperate with yours?

I'm not sure where the line lies between a software specification and a
Web specification. I'm thinking of XSLT, RDF, SPARQL, LDP -- they're
all simply W3C specs to me.

We want to standardize to show that Web applications can be designed in
a declarative way. We're not creating a new technology here; we're just
filling in the gaps between existing ones.

We want to share this approach openly and formalize it at the same
time. We don't know of any alternative processors yet, but we would
surely welcome them.


Martynas

Received on Monday, 23 June 2014 14:50:43 UTC