Re: Reinventing Web applications

On Mon, Jun 23, 2014 at 5:47 AM, Ruben Verborgh <ruben.verborgh@ugent.be> wrote:
> Hi Martynas,
>
>
> So but… it's internal stuff then, i.e., not observable for the client.
> Why would this need to be published?

Why not observable for the client? Clients can use the application
description to know which resources are containers/pages, what types
containers accept, what the data quality constraints for incoming data
are, etc. They need to follow the specification as well.
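
Just to illustrate (the terms below are made up for this example, not
our actual vocabulary), such a description could look roughly like this
in Turtle:

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <http://example.org/vocab#> .    # hypothetical vocabulary

    <http://example.org/people/> a ex:Container ;
        ex:acceptsType foaf:Person ;               # types the container accepts
        ex:requiredProperty foaf:name, foaf:mbox . # constraints on incoming data

A client reading this knows it can POST foaf:Person descriptions to
that container, and that submissions missing foaf:name or foaf:mbox
will be rejected.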

>
>> First we want to specify the standard way triplestore-based Linked
>> Data servers work, as this goal is much more realistic in our view.
>
> “triplestore-based" gives me the same feeling again;
> this seems like something that is private to the server,
> so I don't fully understand why this needs to be specified
> rather than to be turned into a software framework.

Again, consider the XSLT analogy. Both the vocabulary and the processor
rules can be standardized, whether or not clients make use of them.

We don't want to call Graphity a "framework", as that term suggests it
has to be extended to build an application. We, on the other hand, want
to make it as finite as possible (the imperative Java part, that is)
and move the logic to data instead.
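
To give a flavour of what "moving logic to data" means (again, the
terms are hypothetical, not the actual Graphity vocabulary): instead of
writing a Java controller per resource type, you describe in RDF how a
class of resources should be served, and the generic processor
interprets that description:

    @prefix ex: <http://example.org/vocab#> .   # hypothetical vocabulary

    ex:PersonTemplate a ex:Template ;
        ex:uriPattern "/people/{id}" ;          # which request URIs it matches
        ex:queryText  "DESCRIBE ?this" .        # how the response is built

The imperative Java part stays fixed; only the descriptions change from
application to application.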

>
>> Regardless of the syntax, my
>> agent still would not *automatically* know how to interact with the
>> service based on these constraints.
>
> Does declarative-apps do things for clients as well, or only servers?

The same Graphity instance can be a server and a client. In a way, it's a proxy.

If you feed the URI of a Graphity server into Graphity working as a
client, it could retrieve the description and discover containers,
check what RDF types they accept, ask for permission if access to some
resource is forbidden, etc. It could even discover and download
frontend XSLT stylesheets and render the remote data on the client side.

That's the idea at least :) So far the client capabilities are limited
to generic Linked Data browsing. The prerequisite is that both the
server and the client follow the same specification.

>> Object-relational mapping is also a popular component despite the
>> obvious impedance mismatch [1].
>
> Sure, but it's not the only way to deal with relational databases,
> and again, not observable.

One of our goals is to advance the software design of Web
applications. That is orthogonal to what is observable by the client.

>
>> RDF is a much more natural data model for the Web, given its built-in
>> URIs, effortless merge operation, genericness etc.
>
> Does the fact that a server *internally* works with RDF
> make an observable difference for clients?

These properties are not internal; they are unique to the RDF data
model and therefore global. We assume both servers and clients are
based on RDF, either natively or indirectly.

>
>> Hmm.. yes and no :) It is about server-side components that have
>> similar behavior such as support for pagination, accepting RDF input,
>> ACL etc.
>
> Aha, so… you're defining how clients can send RDF for input
> and interpret it for ACL?
> Are we talking about specific document types for input and ACL?

Yes, clients know they can POST RDF to containers, for example. But
the request will likely be rejected if they don't check which
properties the constraints require. That is why it is important for
servers and clients to follow the same spec and interpret the
description in the same way.
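
For example (the container URI and the data are made up), a client
could submit something like:

    # POST /people/ HTTP/1.1
    # Host: example.org
    # Content-Type: text/turtle

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    <#this> a foaf:Person ;
        foaf:name "Alice" ;
        foaf:mbox <mailto:alice@example.org> .

If the container's constraints require, say, foaf:name, a submission
without it would be rejected, which is exactly why the client should
read the description first.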

Any RDF serialization should be acceptable; we're not defining any new
media types. We are using the W3C vocabulary for ACL [1].
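
For instance, an authorization in that vocabulary looks roughly like
this (the URIs are made up):

    @prefix acl: <http://www.w3.org/ns/auth/acl#> .

    [] a acl:Authorization ;
        acl:agent    <http://example.org/people/alice#this> ; # who gets access
        acl:accessTo <http://example.org/people/> ;           # to which resource
        acl:mode     acl:Read, acl:Write .                    # read and write

Because the vocabulary is shared, a client that is denied access can at
least interpret the ACL description in a uniform way and ask for the
missing permission.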

> Ah okay. So you have some configuration file(s) for a server,
> this is what you standardize, and then implementations bring this server to life?

Yes.


Martynas

[1] http://www.w3.org/wiki/WebAccessControl

Received on Monday, 23 June 2014 15:42:24 UTC