Re: Reinventing Web applications

Hi,

> Why not observable for the client?

The original context was:

>>> About how a Linked Data server should translate requests and responses
>>> to and from a SPARQL server.
>> 
>> So but… it's internal stuff then, i.e., not observable for the client.

So the server knows how to translate from a SPARQL backend. Good.
But why would a client need to know this?
For a client, it shouldn't matter whether the data comes from a SPARQL endpoint,
a relational database, a graph database, or some other thing.

>  They need to follow the specification as well.

Could you perhaps give an example of how a server and client follow the spec?
It seems restrictive to me that clients should also follow an application-level spec;
I can see why clients follow protocol specs such as HTTP.

> We don't want to call Graphity a "framework", as this term suggests it
> has to be extended to build an application.

What would you call it, then?

> If you feed a URI of Graphity server into Graphity working as a
> client, it could retrieve the description and discover containers,

Essentially, discovering the structure of the server's resources.
How does it differ from Hydra in that aspect?
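
For comparison, a generic Hydra client can already discover
collection members from a description like this
(just a sketch, with made-up URIs):

    @prefix hydra: <http://www.w3.org/ns/hydra/core#> .

    </people/> a hydra:Collection ;
        hydra:member </people/alice>, </people/bob> .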

> ask for permission if access is forbidden to some resource, etc.

Ah, this is something that's not in Hydra Core.

> So far the client capabilities are limited
> to generic Linked Data browsing. The prerequisite is that both the
> server and the client follow the same specification.

Mmm, perhaps we just have different definitions of "follow".
If it means the client can interpret the vocabulary, yeah.

>>> Object-relational mapping is also a popular component despite the
>>> obvious impedance mismatch [1].
>> 
>> Sure, but it's not the only way to deal with relational databases,
>> and again, not observable.
> 
> One of our goals is to advance the software design of Web
> applications. That is orthogonal to the observability by the client.

I didn't mean observability as a goal.

What I meant is:
clients on the current Web don't observe (and they shouldn't)
whether the server they access has been implemented
with object-relational mapping, static HTML, or something magic.
They just see resources.

So why should it matter to a client if a server is implemented differently?
They shouldn't care; that's the whole point of the Web's uniform interface.

How a server is implemented is the server's business,
so there's no need to spec it: these implementation details are local to a system.
Example: HTTP is spec'ed, so clients and servers need to know it.
PHP+MySQL is not spec'ed, because clients couldn't care less.
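
To make it concrete, all a client ever observes is
something like this (a sketch; the host and path are made up):

    GET /people/alice HTTP/1.1
    Host: example.org
    Accept: text/turtle

    HTTP/1.1 200 OK
    Content-Type: text/turtle

    ...some RDF representation...

Nothing in that exchange reveals whether a SPARQL endpoint,
an ORM over MySQL, or a folder of static files produced it.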

> These properties are not internal, they are unique to the RDF data
> model and therefore global. We assume both servers and clients are
> based on RDF, either natively or indirectly.

"based on RDF" just means "serve RDF representations",
or if not, what does it mean?

> Yes, clients know they can POST RDF to containers, for example. But
> the request will likely be rejected, if they don't check what
> properties are required in the constraints.

Don't we have media types / profiles for that?
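
For instance, a response could point to a profile
(a sketch; the profile URI is hypothetical, rel="profile" is RFC 6906):

    Content-Type: text/turtle
    Link: <http://example.org/profiles/person-container>; rel="profile"

A client that recognizes the profile would know which properties
a POSTed resource is expected to have.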

> That is why it is
> important for servers and clients to follow the same spec and
> interpret the description in the same way.

Idem.

> Any RDF serialization should be acceptable, we're not defining any new
> media types. We are using the W3C vocabulary for ACL [1].

Profiles then.

Ruben
