Re: Link relation type to link to discover LDP

Hi Greg,

Thanks for taking the time to explain, I appreciate it!
Let's go straight to what I understand to be your main point:
the fact that responses from the TPF API are not spec'ed
to explicitly identify themselves as being such.

First of all, I agree with this observation:
the TPF API indeed does not explicitly identify itself as such.
So the question seems to be:
should or shouldn't the TPF API identify itself, and if so, how?

To start with the "how" part: do you have suggestions?
For instance, I could imagine something like:
    <> a ex:TriplePatternFragment.
But such a triple does not say anything about the entire API
(and I don't know how to refer to “the entire API” in RDF,
 because this API consists of all of its resources).
Perhaps a concrete suggestion could make it easier to argue.
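To spell out why that one triple falls short, here is a small sketch
(ex: is a placeholder namespace):
    @prefix ex: <http://example.org/vocab#> .   # placeholder namespace

    # <> is the response we are currently looking at, so this triple
    # only types that single fragment…
    <> a ex:TriplePatternFragment .
    # …and there is no obvious IRI that denotes the API as a whole,
    # i.e. the set of all fragments the server can produce.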

Then, does the TPF API need to identify itself?
For me, simply saying "this is a TPF API" is what I'd like to avoid,
since, as I advocated previously [1], a TPF API is not the final answer.
I.e., it is not the only API to Linked Data that we will ever use.
Instead, the idea behind Linked Data Fragments is to define API features.
For example:
– this API offers triple-pattern access to the dataset
– this API offers dataset summaries
– this API offers full-text search
– this API offers sorting
– …
That way, publishers can decide how far they want to go,
and it is up to the client to dynamically discover this.
Can we have identifiers for all of the combinations of features? Likely not.
Can we have identifiers for the individual features? Maybe, yes.

But these identifiers kind of defeat the purpose of hypermedia.
I.e., identifiers send the message “this is what the thing is”.
But then you, as a client, need to understand how to deal with the thing.
The idea of hypermedia is to do this in-band:
you explicitly say what the interface supports.
So in the case of TPF, we say "you can search by triple pattern".
In the case of full-text, we would say "you can search literals by substring".
In the case of sorting, we would say "you can sort the dataset in these ways".
Compare this to saying:
"I am a TPF interface." – "I am a full-text interface" – "I am a sort interface".

So instead of identifying—and requiring clients to understand those identifiers—
I want to tell clients what they can do (and how) in the response.
Therefore, I would personally rather focus on perfecting the description mechanism,
rather than putting an identifier in the response.
While an identifier helps to identify an interface, the goal of clients seems to be something else:
finding out what they can do with the interface, and how—regardless of its type.

> But you still think that using a custom variable representation should be the one and only way that a client should identify the sort of API that is being described? Do you believe that no two APIs should ever share the same set of parameters and variable representation rules?

Certainly not.
It is my intention that the API communicates in RDF:
“you can search the dataset by triple pattern”.

That way, if another API does something different with the same input,
clients will recognize and interpret that.
For instance, another API might say
"you can generate a JPEG with this triple pattern” (just saying).

Do these APIs share the same representation and parameters? Yes.
Are their descriptions the same? No, because the second API cannot state
that it will search a dataset by triple pattern.

So compare hypermedia controls
– “you can search the dataset by triple pattern”
– "you can generate a JPEG with this triple pattern”
to identifiers
– "I am a TPF API"
– "I am a TP-JPEG API"
Both allow clients to distinguish between the two, but I would argue that the former say much more.
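A hedged sketch of that contrast in Turtle (ex: is a made-up vocabulary,
and the URLs are placeholders):
    @prefix hydra: <http://www.w3.org/ns/hydra/core#> .
    @prefix ex:    <http://example.org/vocab#> .   # made-up vocabulary

    # Both controls accept exactly the same parameters…
    <#tpSearch> hydra:template "http://example.org/data{?subject,predicate,object}" .
    <#tpJpeg>   hydra:template "http://example.org/jpeg{?subject,predicate,object}" .

    # …but the statements attached to them differ:
    <http://example.org/data> hydra:search    <#tpSearch> .  # “search me by triple pattern”
    <http://example.org/jpeg> ex:rendersImage <#tpJpeg> .    # hypothetical: “render a JPEG from a triple pattern”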

> Because this is exactly the sort of case that causes duck typing to fail.

But duck typing should not be used to find out what type something has.
I.e., “it quacks like a duck, so it must be a duck” is a fallacy,
but “this thing supports quacking, and I just want to quack” works.

And this is what I believe intelligent clients should do. I imagine a dialog:
S: Here are my hypermedia controls.
C: Aha, I can search the dataset by triple pattern, that's what I wanted.
or
S: Here are my hypermedia controls.
C: Aha, I can generate a JPEG with the triple pattern, I don't want that.

Whether or not the first interface is TPF or something more advanced
is really orthogonal to the fact that the client wants to search by triple pattern.

> But "you can search by triple pattern” doesn’t imply everything that TPF *is*. Otherwise you wouldn’t need a 3500 word spec to define it.

The spec is for implementers, to ensure that clients of this interface don't need the spec :-)
(At least that is my aim.)

> I’m suggesting that clients will need to understand TPF anyway, if they intend to use the API successfully. 

It really depends on what type of client it is.
If it is a SPARQL evaluator like client.linkeddatafragments.org, sure.
If it is a generic Hydra client, less so.

My vision of future APIs is one without specs.
I.e., a client asks a server: what do you have?
The server says: “you can execute SPARQL queries
with up to 2 triple patterns and 1 filter”.
That seems so much more flexible and interesting than
“I implement the SPARQL-API-2TP-1F spec”.
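To be clear, a vocabulary to express such capabilities does not exist today
as far as I know; but it could look something like this
(entirely made-up terms, just to illustrate the idea):
    @prefix ex: <http://example.org/capabilities#> .   # made-up vocabulary

    <http://example.org/api> ex:supportsQueryLanguage ex:SPARQL ;
        ex:maxTriplePatterns 2 ;
        ex:maxFilters        1 .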

When there is only one interface, like TPF,
it's hard to argue this point. But as more features arrive
(and we are working on that), I think this changes.

> Unless, as I questioned above, you are suggesting that no other API should ever use the same parameters and encoding rules as TPF.

They totally might, as long as they don't state that
they filter the dataset by triple pattern (if that's not what they do).

> That all sounds good, but I’m afraid those are all tangential to my main point which we still seem to disagree about.

Alright, I hope this mail addresses your main point,
and that the things I wanted to convey make sense.
If not, please continue sending your feedback :-)

Thanks,

Ruben

[1] http://www.slideshare.net/RubenVerborgh/querying-datasets-on-the-web-with-high-availability/19
(I added this slide, by the way, because of a nice discussion with Greg and Sandro.)
