Re: Comments on the Triple Patterns Fragments draft

On Monday 21. July 2014 15.15.32 Ruben Verborgh wrote:
> Hi Kjetil,
> 
> Thanks for your comments!

You're welcome! :-)

> Referring to all 6 of them seems rather cumbersome
> when I just want to refer to HTTP 1.1 collectively.
> 
> Any thoughts on that?

Yeah, I agree, cumbersome... Perhaps just ask the httpbis group?

> i.e., explicitly specifying that Section 4 is for HTTP,
> which allows us to specify other interfaces later on.

Right, that could work. 

So, actually, what I had in mind is that you could implement a triple store 
on top of TPFs... Just point it at a URI, and the store API 
(RDF::Trine::Store, Jena, Sesame SAIL, etc.) could figure out whether it can 
use it and then configure itself.
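
Something like this is what I imagine happening (a minimal sketch in plain 
Java; the class name and the Hydra string check are just illustrative, a 
real store would parse the RDF and find the search form properly):

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  public class TpfProbe {
      // Fetch the URI and sniff for TPF hypermedia controls; if they are
      // present, the store could configure itself to use the interface.
      public static boolean looksLikeTpf(String endpoint) throws Exception {
          HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                  .header("Accept", "text/turtle")
                  .build();
          HttpResponse<String> response = HttpClient.newHttpClient()
                  .send(request, HttpResponse.BodyHandlers.ofString());
          // Crude string check for the Hydra search form a TPF server
          // advertises; a real store would parse the graph instead.
          String body = response.body();
          return body.contains("hydra:search")
              || body.contains("http://www.w3.org/ns/hydra/core#search");
      }
  }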

Once that's done, it could be interesting to use a non-HTTP protocol with 
Andy Seaborne's RDF Binary using Thrift:
https://github.com/afs/rdf-thrift/blob/master/rdf-binary-thrift.md
(lighter-weight than HDT)... I'm just speculating still, but I think it is 
an interesting direction.


> On the other hand, we might want to keep things simple
> and just assume the regular Web protocol.
> 
> What do others think?

/me shuts up and looks around the room :-)

> > 3) I really don't understand why it is required that CORS is given with
> > a wildcard…?
> 
> Because we actually ran into this problem [2].
> I had successfully tested the KBOdata triple pattern fragments server [3]
> against a triple pattern fragments client running on a test server A,
> and when I deployed it on production server B, it failed,
> because the browser had cached the fragments with a CORS header for A.

Right!
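
Just to spell the failure mode out for the archives (hostnames are made 
up), the cached fragment would look something like:

  GET /fragments?s=&p=&o= HTTP/1.1
  Host: tpf.example.org
  Origin: http://server-a.example

  HTTP/1.1 200 OK
  Access-Control-Allow-Origin: http://server-a.example
  Cache-Control: public, max-age=604800

Once that response is cached, a client on server B's origin is served the 
copy with A's origin pinned in the header, and the browser blocks it; 
"Access-Control-Allow-Origin: *" sidesteps that.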

> The other option is "Vary: Access-Control-Allow-Origin" I presume.

Actually, I have no experience with this; what does it do?

> > You're creating a huge minefield here
> > when user credentials are involved: Surely, TPFs are not intended just
> > for public resources?
> 
> Aren't there better ways to deal with authentication?

My general impression is that it is "strong ways, ways that users will 
actually use, ways that developers will actually use, choose any two" ;-)

> You're free to do that.
> But why make this a special case?

Because I'm not quite sure how I would do it... :-) Is a 400 saying "it is 
an error to try to download all my data this way", with a Location header 
pointing to the dump, even allowed? Or perhaps a 301, to say that they 
shouldn't use that URI again, but without interrupting their flow? Or 
perhaps a 303, since the data dump may not have control information like 
the fragment does? Or...?
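
To make the 303 option concrete (URIs are made up):

  GET /fragments?subject=&predicate=&object= HTTP/1.1
  Host: example.org

  HTTP/1.1 303 See Other
  Location: http://example.org/dump.nt.gz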

> Actually, that fragment *is* simply the data dump.

Well, plus metadata and controls, so not exactly... but still.
 
> > My plan was to just throw a 400 in the first release, but subsequently
> > put in a Location header that points to the data dump. A redirect
> > might be more appropriate.
> 
> Exactly my thoughts; just a redirect would be nice.
> But then again, is this case so bad?
> 
> Mostly, ?s ?p ?o will be asked for the metadata (or for the controls!);
> just providing the first 100 triples and no paging could also be fine.
> (Clients simply won't be able to solve * queries then.)

Right, but that could make it somewhat unpredictable where the limits are, 
like the LIMIT 10000 used in most SPARQL endpoints. Besides, what do you 
mean by "first 100 triples"? There is generally no concept of order in my 
RDF (which is a reason I don't quite like the paging concept either, but I 
have no better suggestion).
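
(In SPARQL terms it is the same ambiguity as

  SELECT * WHERE { ?s ?p ?o } LIMIT 100

without an ORDER BY: which 100 triples you get is up to the 
implementation.)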

> > I don't support paging for now. :-) Anyway, I think it would be
> > interesting to deal with what I suspect to be a rather common
> > situation, that a certain triple pattern is a resource with another
> > URI already.
> 
> Could you clarify this?

Yeah, some will have implemented their Linked Data as something that returns 
{ <Request-URI> ?p ?o . }, so in that case, it would be natural for this 
fragment to be just a redirect to the Request-URI.
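
Concretely (made-up URIs), a request for the fragment with subject 
http://example.org/alice and unbound predicate and object could just answer:

  GET /fragments?subject=http%3A%2F%2Fexample.org%2Falice HTTP/1.1
  Host: example.org

  HTTP/1.1 303 See Other
  Location: http://example.org/alice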

Cheers,

Kjetil

Received on Monday, 21 July 2014 21:12:37 UTC