
Re: paging in editor's draft

From: John Arwe <johnarwe@us.ibm.com>
Date: Mon, 4 Nov 2013 09:09:45 -0500
To: public-ldp-wg@w3.org
Message-ID: <OFC9625BEA.D5AFC43A-ON85257C19.004A8F15-85257C19.004DCCA6@us.ibm.com>
> >>> to me, the whole idea that there is some "natural order" to a
> >>> collection leads to all kinds of false assumptions. a collection is
> >>> a set

I think we are talking about subtly different things here.

My original statement was

> I do think it somewhat likely that servers are likely to have some 
> "natural order" that they expect clients to traverse in, probably 
> specific to the resource and/or its type; to some degree this aligns

That is a statement about server optimizations, not client interface. 
Server implementations optimize for the expected client interactions. 
General clients cannot depend on it in the absence of other information, 
and LDP today is mostly silent about what the other information is; the 
exception is sorting, but that is still "weak" information.

It's a wee bit odd to disclaim all notion of order; even RFC 5005 talks 
about this in section 3.  If the link relation names (first, next, 
previous, last) aren't suggestive enough of a natural order, their 
descriptions repeatedly refer to 'a series of documents'.  Its first two 
paragraphs warn about reading too much into them, granted.
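For illustration, here's a toy sketch of pulling those RFC 5005 relation 
names out of an HTTP Link header.  The parser and the example URLs are 
mine, not anything LDP or 5005 defines; a real client would use a proper 
Link-header parser.

```python
# Toy parser: map RFC 5005 relation names (first, next, previous, last)
# to the URLs a server advertised in a Link header.  Illustrative only.
def parse_links(link_header):
    links = {}
    for part in link_header.split(","):
        url_part, rel_part = part.split(";")
        rel = rel_part.split("=")[1].strip().strip('"')
        links[rel] = url_part.strip().strip("<>")
    return links

# e.g. parse_links('<page1>; rel="first", <page3>; rel="next"')
```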

LDP sorting only describes the relationship between (sub)sets (wrt the 
entire collection) of members on *different* pages; it does not say 
anything about the order of members on a single page (nor would saying 
that really make much sense when dealing with RDF). 

LDP gives servers a way to expose the sort order that was used (that 
"influenced" the allocation of member triples to pages, if you prefer), 
based on the collection as it existed at a point in time.  A client will 
not be able to retrieve another page until it has received a response to 
some earlier retrieval request (that's the only way it finds out the other 
page URLs), so by definition clients will access pages at different points 
in time, and therefore (absent some additional out-of-band information) 
they can't make any strong assumptions about the "collection as a whole".
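To make the timing point concrete, a page walk necessarily looks something 
like the sketch below; fetch_page and the "next" key are placeholders for 
whatever retrieval machinery a client actually has, not an LDP API.

```python
# Sketch of a page walk: the client only learns page N+1's URL from the
# response for page N, so each page is necessarily fetched at a different
# point in time.
def walk_pages(fetch_page, first_url):
    url = first_url
    while url is not None:
        page = fetch_page(url)      # network request at some time t_i
        yield page
        url = page.get("next")      # only known after this response
```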

Could certain *server implementations* make stronger guarantees, along the 
lines of what database folks call cursors?  Sure.  Is LDP currently 
attempting to require cursor-like behaviors?  No (not intentionally, at 
any rate).

Could clients "learn more", using existing standard mechanisms?  Of 
course.  E.g. if a client sees that a collection's ETag did not change 
between the time it retrieved the first and last pages, then it knows that 
Ashok's "insert in the middle" case did not occur.  Does this cost the 
client extra requests?  Absolutely.
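One way to realize that check - all names here are illustrative, and the 
two extra GETs are exactly the cost mentioned above:

```python
# Illustrative ETag check: GET the collection before and after walking
# its pages; a stable ETag means no "insert in the middle" happened in
# between.  get() and the response shape are assumptions, not LDP API.
def unchanged_during(get, collection_url, walk):
    before = get(collection_url)["ETag"]   # extra request no. 1
    walk()                                 # retrieve first..last pages
    after = get(collection_url)["ETag"]    # extra request no. 2
    return before == after
```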

I've had people build internal protocols with stronger guarantees than 
LDP has - at least they tried it.  They found their "general" client 
quickly imploded under its own assumptions once it had to deal with more 
than one server implementation; once they re-oriented their thinking to 
treat this as a distributed system with asynchronous requests, they were 
much happier.  They had a bit more to deal with on the client side, and 
they had to just cope with not being able to remove certain timing windows 
"upstream", but they got over it.

Best Regards, John

Voice US 845-435-9470  BluePages
Tivoli OSLC Lead - Show me the Scenario
Received on Monday, 4 November 2013 14:28:11 UTC
