- From: John Arwe <johnarwe@us.ibm.com>
- Date: Wed, 17 Sep 2014 14:54:31 -0400
- To: Benjamin Armintor <armintor@gmail.com>
- Cc: Austin William Wright <aaa@bzfx.net>, kjetil@kjernsmo.net, public-ldp-comments@w3.org
- Message-ID: <OFBFBEA357.3FCE8048-ON85257D56.0065333F-85257D56.0067E263@us.ibm.com>
> The 2NN status might wreak less havoc with caches than alternatives, but I'm not confident it will be a lot less.

I think that's an interesting point. To date we've focused on the effects on the "application client" rather than intermediaries. I'll simply note a few things:

1: Paging is not dependent on 2NN, which is one reason that having it At Risk is "ok" with the working group. You could completely ignore 2NN, use 303 (paying the extra round-trip latency penalty), and every existing intermediary that behaved correctly before and that implements HTTP correctly (conforms) will still behave correctly - that would be the working group's assertion. Counter-examples heartily welcomed.

2: Informally, I think people have higher expectations for the correctness of cache implementations, and they expect that "anything that works for a client works for a cache [playing the role of HTTP client in an end-to-end flow]". Getting out my RFC-monger's microscope, [1] appears to cover this case when it says "a recipient MUST NOT cache a response with an unrecognized status code." FWIW, the equivalent statement is in the superseded RFC 2616 (section 6.1) as well. Any caches violating something that has been in the RFCs for 15+ years would not receive much sympathy from me, personally.

> I understand that RDF stores needn't provide predictable order for triples, but don't personally follow how such a resource's triples could then be paged in any useful way, which seems to bring us back to content negotiation.

Funny you should "ask" - I had the same problem until recently (within, oh, the last 6 months), until I convinced Sandro to provide an example if it's so easy - and he did. I captured his example in [2]. The basic pattern is that *the server* has some algorithm for slicing the graph into pieces, and the breadcrumb for the boundary(ies) gets encoded in the server's page URLs ... the client does not, should not be, and is not aware of it.
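To make that pattern concrete, here is a minimal, purely illustrative sketch (not taken from the spec or from Sandro's example): the server imposes its own repeatable ordering on an otherwise unordered set of triples, and encodes the boundary of each page as an opaque token in the next-page URL. The triple data, the `/netWorth?page=` URL shape, and the lexicographic sort are all my assumptions for illustration.

```python
# Illustrative sketch only: a server pages an unordered set of RDF-like
# triples by imposing its own repeatable ordering and encoding the page
# boundary as an opaque token in the next-page URL. All names are invented.
from base64 import urlsafe_b64encode, urlsafe_b64decode

TRIPLES = {
    ("<nw>", "o:netWorth", '"100"'),
    ("<nw>", "o:asset", "<a1>"),
    ("<nw>", "o:asset", "<a2>"),
    ("<nw>", "o:liability", "<l1>"),
    ("<nw>", "o:advisor", "<adv1>"),
}

PAGE_SIZE = 2

def page(after_token=None):
    """Return (triples_for_this_page, next_page_url_or_None)."""
    # The server's private, repeatable ordering: plain lexicographic sort.
    # The client never sees or depends on this choice.
    ordered = sorted(TRIPLES)
    start = 0
    if after_token:
        # Decode the breadcrumb: the last triple of the previous page.
        last = tuple(urlsafe_b64decode(after_token).decode().split("\x00"))
        start = ordered.index(last) + 1
    chunk = ordered[start:start + PAGE_SIZE]
    next_url = None
    if start + PAGE_SIZE < len(ordered):
        token = urlsafe_b64encode("\x00".join(chunk[-1]).encode()).decode()
        next_url = "/netWorth?page=" + token  # opaque to the client
    return chunk, next_url
```

A client simply follows each `next_url` until it is `None`, never interpreting the token; re-walking the pages yields every triple exactly once because the server's ordering is repeatable.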
It just needs to be repeatable and to follow the guarantee of 6.2.7 etc. If you hark back to the assets/liabilities/advisors example in LDP, an equally viable strategy that a server could use is: page 1 = all the "net worth but not container" triples, page 2 = assets, page 3 = liabilities, page 4 = advisors. Clearly that's artificial, but it illustrates that once you see the pattern, the server can probably find an assumption it already relies on to make things work efficiently enough.

Just as clearly, that does not suffice in cases where the client wishes to impose some ordering on the triples in the response - the classic example being a container's members returned sorted in some client-controlled manner. LDP/LDP Paging do not provide a standard way for the client to specify its order, so that has to be layered on top via extensions to LDP. Once an ordering known to the client can be imposed on the things, other interesting things happen - some of your comments on LDP Paging, Ben, were poking at exactly that aspect, I think.

[1] http://tools.ietf.org/html/rfc7231#section-6
[2] http://www.w3.org/TR/ldp-paging/#ldpr-impl

Best Regards, John

Voice US 845-435-9470
BluePages
Cloud and Smarter Infrastructure OSLC Lead
Received on Wednesday, 17 September 2014 18:55:42 UTC