Re: Suggestion for NEW Issue: Pipelining problems

On 7/23/07, Roy T. Fielding <fielding@gbiv.com> wrote:
> of the network.  Likewise, pipelining has huge benefits for
> specialized HTTP services, such as Subversion (when it uses it)
> and content management systems.

ra_serf (optional in Subversion 1.4+ and using the serf client
library) uses pipelining.

The pipelining problems I've seen in the real-world are:

- Not knowing how many requests a connection will be able to serve.
 The default in httpd is 100 requests per connection, but
svn.collab.net tuned it down to 10 - which caused all sorts of fun when
testing ra_serf against Subversion's own repository.  I eventually got
them to raise it back to the default.  serf tries to figure out the
limit heuristically (i.e., write as much as possible, then count how
many responses we got before the connection closed - that'll be the
limit going forward).

- Lost responses are, sadly, real.  Later versions of httpd 2.0.x
re-introduced a lingering-close bug where the server won't wait for
the last packet to be written; this is fixed in 2.2.x, but it still
has to be accounted for.

In general, what I settled upon is that serf remembers all in-flight
requests so that if a response never arrives (e.g. a lost response or
hitting the cap on the number of requests for that connection), we can
re-send them.  It's really the only reliable mechanism for dealing
with pipelined requests.
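The bookkeeping can be sketched like this (a hypothetical class, not
serf's actual code): keep every request that has been written but not
yet answered in a FIFO, match responses to it in order, and replay
whatever is left when the connection closes.

```python
from collections import deque


class PipelinedConnection:
    """Sketch of re-sendable pipelining: remember every request
    written but not yet answered, so unanswered ones can be replayed
    on a fresh connection after an unexpected close."""

    def __init__(self):
        self.in_flight = deque()  # requests written, awaiting responses

    def send(self, request):
        self.in_flight.append(request)
        # ... actually write the request to the socket here ...

    def on_response(self, response):
        # HTTP/1.1 responses arrive in the order requests were
        # written, so the oldest in-flight request is the match.
        request = self.in_flight.popleft()
        return request, response

    def on_connection_closed(self):
        # Anything still in flight got no response (lost response,
        # or we hit the per-connection cap) - replay it elsewhere.
        unanswered = list(self.in_flight)
        self.in_flight.clear()
        return unanswered
```

Usage: pipeline three requests, receive one response, then the server
closes the connection - the remaining two are handed back for re-send.

```python
conn = PipelinedConnection()
for req in ("GET /a", "GET /b", "GET /c"):
    conn.send(req)
conn.on_response("200 OK for /a")
retry = conn.on_connection_closed()  # ["GET /b", "GET /c"]
```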

Out-of-order pipelining would be nice for things like WebDAV, but a
bigger problem is that the WebDAV protocol requires specific methods
to be executed in lock-step, using the results from the prior response
as input for the next request.  This is the largest protocol headache
I have with WebDAV at the moment.  Subversion would ideally like to be
able to use pipelined commits with WebDAV, but that'll likely require
us forking the protocol in some sense to let the underlying methods be
executed out-of-order as well.  -- justin

Received on Wednesday, 8 August 2007 19:20:46 UTC