Re: I-D Action:draft-nottingham-http-pipeline-00.txt

On 10/08/2010, at 5:15 PM, Adrien de Croy wrote:

> Just playing devil's advocate here... does pipelining really deserve all this (large) effort?
> 
> To me it seems multiple TCP connections are better for several reasons:
> 
> a) orders of magnitude more simple to implement
> b) get true multiplexing without protocol changes to HTTP
> c) better supported with existing infrastructure (scalability issues apart, which is another issue)
> d) potentially better response wrt latency if you can request and start receiving an image or CSS before you've received all the HTML.

Yes; using multiple TCP connections isn't realistic when you're downloading fifty images across the Pacific Ocean. The connections don't share congestion state, each one has to climb through slow start independently, and so on. This is all well-covered in the background material on SPDY.
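To put rough numbers on it, here's a back-of-envelope sketch in Python. All the constants are my own assumptions (trans-Pacific RTT, a conservative initial window, object sizes), and the model is deliberately crude; the shape of the result is the point, not the absolute figures:

    import math

    SEGMENTS_PER_OBJECT = 10   # assumption: ~14 KB object / 1460-byte segments
    INITIAL_CWND = 3           # assumption: conservative initial window
    RTT_MS = 150               # assumption: trans-Pacific round trip

    def rtts_for_segments(segments, cwnd=INITIAL_CWND):
        """Round trips to move `segments` segments, doubling cwnd each RTT."""
        rtts = 0
        while segments > 0:
            segments -= cwnd
            cwnd *= 2
            rtts += 1
        return rtts

    def fresh_connections(n_objects, parallel):
        """Each batch of `parallel` fresh connections pays a handshake RTT
        and restarts slow start from scratch."""
        batches = math.ceil(n_objects / parallel)
        per_batch = 1 + rtts_for_segments(SEGMENTS_PER_OBJECT)
        return batches * per_batch * RTT_MS

    def pipelined(n_objects):
        """One warm connection: the window keeps growing across objects."""
        return rtts_for_segments(n_objects * SEGMENTS_PER_OBJECT) * RTT_MS

    print("50 objects, 6 parallel fresh connections:",
          fresh_connections(50, 6), "ms")   # ~5400 ms with these assumptions
    print("50 objects, 1 warm pipelined connection:",
          pipelined(50), "ms")              # ~1200 ms with these assumptions

Every fresh connection pays the handshake and starts slow start over; a single warm connection keeps its window.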


> Most browsers seem to have already taken this path, judging by the number of connections I see our proxy clients using. They are doing this successfully now, whereas achieving a significant deployment of agents with these proposed changes is a long time away.

Define "long time." One browser vendor implementing measures like these can change things pretty quickly, at least for their users.


> Pipelining is up against a fairly big chicken and egg problem, as well as a non-trivial implementation complexity problem (esp for proxies with plug-in filters trying to detect if an upstream server supports pipelining or not).

Agreed; I've never liked using heuristics for this sort of thing.
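For concreteness, the kind of probe a proxy plug-in gets pushed into writing looks something like the toy below. This is entirely hypothetical (not from the draft), and its fragility is exactly the problem: send two pipelined requests, then guess whether two sane responses came back.

    import socket

    def origin_pipelines_ok(host, path="/", timeout=5.0):
        """Hypothetical heuristic: pipeline two GETs and check that two
        response status lines come back. Crude on purpose: body bytes can
        contain 'HTTP/1.1 ', responses can be interleaved, and a slow
        origin looks the same as a broken one."""
        req = ("GET %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (path, host)).encode()
        with socket.create_connection((host, 80), timeout=timeout) as s:
            s.sendall(req + req)   # two requests, back to back
            data = b""
            try:
                while data.count(b"HTTP/1.1 ") < 2:
                    chunk = s.recv(4096)
                    if not chunk:
                        break
                    data += chunk
            except socket.timeout:
                pass
        return data.count(b"HTTP/1.1 ") >= 2

A "no" answer here is ambiguous between a pipelining-broken origin, a dropped response, and plain congestion, which is why I'd rather have an explicit signal.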


> I also find it hard to favour a protocol change to cope with buggy servers and intermediaries (e.g. interleaved responses, dropped responses, etc.). They should just be fixed.

"They should just be fixed" hasn't worked for the past ten years. I'd like to try something else.


> Adding an effective request URI to every response is a significant traffic overhead (at least please make it MD5(URI)). URIs can be very, very long (often several kB).

They can be, but usually aren't for the things you're interested in pipelining (e.g., images, JS libraries, stylesheets).
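For a sense of scale, here's a quick sizing comparison in Python. The header names and layout here are mine, purely for illustration; treat the exact syntax as a guess at what echoing the effective request URI (or an MD5 of it) would cost per response:

    import hashlib

    uris = [
        "http://www.example.com/images/logo.png",
        "http://www.example.com/css/site.css",
        # the pathological case: a multi-kB query string
        "http://www.example.com/search?q=" + "x" * 2048,
    ]

    for uri in uris:
        raw = "Assoc-Req: GET %s\r\n" % uri
        hashed = "Assoc-Req-MD5: %s\r\n" % hashlib.md5(uri.encode()).hexdigest()
        print("%5d bytes raw vs %d bytes hashed" % (len(raw), len(hashed)))

For a typical asset URI the raw form comes to roughly 55 bytes against a flat ~49 for the hash, so hashing only wins clearly in the multi-kB case, and it costs a computation on both ends plus debuggability.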


> This also is just to find broken intermediaries, since a new server employing this would presumably fix its pipelining bugs first?  Therefore why burden every response for this task?

Fair question. I agree that the ability to opt in has the potential to solve most server-side problems, if the server admin is willing to debug and check their server.

Are you suggesting that user agents basically don't pipeline at all when they discover a proxy (whether intercepting or configured)?


> A new administrative or content-provider burden relating to maintaining information about the likely benefits of pipelining seems to me well down on the list of things people want to worry about, and fraught with issues relating to authority and access to information. How can a server or human really know whether, once some content is deployed, a pipelined request will truly provide an advantage? It could depend on many unknowable factors, such as server load. Does a hosting site really want users fighting over who gets to put what meta tag in to try and get better responsiveness for their users?

Some people will certainly not be interested in this, but I know many (including, I suspect, my employer) who would be quite avidly interested.


> There are also some legitimate cases where the content sent back needs to be generated by an intermediary, or requests diverted / re-written; e.g. reverse proxies, payment gateways (e.g. hotels), corporate use-policy challenge pages, etc. The server generating the response may never have seen the actual request made by the client.

Not sure where you're going here.


> I just think it's already been put in the too-hard basket by many implementors, and they are just working around the perceived performance issues, so the opportunity for pipelining to provide real benefits is diminishing, compounded by the cost of development.

I disagree. Many developers are keenly interested in deploying new approaches (e.g., HTTP-over-SCTP, SPDY) to work around the same problem set; if we can get benefits in the common case (downloading page assets) by tweaking/hinting HTTP rather than making such fundamental changes, I think it's worth investigating.


Cheers,


--
Mark Nottingham     http://www.mnot.net/
