
Re: Fwd: I-D Action:draft-nottingham-http-pipeline-00.txt

From: Adrien de Croy <adrien@qbik.com>
Date: Tue, 10 Aug 2010 19:15:28 +1200
Message-ID: <4C60FC90.6080901@qbik.com>
To: Mark Nottingham <mnot@mnot.net>
CC: HTTP Working Group <ietf-http-wg@w3.org>

Just playing devil's advocate here... does pipelining really deserve all 
this (large) effort?

To me it seems multiple TCP connections are better for several reasons:

a) orders of magnitude simpler to implement
b) you get true multiplexing without protocol changes to HTTP
c) better supported by existing infrastructure (scalability issues 
aside, which is another matter)
d) potentially better latency, since you can request and start 
receiving an image or CSS file before you've received all the HTML.
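Point (d) can be illustrated with a toy latency model (a simplification of my own, not from the draft, assuming each response's arrival is dominated by its service time): on a single pipelined connection, responses must come back in request order, so one slow response head-of-line-blocks everything queued behind it, while separate connections deliver each response as soon as it is ready.

```python
def pipelined_finish_times(service_times):
    """One pipelined connection: responses are serialized in request
    order, so a slow response delays everything behind it."""
    finish, clock = [], 0
    for s in service_times:
        clock += s
        finish.append(clock)
    return finish

def parallel_finish_times(service_times):
    """One connection per request: each response arrives independently
    (all requests assumed to start at time zero)."""
    return list(service_times)

# A slow HTML page followed by two quick images (times in milliseconds).
times = [500, 10, 10]
print(pipelined_finish_times(times))  # [500, 510, 520]: images wait for the HTML
print(parallel_finish_times(times))   # [500, 10, 10]: images arrive at once
```

This ignores connection setup cost and bandwidth sharing, which cut the other way, but it captures the head-of-line-blocking effect that makes multiple connections attractive.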

Most browsers seem to have already taken this path, judging by the 
number of connections I see our proxy clients using.  They are doing 
this successfully now, whereas a significant deployment of agents 
implementing these proposed changes is a long way off.

Pipelining is up against a fairly big chicken-and-egg problem, as well 
as a non-trivial implementation complexity problem (especially for 
proxies with plug-in filters trying to detect whether an upstream 
server supports pipelining).

I also find it hard to favour a protocol change whose purpose is to 
cope with buggy servers and intermediaries (e.g. interleaved responses, 
dropped responses etc.); they should just be fixed.  Adding an 
effective request URI to every response is a significant traffic 
overhead (at least, please make it MD5(URI)); URIs can be very long 
(often several kB).
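To put a rough number on the MD5(URI) suggestion, here is a small sketch (the example URI is made up; the point is only that the digest has a fixed length while the echoed URI does not):

```python
import hashlib

# A deliberately long URI, of the kind query parameters often produce.
uri = "http://example.com/path?" + "&".join(f"param{i}=value{i}" for i in range(300))

echoed = uri.encode("utf-8")              # echoing the URI verbatim in every response
digest = hashlib.md5(echoed).hexdigest()  # echoing MD5(URI) instead

print(len(echoed))  # several kB for a URI like this
print(len(digest))  # always 32 hex characters, regardless of URI length
```

Whatever header carries it, the hashed form caps the per-response overhead at a few dozen bytes instead of the full URI length.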

This mechanism also exists only to find broken intermediaries, since a 
new server deploying it would presumably have fixed its own pipelining 
bugs first.  Why, then, burden every response for this task?

A new administrative or content-provider burden, maintaining 
information about the likely benefits of pipelining, seems to me well 
down the list of things people want to worry about, and fraught with 
issues of authority and access to information.  How can a server 
operator or a human really know, once some content is deployed, whether 
a pipelined request will truly provide an advantage?  It could depend 
on many unknowable factors, such as other server load.  Does a hosting 
site really want users fighting over who gets to put what meta tag in, 
to try to get better responsiveness for their users?

There are also some legitimate cases where response content needs to be 
generated by an intermediary, or requests diverted or rewritten: e.g. 
reverse proxies, payment gateways (e.g. hotels), corporate use-policy 
challenge pages etc.  The server generating the response may never have 
seen the actual request made by the client.

I just think pipelining has already been put in the too-hard basket by 
many implementors, who are instead working around the perceived 
performance issues, so the opportunity for pipelining to provide real 
benefits is diminishing, compounded by the cost of development.



On 10/08/2010 1:39 p.m., Mark Nottingham wrote:
> FYI. I see this as the start of a discussion more than anything else.
> Cheers,
> Begin forwarded message:
>> From: Internet-Drafts@ietf.org
>> Date: 10 August 2010 11:30:02 AM AEST
>> To: i-d-announce@ietf.org
>> Subject: I-D Action:draft-nottingham-http-pipeline-00.txt
>> Reply-To: internet-drafts@ietf.org
>> A New Internet-Draft is available from the on-line Internet-Drafts directories.
>> 	Title           : Making HTTP Pipelining Usable on the Open Web
>> 	Author(s)       : M. Nottingham
>> 	Filename        : draft-nottingham-http-pipeline-00.txt
>> 	Pages           : 9
>> 	Date            : 2010-08-09
>> Pipelining was added to HTTP/1.1 as a means of improving the
>> performance of persistent connections in common cases.  While it is
>> deployed in some limited circumstances, it is not widely used by
>> clients on the open Internet.  This memo suggests some measures
>> designed to make it more possible for clients to reliably and safely
>> use HTTP pipelining in these situations.
>> This memo should be discussed on the ietf-http-wg@w3.org mailing
>> list, although it is not a work item of the HTTPbis WG.
>> A URL for this Internet-Draft is:
>> http://www.ietf.org/internet-drafts/draft-nottingham-http-pipeline-00.txt
>> Internet-Drafts are also available by anonymous FTP at:
>> ftp://ftp.ietf.org/internet-drafts/
>>> _______________________________________________
>>> I-D-Announce mailing list
>>> I-D-Announce@ietf.org
>>> https://www.ietf.org/mailman/listinfo/i-d-announce
>>> Internet-Draft directories: http://www.ietf.org/shadow.html
>>> or ftp://ftp.ietf.org/ietf/1shadow-sites.txt
>> --
>> Mark Nottingham     http://www.mnot.net/
Received on Tuesday, 10 August 2010 07:16:09 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:13:48 UTC