
Re: [OT] Re: #131: Connection limits (proposal)

From: Adrien de Croy <adrien@qbik.com>
Date: Tue, 20 Oct 2009 09:59:57 +1300
Message-ID: <4ADCD34D.2060805@qbik.com>
To: Jim Gettys <jg@freedesktop.org>
CC: "William A. Rowe, Jr." <wrowe@rowe-clan.net>, Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>

Jim Gettys wrote:
> We showed in the HTTP/1.1 paper that additional parallel connections 
> did not actually increase performance; fastest performance was 
> achieved with a single TCP connection.  But for many bad 
> implementations, doing so will, without the implementers having to 
> actually think or restructure their code.
That depends on how you define performance.  Agreed for the case where 
you are making a single HTTP request to download a large resource.

But for a site with a large number of embedded images or other parts 
(it's common nowadays for a home page to result in well over 100 
requests), serializing requests on a single connection stacks up the 
latency, and the result is a poor user experience.  Opening multiple 
connections and making concurrent requests greatly improves user 
experience.  That's why browsers do it.
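A back-of-the-envelope sketch of the point above (illustrative only, not from the original thread; it assumes one request per round trip with no pipelining, and ignores bandwidth and connection-setup cost):

```python
import math

def serial_time(requests, rtt):
    # One request per round trip on a single connection, no pipelining:
    # total wall-clock time grows linearly with the request count.
    return requests * rtt

def concurrent_time(requests, rtt, connections):
    # With several parallel connections the round trips overlap, so the
    # page needs roughly ceil(requests / connections) round trips.
    return math.ceil(requests / connections) * rtt

# 100 embedded resources at 100 ms RTT:
print(serial_time(100, 0.1))          # 10 seconds serialized
print(concurrent_time(100, 0.1, 6))   # ~1.7 seconds over 6 connections
```

Throughput for any one transfer is no better, but the time until the whole page is on screen drops roughly in proportion to the connection count, which is the user-experience effect being described.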

I don't think browsers do it to increase throughput due to poorly 
structured code.  Or are you talking about download accelerators?



Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Monday, 19 October 2009 20:56:31 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 1 March 2016 11:10:52 UTC