
Re: Our Schedule

From: Michael Sweet <msweet@apple.com>
Date: Mon, 26 May 2014 12:08:03 -0400
Cc: Greg Wilkins <gregw@intalio.com>, Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>
Message-id: <85C86C4E-BE3C-47AE-9A78-A4E439F04B8A@apple.com>
To: "Richard Wheeldon (rwheeldo)" <rwheeldo@cisco.com>
Richard,

I think the original SPDY white paper has some of this information, although I wasn't able to find all of the numbers broken out:

	http://dev.chromium.org/spdy/spdy-whitepaper

Table 3 pretty clearly shows the effect of latency on page load time. (Table 1 does as well, although you have to assume that DSL has higher latency than cable.)  TCP connection setup basically accounts for 1.5 RTTs (SYN, SYN-ACK, ACK), and a TLS handshake typically adds another 2 RTTs.  For some really quick, basic benchmarking, here is what I see via my 80Mbps cable Internet service (on which I regularly get the advertised bandwidth but generally see higher RTTs due to my geographic location) when loading the primary content of each of the named sites via HTTPS:
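As a back-of-the-envelope check, the connection-setup cost can be estimated directly from RTT using the figures quoted above (1.5 RTTs for TCP, 2 more for TLS). A minimal sketch; the function name is mine, not from any measurement script:

```python
# Rough per-connection setup overhead estimated from RTT alone,
# assuming ~1.5 RTTs for the TCP handshake and ~2 additional RTTs
# for a typical TLS handshake, as described above.
def setup_overhead_ms(rtt_ms, tls=True):
    rtts = 1.5 + (2.0 if tls else 0.0)
    return rtts * rtt_ms

# With a 79ms RTT, 3.5 RTTs works out to roughly 276ms of setup
# cost, which matches the HTTPS "Conn" column measured below.
print(setup_overhead_ms(79.0))             # 276.5
print(setup_overhead_ms(79.0, tls=False))  # 118.5
```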

  Site                  Total   RTT     Conn    Resp Headers  Response HTML   gzip HTML (est)
  --------------------  ------  ------  ------  ------------  --------------  ---------------
  www.apple.com          436ms    79ms   276ms    227 /  3ms  13066 / 157ms    3200 /  38ms
  www.cups.org           663ms   120ms   420ms    205 /  7ms   7803 / 236ms    2979 /  90ms
  www.google.ca          425ms    29ms   101ms    792 / 20ms  12295 / 304ms    4808 / 118ms
  www.msweet.org         711ms   104ms   364ms    217 / 13ms   5836 / 334ms    2119 / 121ms
  www.pwg.org            482ms    96ms   336ms    272 /  3ms  14606 / 143ms    4001 /  39ms

And then a separate run for HTTP:

  Site                  Total   RTT     Conn    Resp Headers  Response HTML   gzip HTML (est)
  --------------------  ------  ------  ------  ------------  --------------  ---------------
  www.apple.com          224ms    81ms   121ms    227 /  2ms  13151 / 101ms    3206 /  24ms
  www.cups.org           441ms   132ms   198ms    205 /  7ms   7803 / 236ms    2979 /  90ms
  www.google.ca           89ms    26ms    39ms    791 /  4ms  12210 /  46ms    4790 /  18ms
  www.msweet.org         223ms    88ms   132ms    217 /  4ms   5836 /  87ms    2119 /  31ms
  www.pwg.org            212ms    93ms   139ms    272 /  2ms  14606 /  71ms    4001 /  19ms

For each of these sites I measured the average ping, connect/TLS handshake, and GET times, and then looked at the returned headers and data.  I did not specify "Accept-Encoding: gzip" in the requests, so the sizes are uncompressed HTML.  The "gzip HTML (est)" column shows the compression that gzip would provide on the HTML, along with the corresponding estimated transfer time.  I did not measure the time to transmit the HTTP GET request; my requests did not include any of the usual cookie headers, which for each of these sites appear to add up to about 1k worth of cookies.
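The "gzip HTML (est)" column can be reproduced by compressing the fetched body and scaling the observed transfer time by the compression ratio. A hedged sketch of that estimate (function and variable names are mine, not from the original measurement setup):

```python
import gzip

def gzip_estimate(body: bytes, transfer_ms: float):
    """Estimate the compressed size of an HTML body and the
    corresponding transfer time, by scaling the measured
    uncompressed transfer time by the gzip compression ratio."""
    compressed_len = len(gzip.compress(body))
    ratio = compressed_len / len(body)
    return compressed_len, transfer_ms * ratio

# Example with a repetitive ~9 KB body: gzip shrinks it dramatically,
# and the estimated transfer time shrinks in proportion.
size, est_ms = gzip_estimate(b"<p>hello world</p>" * 500, 100.0)
print(size, round(est_ms, 1))
```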

While I understand I am not measuring the whole page load time (including dependent resources, which are often many times the size of the initial HTML document), these numbers *do* show the per-connection overhead and provide an easy comparison to response header and page content times and sizes.  Headers certainly appear to be in the noise, with significant improvement being possible simply by multiplexing multiple requests over a single connection and compressing the message body.

Now, I can understand the temptation to squeeze out every last percent of bandwidth reduction and have the optimum protocol for web browsing, etc., but ordinary users are probably not going to notice the few milliseconds that get shaved off by compressing headers.  They definitely *will* notice whole-second improvements in page load times, which is what multiplexing and message body compression will yield.


On May 26, 2014, at 9:00 AM, Richard Wheeldon (rwheeldo) <rwheeldo@cisco.com> wrote:

> I'm curious as to what data you're basing this on? I'm not saying you're wrong – just looking for evidence. In particular data or results that I could use to sanity test our own implementations,
>  
> Richard
>  
> From: Greg Wilkins [mailto:gregw@intalio.com] 
> 
> My experience from SPDY is that we are going to get most of the gains from multiplexing, reduced round trips and from push
>  

_________________________________________________________
Michael Sweet, Senior Printing System Engineer, PWG Chair




Received on Monday, 26 May 2014 16:08:38 UTC
