- From: Andy Davies <dajdavies@gmail.com>
- Date: Thu, 17 May 2012 10:11:33 +0100
- To: Matthew Wilcox <mail@matthewwilcox.com>
- Cc: WHATWG List <whatwg@whatwg.org>
Hi Matt,

You really want to know what the throughput is rather than just the bandwidth, and throughput is a bit of a PITA to work out in web conditions...

Throughput is a mixture of available TCP connection time, bandwidth, latency, packet loss, etc. In theory you could measure it from the browser, but there are a number of issues. Here are some examples:

A typical web page is made up of many components, which are retrieved via short, bursty conversations between the browser and server. The initial TCP connections go through a 'slow-start' phase while the client and server determine the optimal number of packets that can be sent without being acknowledged. The number of packets in flight and the latency effectively set a cap on the throughput, so measuring the resources that are downloaded first would probably under-report the available throughput.

Multiple (sub-)domains confuse things further... Assume you can effectively measure throughput from the first resources to be downloaded (HTML, CSS, etc.) - what happens if the images are on a different domain, e.g. a CDN? The throughput that's just been measured isn't applicable to the CDN's domain.

Caching complicates things further still, as you can't use anything that's in the cache to measure throughput (or can you?)

Although slightly tangential, it's worth having a read of Mike Belshe's "More Bandwidth Doesn't Matter (Much)" - http://www.belshe.com/2010/05/24/more-bandwidth-doesnt-matter-much/

I'm not sure I'd agree with Tab's comment that authors aren't the best people to decide what content should appear under different throughput conditions, though. If they aren't, I'm not sure who is. If it's the browser, then the author still has to signal their intent to the browser, so they are effectively making the choice.

Cheers

Andy

@andydavies
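P.S. For illustration only, here's a rough sketch of the kind of per-resource estimate being discussed, using the Resource Timing API. The function name is mine, and the transferSize field is an assumption (a later Resource Timing Level 2 addition, not something every browser exposes), so treat this as a sketch rather than a workable measurement - all the caveats above still apply: early resources are skewed by slow-start, cached and cross-origin entries report a transfer size of zero, and a figure measured against one origin says nothing about a CDN's.

```typescript
// Rough per-resource throughput estimate from the Resource Timing API.
// Caveats: early resources are still in TCP slow-start, cached entries and
// cross-origin entries without Timing-Allow-Origin report transferSize === 0,
// and entries from another origin (e.g. a CDN) say nothing about this one.
function estimateThroughputKbps(): number | null {
  const entries = performance.getEntriesByType(
    "resource"
  ) as PerformanceResourceTiming[];

  const samples = entries
    // Skip cached / opaque cross-origin entries and zero-length downloads.
    .filter((e) => e.transferSize > 0 && e.responseEnd > e.responseStart)
    .map((e) => {
      const seconds = (e.responseEnd - e.responseStart) / 1000;
      return (e.transferSize * 8) / 1000 / seconds; // kilobits per second
    });

  if (samples.length === 0) return null;

  // Median rather than mean, to dampen slow-start and congestion outliers.
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)];
}
```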
Received on Thursday, 17 May 2012 09:12:09 UTC