
RE: Performance implications of Bundling and Minification on HTTP/1.1

From: Henrik Frystyk Nielsen <henrikn@microsoft.com>
Date: Tue, 26 Jun 2012 05:02:17 +0000
To: Mark Nottingham <mnot@mnot.net>, Roberto Peon <grmocg@gmail.com>
CC: William Chan (Dz) <willchan@chromium.org>, HTTP Working Group <ietf-http-wg@w3.org>, Howard Dierking <howard@microsoft.com>
Message-ID: <3605BA99C081B54EA9B65B3E33316AF7346F740B@SN2PRD0310MB396.namprd03.prod.outlook.com>
I do agree that there are serious questions as to what exactly multiplexing can and cannot solve. The purpose of multiplexing multiple sub-streams over a single reliable stream is to get a higher degree of responsiveness on each of the individual sub-streams. That is, the premise is that by interleaving the sub-streams, it is possible to make progress on each of the streams individually. However, this necessarily requires that each sub-stream gets a relatively small window in which to transmit data: if the window gets too large, then only the active sub-stream will make progress and the other sub-streams will be blocked.

Getting the window size, and hence the degree of responsiveness, right without penalizing throughput by slowing down all the sub-streams requires a fair amount of information about network conditions, the relative importance of the sub-streams, and what gives the user the best experience for the given data. Not to mention that this has to happen between two arbitrary implementations.
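The trade-off Henrik describes can be illustrated with a toy round-robin multiplexer (a hypothetical sketch for intuition, not any proposed wire format): each sub-stream may send at most `window` bytes per turn, so a small window keeps every stream progressing, while a window larger than the streams themselves degenerates into serial transfer.

```python
from collections import deque

def interleave(streams, window):
    """Round-robin multiplexer: each sub-stream sends at most
    `window` bytes of its pending data per turn on the shared stream."""
    queue = deque(streams.items())  # (stream_id, pending bytes)
    frames = []
    while queue:
        sid, data = queue.popleft()
        frames.append((sid, data[:window]))  # emit one frame for this turn
        if len(data) > window:
            queue.append((sid, data[window:]))  # re-queue the remainder
    return frames

# Small window: both streams make progress on every round trip.
small = interleave({"a": b"x" * 6, "b": b"y" * 6}, window=2)
# Oversized window: stream "b" is blocked until "a" finishes entirely.
large = interleave({"a": b"x" * 6, "b": b"y" * 6}, window=10)
```

With `window=2` the frames alternate a, b, a, b, ...; with `window=10` the result is simply all of "a" followed by all of "b". Picking the window that balances this responsiveness against per-frame overhead is exactly the tuning problem described above.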

I simply don't think the argument holds that the protocol can magically solve this problem for any content, so the question becomes whether a more complex protocol will inherently make it easier to get the behavior right for any particular content. I am not saying that this is an impossible task, but I think it is fair to say that this remains to be seen; so does whether the improvement is of a degree that truly makes it worthwhile.

At the same time, I think it is reasonable to point out that optimizations such as bundling, minification, and compression are evolving, and they can have as big an impact on the user experience as anything we can do at the protocol level, if not a bigger one. If there are things we can do to help these optimizations work better in practice, that would be great.

Roberto mentions that there are lots of challenges in doing this today -- could we get these on the table and quantify them?

Henrik

-----Original Message-----
From: Mark Nottingham [mailto:mnot@mnot.net] 
Sent: Monday, June 25, 2012 20:34
To: Roberto Peon
Cc: Henrik Frystyk Nielsen; William Chan (Dz); HTTP Working Group; Howard Dierking
Subject: Re: Performance implications of Bundling and Minification on HTTP/1.1


On 23/06/2012, at 6:08 AM, Roberto Peon wrote:

> I'd argue another point.
> The amount of work necessary to optimize site performance for HTTP/1.1 today is large. The amount of knowledge necessary to do it properly is also large.
> This is not the way it should be!
> 
> The protocol should make it easier to do things right, and it should help in the (extremely frequent and likely) case that the site designer gets it wrong in little ways.

This is definitely an area that should be discussed. I've heard a few people express skepticism about multiplexing overall, because it requires the server to prioritise what's in the pipe, which in turn requires greater knowledge (and probably a bucketload of heuristics).

Right now those heuristics are applied to how browsers make requests; at least there, the data is applied in the same place it's most usefully sourced, and of course there are fewer browser implementations than there are server deployments (which is potentially the level at which this kind of tuning would need to take place for multiplexing).
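One way a server-side scheduler might act on such heuristics can be sketched as follows (a minimal illustration, assuming priorities are plain integers supplied by the application, not anything defined in a protocol):

```python
import heapq

class FrameScheduler:
    """Picks the next frame to send from the highest-priority stream
    that still has data queued (lower number = more important)."""

    def __init__(self):
        self._heap = []   # entries: (priority, arrival order, stream_id, payload)
        self._count = 0   # monotonic tie-breaker so equal priorities stay FIFO

    def enqueue(self, priority, stream_id, payload):
        heapq.heappush(self._heap, (priority, self._count, stream_id, payload))
        self._count += 1

    def next_frame(self):
        if not self._heap:
            return None
        _, _, stream_id, payload = heapq.heappop(self._heap)
        return stream_id, payload

# Hypothetical heuristic: send the HTML before the image,
# regardless of the order the responses became ready.
sched = FrameScheduler()
sched.enqueue(3, "img", b"\x89PNG...")
sched.enqueue(1, "html", b"<!doctype html>")
```

Even this toy version shows where the knowledge burden lands: someone on the server side has to assign those priority numbers per resource, which is exactly the tuning that today lives in browser request-ordering heuristics.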

Discuss :)

--
Mark Nottingham   http://www.mnot.net/

Received on Tuesday, 26 June 2012 05:02:59 GMT
