
Re: Performance implications of Bundling and Minification on HTTP/1.1

From: 陈智昌 <willchan@chromium.org>
Date: Mon, 25 Jun 2012 22:49:19 -0700
Message-ID: <CAA4WUYgfLaAcRrxr0O7Vy_NMQeugMty8pwMK1aThav0hSMpbJA@mail.gmail.com>
To: Henrik Frystyk Nielsen <henrikn@microsoft.com>
Cc: Mark Nottingham <mnot@mnot.net>, Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Howard Dierking <howard@microsoft.com>
On Mon, Jun 25, 2012 at 10:02 PM, Henrik Frystyk Nielsen <
henrikn@microsoft.com> wrote:

> I do agree that there are serious questions as to what exactly
> multiplexing can and cannot solve. The purpose of multiplexing multiple
> sub-streams over a single reliable stream is to get a higher degree of
> responsiveness for each of the individual sub-streams. That is, the premise
> is that by interleaving the sub-streams, it is possible to make progress on
> each of the streams individually. However, this necessarily requires that
> each sub-stream gets a relatively small window in which to transmit data.
> If this window gets too large then only the active sub-stream will make
> progress and the other sub-streams will get blocked. Getting the window
> size and hence the degree of responsiveness right, without penalizing
> throughput by slowing down all the sub-streams, requires a fair amount of
> information about network conditions, the relative importance of the
> sub-streams, and what gives the user the best experience for the given
> data. Not to mention that this has to happen between two arbitrary
> implementations.
>

Just to double-check, are you asserting that you need a complicated frame
sizing implementation in order to get good performance out of multiplexing?
In Chromium, we don't do anything complicated for SPDY frame sizes: we clamp
them to 2*MSS. Please refer to:
http://code.google.com/searchframe#OAMlx_jo-ck/src/net/spdy/spdy_session.h&l=41
http://code.google.com/searchframe#OAMlx_jo-ck/src/net/spdy/spdy_session.cc&exact_package=chromium&l=634
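In case those links rot, the clamp is literally just a min(). Here's a
standalone sketch of the idea (the MSS value is an illustrative assumption,
not necessarily Chromium's exact constant):

```cpp
// Sketch of the frame-size clamp described above: cap each data frame at
// roughly two TCP maximum segment sizes. Constants are illustrative.
#include <algorithm>
#include <cassert>
#include <cstddef>

const std::size_t kMss = 1430;                        // assumed typical MSS
const std::size_t kMaxSpdyFrameChunkSize = 2 * kMss;  // clamp to 2*MSS

// Returns how many bytes of the pending payload to put in the next frame.
std::size_t ClampFrameSize(std::size_t bytes_remaining) {
  return std::min(bytes_remaining, kMaxSpdyFrameChunkSize);
}
```

That's the whole algorithm: small payloads go out in one frame, large
payloads get chopped into 2*MSS pieces so other streams can interleave.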

I am assuming when you say window you mean frame, and you're not actually
trying to use flow control windows to achieve responsiveness. Pardon me if
I am misinterpreting your terminology. If you actually are talking about
flow control windows, then we ought to have a separate discussion thread,
as that's a whole other can of worms you're opening up :)
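For what it's worth, the responsiveness tradeoff you describe is easy to see
in a toy model (illustrative code, nothing to do with Chromium's actual
implementation): a round-robin interleaver where the per-turn chunk size is
the knob. Small chunks interleave the streams; a chunk larger than any
stream's payload degenerates into serving them one after another.

```cpp
// Toy round-robin multiplexer: interleave each stream's payload onto the
// wire in chunks of at most `chunk_size` bytes per turn. Returns, for each
// chunk emitted, the index of the stream it came from.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

std::vector<int> Interleave(const std::vector<std::string>& streams,
                            std::size_t chunk_size) {
  std::vector<std::size_t> sent(streams.size(), 0);
  std::vector<int> wire_order;
  bool progress = true;
  while (progress) {
    progress = false;
    for (std::size_t i = 0; i < streams.size(); ++i) {
      if (sent[i] < streams[i].size()) {
        sent[i] += std::min(chunk_size, streams[i].size() - sent[i]);
        wire_order.push_back(static_cast<int>(i));
        progress = true;
      }
    }
  }
  return wire_order;
}
```

With two 6-byte streams and chunk_size 2, the wire order is 0,1,0,1,0,1;
with chunk_size 100 it is just 0,1, i.e. the second stream is blocked until
the first finishes.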


> I simply don't think the argument holds that the protocol can magically
> solve this problem for any content, so the question then becomes whether a
> more complex protocol will inherently make it easier to get the behavior
> right for any particular content. I am not saying that this is an
> impossible task, but I think it is fair to say that this remains to be
> seen; as does whether the benefit is large enough to make it truly
> worthwhile.
>

Can you clarify which problem you're referring to? Do you mean frame
sizing? If so, I think I've shown the simplicity of Chromium's SPDY
algorithm above. If you meant prioritization of substreams, here's our
algorithm:
http://code.google.com/searchframe#OAMlx_jo-ck/src/content/browser/renderer_host/resource_dispatcher_host_impl.cc&exact_package=chromium&type=cs&l=206.
As you can see, it's a simple switch statement. Can we do better? Very
possibly!
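To give a flavor of what that switch statement does (the enum and function
names below are illustrative, not Chromium's exact identifiers): it just
maps a resource's type to a fixed request priority.

```cpp
// Illustrative sketch of type-based prioritization: the frame/document and
// render-blocking resources outrank images, which outrank everything else.
#include <cassert>

enum class ResourceType { kMainFrame, kStylesheet, kScript, kImage, kOther };
enum class Priority { kHighest = 0, kMedium = 1, kLow = 2, kLowest = 3 };

Priority DetermineRequestPriority(ResourceType type) {
  switch (type) {
    case ResourceType::kMainFrame:
      return Priority::kHighest;
    case ResourceType::kStylesheet:
    case ResourceType::kScript:
      return Priority::kMedium;
    case ResourceType::kImage:
      return Priority::kLow;
    default:
      return Priority::kLowest;
  }
}
```

No network-condition heuristics, no per-site tuning; just a static mapping
from resource type to priority.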


>
> At the same time I think it is reasonable to point out that the use of
> optimizations such as bundling, minification, and compression are evolving
> and they can have as big if not bigger impact on the user experience than
> anything we can do at the protocol level. If there are things we can do to
> help these optimizations work better in practice then that would be great.
>

I agree that application level improvements have a bigger impact than
protocol level improvements. But I think it'd be great for the protocol to
eliminate the deficiencies that application developers have to work around
(e.g. domain sharding). There are so many web performance guidelines that
developers need to keep in mind; the fewer protocol deficiencies they have
to work around, the better.


> Roberto mentions that there are lots of challenges doing this today --
> could we get these on the table and quantify them?
>
> Henrik
>
> -----Original Message-----
> From: Mark Nottingham [mailto:mnot@mnot.net]
> Sent: Monday, June 25, 2012 20:34
> To: Roberto Peon
> Cc: Henrik Frystyk Nielsen; William Chan (陈智昌); HTTP Working Group; Howard
> Dierking
> Subject: Re: Performance implications of Bundling and Minification on
> HTTP/1.1
>
>
> On 23/06/2012, at 6:08 AM, Roberto Peon wrote:
>
> > I'd argue another point.
> > The amount of work necessary to optimize site performance for HTTP/1.1
> today is large. The amount of knowledge necessary to do it properly is also
> large.
> > This is not the way it should be!
> >
> > The protocol should make it easier to do things right, and it should
> help in the (extremely frequent and likely) case that the site designer
> gets it wrong in little ways.
>
> This is definitely an area that should be discussed. I've heard a few
> people express skepticism about multiplexing overall, because it requires
> the server to prioritise what's in the pipe, which in turn requires greater
> knowledge (and probably a bucketload of heuristics).
>
> Right now those heuristics are applied to how browsers make requests, but
> at least the data is applied in the same place it's most usefully sourced,
> and of course there are fewer browser implementations than there are server
> deployments (which is potentially the level that this kind of tuning would
> need to take place for multiplexing).
>
> Discuss :)
>
> --
> Mark Nottingham   http://www.mnot.net/
>
Received on Tuesday, 26 June 2012 05:49:51 GMT
