
Re: Review: http://www.ietf.org/id/draft-mbelshe-httpbis-spdy-00.txt

From: Patrick McManus <pmcmanus@mozilla.com>
Date: Wed, 29 Feb 2012 12:15:32 -0500
To: Amos Jeffries <squid3@treenet.co.nz>
Cc: ietf-http-wg@w3.org
Message-ID: <1330535732.2182.290.camel@ds9>
Hi Amos,

> Challenge: Implement a SPDY proxy which simply decompresses then 
> recompresses. Pass each request through 1-3 layers of this proxy. 
> Measure the gains and CPU loadings.

I'm going to start by saying that while computational scalability of all
parts of the ecosystem (servers, intermediaries, browsers, embedded
clients of other sorts, etc.) is important and must be kept within
reasonable limits, it is not the top priority for me in doing transport
design for the web.

The most important thing is enabling a better user experience (and
opening up new ones) over a network where bandwidth, cpu, memory, etc.
all keep scaling up but latency doesn't scale with them. Our current
strategies butt their heads into that limit all the time, whether it is
the delay of a handshake or the delay in the ability to sense and
respond to congestion.

I'm genuinely interested in what other priorities people see, but when I
look at the problems in web transport I don't start with "per
transaction cpu cost is holding back the web." But I do see that mobile
network characteristics make that form factor downright unusable at
times, even as the capabilities of the handset evolve rapidly. Add to
that that the traditional web screws up VOIP, that the lack of
cancellation semantics interacts with the connection mapping of http/1
in a way that creates very unresponsive interfaces, and that simple
tasks like pushing calendar updates are fraught with challenges
involving timeouts and security. Getting past those roadblocks is what
keeps me up at night.

Making it cheaper to operate implementations of all types is awesome -
let's do it where we can - but it isn't the most heavily weighted
priority for me.

So let's address those transport things and evolve what people can do
with the web in an open standards process. Spdy is a good start there
from my point of view - some parts of it have significant positive
experience (e.g. compression), which is worth a lot to me; others are
less tested and should be scrutinized harder (e.g. certificate
handling).

To bring this back to compression - I just took a set of 100 compressed
real headers and passed them through a decompress/recompress filter
1000 times in 350 milliseconds on one core of a rather unimpressive i5.
Spdy would do it faster because it tends to use smaller windows than
the default gzip. So that's a cpu overhead of .35ms per set of 100. The
headers were reduced from 44KB (for the set of 100) to ~4KB. That's
probably a reduction from 31 packets to 3. IW=4 means a difference of 3
rtts of delay to send 31 packets uncompressed vs no extra delay to send
3 compressed, because 3 packets fit in the initial window.
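
Roughly, the filter loop looks like the python sketch below. This is a
sketch, not my actual harness: the synthetic blob is just a stand-in
for the 100 real header sets, so the absolute numbers will differ with
real data and with spdy's smaller window settings.

  import time
  import zlib

  # stand-in data: ~44KB of redundant header text; the real test used
  # 100 real header sets
  headers = (b"GET /index.html HTTP/1.1\r\n"
             b"Host: www.example.com\r\n"
             b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
             b"Accept: text/html,application/xhtml+xml\r\n"
             b"Accept-Encoding: gzip, deflate\r\n\r\n") * 250

  compressed = zlib.compress(headers)

  start = time.time()
  for _ in range(1000):
      plain = zlib.decompress(compressed)   # what a proxy does inbound
      zlib.compress(plain)                  # ... and again outbound
  elapsed = time.time() - start

  print("1000 decompress/recompress passes: %.0f ms" % (elapsed * 1000))
  print("uncompressed %d -> compressed %d bytes"
        % (len(headers), len(compressed)))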

rtt varies a lot, but let's call that 300ms of latency saved at the
cost of .35ms of cpu. It's a trade-off to be sure, but imo the right
one for the net.
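
For the curious, the rtt arithmetic above falls out of a crude
slow-start model - assume cwnd starts at IW=4 segments and doubles
every round trip, ignoring loss:

  def rounds(packets, iw=4):
      # count round trips needed to send `packets` segments when cwnd
      # starts at iw and doubles each rtt (idealized slow start)
      cwnd, sent, r = iw, 0, 0
      while sent < packets:
          sent += cwnd
          cwnd *= 2
          r += 1
      return r

  print(rounds(31))  # -> 4 round trips for 31 uncompressed packets
  print(rounds(3))   # -> 1 round trip for 3 compressed packets
                     # difference: 3 rtts, ~300ms at a 100ms rtt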

Other schemes are plausible (e.g. per-session templates that can be
referenced per transaction) and I'm very open minded to them - but I
wanted to be clear that I haven't seen any problems with this one
accomplishing its objectives. I think its biggest weakness (though a
tolerable one) is that it creates a state management issue: some
classes of spec violations require connection termination instead of
being localized to the transaction.
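
To illustrate that last point with a toy python sketch (not the actual
spdy framing code): each direction of a connection runs one stateful
zlib context shared by all streams, so a single bad block poisons every
header block that follows it.

  import zlib

  class HeaderCodec(object):
      # one decompression context per connection, shared by all
      # streams, the way spdy header compression works
      def __init__(self):
          self._inflate = zlib.decompressobj()

      def decode(self, block):
          try:
              # sender used Z_SYNC_FLUSH, so blocks decode
              # sequentially against the shared dictionary
              return self._inflate.decompress(block)
          except zlib.error:
              # the shared state is now unsynchronized; headers on
              # *any* later stream would decode against garbage, so
              # the only safe recovery is closing the connection
              raise ConnectionError("compression state corrupt")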

Received on Wednesday, 29 February 2012 17:16:16 UTC
