W3C home > Mailing lists > Public > ietf-http-wg@w3.org > April to June 2012

Re: Adjusting our spec names

From: Poul-Henning Kamp <phk@phk.freebsd.dk>
Date: Sun, 01 Apr 2012 08:06:17 +0000
To: Roberto Peon <grmocg@gmail.com>
cc: Willy Tarreau <w@1wt.eu>, Mark Nottingham <mnot@mnot.net>, "<ietf-http-wg@w3.org> Group" <ietf-http-wg@w3.org>
Message-ID: <19702.1333267577@critter.freebsd.dk>
In message <CAP+FsNdU7g6u_taJ94jwBkkQUCEKq=uYoysG-yT0Rt8zmZPqqg@mail.gmail.com>
, Roberto Peon writes:

>I can believe there is a different world ahead w.r.t. bandwidth, but seeing
>massive RTT decreases would be surprising.

Yes, what I meant was that the bandwidth/RTT ratio will change
for the worse.

>> None of the proposals we have seen so far is anywhere near being
>> feasible in such a context.
>
>What is your basis for this statement? After all, your argument about
>technology improvements in the future cuts both ways:

My basis for this statement is a lot of experience in moving and
processing data at high speed, and the lack of fundamental speed
improvements in computers.

If you look at the technological advances in computers over the
last ten years, there have hardly been any.

We can add more transistors to a chip, but we can't make them run
faster.

What we have seen and what we will see is wider and wider NUMA
running at the same basic 3-4 GHz clock.

There are no solid-state physics advances that indicate we will
see anything else for the next 20 years.

In fact, we might not even see more and more transistors, as "extreme
ultraviolet lithography" appears to be a lot harder to tame in a
semi-fab than anybody expected.

You are right that we see, and will see, a lot of offloading.  I'm
working with an Ethernet controller now that has close to 1000 pages
of datasheet, and for many features of the chip the description is
cursory at best.

But offloading only gets you so far, because setting things
up takes time.  That's why you don't see very much offloading in
practice.

To a first approximation, processing a transaction can be done in:

	time = a + b * len

and it is the 'a' constant that gets you in the offloading case.
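A toy model of that cost curve makes the point concrete.  The
constants below are hypothetical, chosen only to illustrate how a
large setup constant 'a' makes an offload engine lose on small
transactions even when its per-byte cost 'b' is much lower:

```python
# Toy model of per-transaction cost: time = a + b * len.
# All constants are hypothetical, purely for illustration.

def txn_time(a, b, length):
    """Total processing time for one transaction of `length` bytes."""
    return a + b * length

# Offload engine: cheap per byte (small b) but expensive setup (large a).
# CPU: more expensive per byte, but near-zero setup.
offload = lambda n: txn_time(a=50.0, b=0.01, length=n)   # microseconds
cpu     = lambda n: txn_time(a=1.0,  b=0.10, length=n)

# For small transactions the setup constant dominates and the CPU wins;
# only large transfers amortize the offload setup cost.
assert cpu(100) < offload(100)        # 11.0 < 51.0 us at 100 bytes
assert offload(10_000) < cpu(10_000)  # 150.0 < 1001.0 us at 10 kB
```

The crossover point moves with the constants, but the shape of the
trade-off does not, which is exactly the HiFn story below.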

We saw this very clearly with crypto-offloading, for instance HiFn's
chips, where the cost of the setup was so high that in many cases
it was faster to just do the crypto on the CPU instead.

Right now everybody is scrambling to show that they can handle
full packet load at 10Gb/s and 40Gb/s.  At 10Gb/s that's 14 million
packets per second, and it's a serious challenge for most CPUs to
do any non-trivial work on those packets.
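The "14 million" figure falls out of the standard Ethernet line-rate
arithmetic for minimum-size frames (the exact number is usually
quoted as ~14.88 Mpps):

```python
# Back-of-envelope check of the "14 million packets per second" figure
# for minimum-size Ethernet frames at 10 Gb/s line rate.

LINE_RATE = 10e9          # bits per second
MIN_FRAME = 64            # minimum Ethernet frame, bytes
OVERHEAD  = 8 + 12        # preamble/SFD (8 B) + inter-frame gap (12 B)

bits_per_slot = (MIN_FRAME + OVERHEAD) * 8   # 672 bits per frame slot
pps = LINE_RATE / bits_per_slot

assert round(pps / 1e6, 2) == 14.88   # ~14.88 Mpps, "14 million" rounded
```

At that rate a 3 GHz core has roughly 200 clock cycles per packet,
which is why "non-trivial work" is the operative phrase.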

Doing a single SSL stream at 1Tb/s should not really be a problem
in hardware.

Doing 10 million 10kB HTTP transactions in a second is going to be
a heck of a challenge, if nothing else for the memory footprint.
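Quantifying that memory-footprint worry with straight arithmetic
(the per-connection state size and transaction lifetime below are
hypothetical, only the payload numbers come from the text):

```python
# 10 million HTTP transactions per second at 10 kB each,
# counted purely as bytes moved.

TXN_PER_SEC = 10_000_000
TXN_SIZE    = 10_000            # bytes (10 kB)

bytes_per_sec = TXN_PER_SEC * TXN_SIZE
assert bytes_per_sec == 100_000_000_000   # 100 GB/s of payload alone

# If each in-flight transaction also holds 1 kB of connection state
# and lives for 100 ms (both numbers hypothetical), the resident
# state is:
in_flight = TXN_PER_SEC // 10             # 100 ms lifetime -> 1M live
resident_state = in_flight * 1_000        # 1 GB of state, before payload
assert resident_state == 1_000_000_000
```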

>As for your first point... are you suggesting that we develop something
>other than HTTP/2.0?

No, I'm suggesting that we develop an HTTP/2.0 worthy of the '2' part
of the name :-)

>I'd suggest that we should design so that this is possible, but... I think
>that (charter aside) things which are fundamentally different from what
>HTTP does today would be biting off more than we can chew right now :/

Not really.

If you look at what HTTP does, it's actually very primitive: you can
send messages consisting of:
	{
	verb (GET)
	noun (URI)
	routing info (Host:, session cookies)
	metadata (other headers)
	(body)
	}

and get replies in almost the same form:
	{
	result (200)
	routing info (session cookies)
	metadata
	(body)
	}
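The two shapes above can be written down as plain data structures;
the field names here are mine, not from any spec, but the point is
how little structure there actually is:

```python
# The request/reply shapes described above, as plain data structures.
# Field names are illustrative, not taken from any specification.
from dataclasses import dataclass, field

@dataclass
class Request:
    verb: str                                     # e.g. "GET"
    noun: str                                     # the URI
    routing: dict = field(default_factory=dict)   # Host:, session cookies
    metadata: dict = field(default_factory=dict)  # other headers
    body: bytes = b""

@dataclass
class Reply:
    result: int                                   # e.g. 200
    routing: dict = field(default_factory=dict)   # session cookies
    metadata: dict = field(default_factory=dict)
    body: bytes = b""

req = Request(verb="GET", noun="/index.html",
              routing={"Host": "example.com"})
rep = Reply(result=200, metadata={"Content-Type": "text/html"})
assert req.verb == "GET" and rep.result == 200
```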

The performance bottleneck is not going to be the endpoints, but rather
the points along the path where traffic from many sessions
is aggregated:

	The telco's outgoing mobile->HTTP proxy
	The load-balancer of $bigsite

Both of those are essentially HTTP routers and they don't care about
the metadata and the body, they only care about some of the "envelope"
information, so they can find out where to send the stuff.
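A sketch of such an HTTP router, to make the division concrete: it
dispatches on envelope fields only and never looks inside metadata
or body (the backend table and field names are hypothetical):

```python
# Sketch of an "HTTP router": dispatch on envelope information only;
# metadata and body stay opaque.  Backend table is hypothetical.

BACKENDS = {
    "example.com":        "10.0.0.1",
    "static.example.com": "10.0.0.2",
}

def route(envelope: dict) -> str:
    """Pick a backend from envelope info alone; nothing else is read."""
    return BACKENDS[envelope["host"]]

assert route({"host": "example.com", "session": "abc123"}) == "10.0.0.1"
```

Anything the protocol buries (compresses, encrypts) that this
function needs, the router must first dig out, at line rate.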

Burying the envelope information in gzip or SSL would make it
patently impossible to route 1Tb/s of HTTP traffic, but a well
thought-out design in HTTP/2.0 will make it possible to do it in
both hardware and software.

There is a lot of focus on the list right now with respect to
compression and encoding to save bytes, but some of those saved
bytes should be spent again, to make the protocol faster to
process at the choke points in the network.

One of the really big problems with HTTP/1.1 in that respect is the
lack of a session concept, which people hack around with cookies.

HTTP/2.0 should have real sessions, so that HTTP-routers don't have
to inspect and mangle cookies.
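The difference for a router is between a constant-time slice and a
string parse per request.  The fixed-offset framing below is purely
illustrative (not any real or proposed HTTP/2.0 format), but it
shows the contrast with cookie mangling:

```python
# A first-class session vs. cookie mangling, from a router's view.
# The frame layout is hypothetical: 2-byte frame type, then a 4-byte
# session id at a fixed offset -- not any real HTTP/2.0 format.
import struct

def session_from_frame(frame: bytes) -> int:
    """Constant-time extraction: read 4 bytes at a known offset."""
    (session_id,) = struct.unpack_from(">I", frame, 2)
    return session_id

def session_from_cookies(header: str) -> str:
    """What HTTP/1.1 routers do today: scan and split a Cookie header."""
    for part in header.split("; "):
        name, _, value = part.partition("=")
        if name == "SESSIONID":
            return value
    return ""

frame = struct.pack(">HI", 1, 0xDEADBEEF) + b"payload"
assert session_from_frame(frame) == 0xDEADBEEF
assert session_from_cookies("lang=en; SESSIONID=abc123") == "abc123"
```

The fixed-field version is also trivially done in hardware, which
the cookie scan is not.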

We should absolutely _also_ make the protocol faster at the endpoints,
but the focus there is of a different kind (do we really need Date:
at all?), and there are other factors than speed in play (security,
anti-DoS etc.)

And we should also keep the HTTP/1.1 -> HTTP/2.0 issue in mind, but
I don't think it is very important, because nobody is going to be
able to drop HTTP/1.1 service for the next 10-15 years anyway, so
if our HTTP/2.0 can wrap up an HTTP/1.1 message and pass it through,
we're practically home.


-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
Received on Sunday, 1 April 2012 08:06:43 GMT