
Re: Adjusting our spec names

From: Roberto Peon <grmocg@gmail.com>
Date: Mon, 2 Apr 2012 00:08:34 +0200
Message-ID: <CAP+FsNdSw8HMTcTmk7Z6VE9yXnQjZtMwGL1iEwQuSGG0-VPKuA@mail.gmail.com>
To: Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc: Willy Tarreau <w@1wt.eu>, Mark Nottingham <mnot@mnot.net>, "<ietf-http-wg@w3.org> Group" <ietf-http-wg@w3.org>
On Sun, Apr 1, 2012 at 10:06 AM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:

> In message <CAP+FsNdU7g6u_taJ94jwBkkQUCEKq=uYoysG-yT0Rt8zmZPqqg@mail.gmail.com>, Roberto Peon writes:
>
> >I can believe there is a different world ahead w.r.t. bandwidth, but
> >seeing massive RTT decreases would be surprising.
>
> Yes, what I meant was that the bandwidth/RTT ratio will change
> for the worse.
>
> >> None of the proposals we have seen so far is anywhere near being
> >> feasible in such a context.
> >
> >What is your basis for this statement? After all, your argument about
> >technology improvements in the future cuts both ways:
>
> My basis for this statement is a lot of experience in moving and
> processing data at high speed, and the lack of improvements in
> computers.
>
> If you look at the technological advances in computers over the
> last ten years, there have been almost none.
>
> We can add more transistors to a chip, but we can't make them run
> faster.
>
> What we have seen and what we will see is wider and wider NUMA
> running at the same basic 3-4 GHz clock.
>
> There are no solid-state physics advances that indicate we will
> see anything else in the next 20 years.
>

Ah. My belief is that it is manufacturing methods that must change to
enable the next speed increases, not the transistor properties themselves,
as cooling and size seem to be the big limiters. One could go 'clockless'
as well. Building a successful e-to-o substrate on a chip would yield big
changes. We could build 'up', not just out.

Personally, I wouldn't put money on a bet that chips won't become faster. I
agree, however, that chips that look much like the ones we have today
may not, and that at some point the speed of light will be the bottleneck,
but we're not there, or close to it, yet.


>
> In fact, we might not even see more and more transistors, as "extreme
> ultraviolet lithography" appears to be a lot harder to tame in a
> semi-fab than anybody expected.
>
> You are right that we see, and will see, a lot of offloading. I'm
> working with an Ethernet controller now that has close to 1000 pages
> of datasheet, and for many features of the chip the description is
> at best a summary.
>
> But offloading only gets you so far, because setting things
> up takes time.  That's why you don't see very much offloading in
> practice.
>
> To a first approximation, processing a transaction can be done in:
>
>        time = a + b * len
>
> and it is the 'a' constant that gets you in the offloading case.
>
> We saw this very clearly with crypto offloading, for instance Hifn's
> chips, where the cost of the setup was so high that in many cases
> it was faster to just do the crypto on the CPU instead.
>

Not having played with Hifn's stuff, I don't know where the problems lay,
but it sounds like there was an architectural screwup if that was the case.
Can you speak to why the setup was so expensive? That just seems wrong
and bizarre.
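To make the trade-off concrete: with the `time = a + b * len` model from the quoted text, offload only pays when the per-byte savings amortize the setup cost. A small sketch, using purely illustrative coefficients (the numbers below are assumptions, not measurements of any real chip):

```python
# Break-even analysis for the offload cost model t = a + b * len.
# All coefficients here are ILLUSTRATIVE ASSUMPTIONS, not measurements.

def txn_time(a, b, length):
    """Time to process one transaction of `length` bytes."""
    return a + b * length

# Hypothetical numbers: offload has a large setup cost `a` but a
# much smaller per-byte cost `b` than doing the work on the CPU.
a_cpu, b_cpu = 1.0, 0.50        # e.g. microseconds, microseconds/byte
a_off, b_off = 40.0, 0.05

# Offload wins only when len > (a_off - a_cpu) / (b_cpu - b_off).
break_even = (a_off - a_cpu) / (b_cpu - b_off)
print(f"offload pays off above ~{break_even:.0f} bytes")

for n in (16, 64, 256, 1024):
    faster = "offload" if txn_time(a_off, b_off, n) < txn_time(a_cpu, b_cpu, n) else "cpu"
    print(f"{n:5d} bytes -> {faster}")
```

With these assumed numbers, short transactions stay on the CPU, which matches the HiFn experience described above: a setup cost that dominates makes offload a loss for small work items.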


>
> Right now everybody is scrambling to show that they can handle
> full packet load at 10Gb/s and 40Gb/s.  At 10Gb/s that's 14 million
> packets per second, and it's a serious challenge for most CPUs to
> do non-symbolic work on those packets.
>

From what I've seen, the CPUs aren't the bottleneck; it is the architecture
of the IO subsystems of our OSes and often a lack of multiqueue on the NICs.


>
> Doing a single SSL stream at 1Tb/s should not really be a problem
> in hardware.
>

Agreed.

We're not that far off from being able to do it in software, either. On a
12-core 3 GHz system (which is available these days), you'd have to do ~28
bits/cycle on each core. Bus speeds of current architectures would likely
be the holdup.
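The ~28 bits/cycle figure falls straight out of the stated assumptions (1 Tb/s spread evenly across 12 cores at 3 GHz):

```python
# Checking the "~28 bits/cycle" figure for 1 Tb/s of SSL in software.
target = 1e12                 # 1 Tb/s
cores = 12
clock = 3e9                   # 3 GHz

bits_per_cycle_per_core = target / (cores * clock)
print(f"{bits_per_cycle_per_core:.1f} bits/cycle/core")   # ~27.8
```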


>
> Doing 10 million 10 kB HTTP transactions per second is going to be
> a heck of a challenge, if nothing else for the memory footprint.
>
> >As for your first point... are you suggesting that we develop something
> >other than HTTP/2.0?
>
> No, I'm suggesting that we develop a HTTP/2.0 worthy of the '2' part
> of the name :-)
>

heh
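Incidentally, the "10 million 10 kB transactions per second" figure quoted above is worth working out, since the arithmetic makes the challenge vivid:

```python
# What 10 million 10 kB HTTP transactions per second implies.
TXN_PER_SEC = 10e6
BYTES_PER_TXN = 10e3

bytes_per_sec = TXN_PER_SEC * BYTES_PER_TXN          # 1e11 B/s
print(f"{bytes_per_sec / 1e9:.0f} GB/s of data moved")       # 100 GB/s
print(f"{bytes_per_sec * 8 / 1e9:.0f} Gb/s of line rate")    # 800 Gb/s
```

That is 100 GB touched every second before counting any per-transaction state, which is why the memory footprint, not the CPU, looks like the hard part.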


>
> >I'd suggest that we should design so that this is possible, but... I think
> >that (charter aside) things which are fundamentally different from what
> >HTTP does today would be biting off more than we can chew right now :/
>
> Not really.
>

To be clear, my opinion is that we can lay the groundwork on which a change
of semantics is built (e.g. from request->response to more arbitrary
message<->message or message<->stream mappings), but that attempting to
define those semantics here and now will make it less likely for the entire
effort to succeed.

I also think that, if we could succeed at defining additional semantics as
seen from the endpoints, that it would be very valuable and useful... so
our disagreement is just in timing.


>
> If you look at what HTTP does, it's actually very primitive. You can
> send messages consisting of:
>        {
>        verb (GET)
>        noun (URI)
>        routing info (Host:, session cookies)
>        metadata (other headers)
>        (body)
>        }
>
> and get replies on almost the same form:
>        {
>        result (200)
>        routing info (session cookies)
>        metadata
>        (body)
>        }
>
> The performance bottleneck is not going to be the endpoints, but rather
> the points along the path where traffic from many sessions
> is aggregated:
>
>        The telcos outgoing mobile->HTTP proxy
>        The load-balancer of $bigsite
>

imho, from the point of view of the client, the load-balancer of $big-site
or any other reverse proxy is likely the endpoint as viewed by the client.
I do believe in making these as fast as possible, without sacrificing what
the user and the content provider want to do with their applications to
whatever the forward proxies and traffic mungers want to do.
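The primitive message form Poul-Henning sketches above maps naturally onto a small "envelope" that an HTTP router could act on without ever touching metadata or body. A sketch of that idea, with hypothetical field names (none of this is from any actual proposal):

```python
# Sketch of the "HTTP router" idea: an intermediary that routes on
# envelope fields only, treating metadata and body as opaque bytes.
# Field and backend names below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Envelope:
    verb: str          # GET, POST, ...
    uri: str
    host: str          # routing info (Host:)
    session: str       # routing info (session identifier)

@dataclass
class Message:
    envelope: Envelope
    metadata: bytes    # opaque to the router (may be compressed/encrypted)
    body: bytes        # opaque to the router

BACKENDS = {"example.org": "10.0.0.1", "example.net": "10.0.0.2"}

def route(msg: Message) -> str:
    """Pick a backend using only the envelope -- the router never
    parses msg.metadata or msg.body."""
    return BACKENDS[msg.envelope.host]
```

The point of the sketch is the argument in the quoted text: if the envelope fields stay cheap to find (not buried in gzip or SSL), the router's work is a table lookup; if they don't, every intermediary has to decompress or decrypt at the choke point.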

>
> Both of those are essentially HTTP routers and they don't care about
> the metadata and the body, they only care about some of the "envelope"
> information, so they can find out where to send the stuff.
>
> Burying the envelope information in gzip or SSL would make it
> patently impossible to route 1Tb/s of HTTP traffic, but a well
> thought out design in HTTP/2.0 will make it possible to do it in
> both hardware and software.
>
> There is a lot of focus on the list right now with respect to
> compression and encoding to save bytes, but some of those saved
> bytes should be spent again, to make the protocol faster to
> process at the choke points in the network.
>
> One of the really big problems with HTTP/1.1 in that respect, is the
> lack of a session-concept, which people hack around with cookies.
>

I prefer to think of it as a shared context, because "session" has so many
other interpretations at this point, but yes, I agree. Willy's proposal has
parts/ideas which address that and which (as Willy knows) I think are
helpful and valuable.


>
> HTTP/2.0 should have real sessions, so that HTTP-routers don't have
> to inspect and mangle cookies.
>

Defining sessions may be OK, so long as discrimination against various
endpoints can be accomplished by intermediaries only when the user has
given them permission to do so. Given the often non-aligned interests of
any non-reverse-proxy intermediary with the user, I think this is necessary
for the long-term health of the Internet.

Perhaps the WG (i.e. we) should do an analysis of motivations and incentives
for forward/reverse proxies and other intermediaries? Understanding the game
theory would be helpful for everyone and would inform decisions that are
more likely to benefit everyone in the long term.

It doesn't matter if my site and LB can do 1Tb/s if AT&T decides to
throttle my site because I'm not double-paying or otherwise bribing them to
let my packets through, in something like a protection racket. To be clear,
AT&T is proposing something like this (with much different wording, of
course) for mobile right now. I'd rather not rely upon the regulatory
environment for a good outcome on stuff like this...


> We should absolutely _also_ make the protocol faster at the endpoints,
> but the focus there is of a different kind, (Do we really need Date:
> at all ?) and there are other factors than speed in play (security,
> anti-DoS etc.)
>
> And we should also keep the HTTP/1.1 -> HTTP/2.0 issue in mind, but
> I don't think it is very important, because nobody is going to be
> able to drop HTTP/1.1 service for the next 10-15 years anyway, so
> if our HTTP/2.0 can wrap up a HTTP/1.1 message and pass it through,
> we're practically home.
>


I think it is important for the transition period, where people deploy
reverse proxies or load balancers which speak HTTP/2.0 to the client over
the large-RTT net segment and then speak HTTP/1.1 to their backends until
they eventually upgrade.

-=R


>
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk@FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>
Received on Sunday, 1 April 2012 22:09:08 GMT
