Re: Adjusting our spec names

On Sun, Apr 01, 2012 at 08:06:17AM +0000, Poul-Henning Kamp wrote:
> In message <CAP+FsNdU7g6u_taJ94jwBkkQUCEKq=uYoysG-yT0Rt8zmZPqqg@mail.gmail.com>
> , Roberto Peon writes:
> 
> >I can believe there is a different world ahead w.r.t. bandwidth, but seeing
> >massive RTT decreases would be surprising.
> 
> Yes, what I meant was that the bandwidth/RTT ratio will change
> for the worse.

Yes, the BDP is going to keep increasing, since bandwidth grows faster
than RTT shrinks and RTT has a hard lower bound anyway.
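
To put rough numbers on it, here is a back-of-the-envelope BDP
computation; the 1 Gb/s and 100 ms figures are assumptions picked for
illustration only:

    /* BDP = bandwidth * RTT; the figures below are illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        double bw_bps = 1e9;    /* 1 Gb/s link, assumed */
        double rtt_s  = 0.100;  /* 100 ms RTT, assumed */

        printf("BDP = %.1f MB in flight\n", bw_bps * rtt_s / 8 / 1e6);
        return 0;
    }

Multiply the bandwidth by ten with the RTT stuck at its floor and the
amount of data that has to be kept in flight grows by the same factor.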

> >> None of the proposals we have seen so far is anywhere near being
> >> feasible in such a context.
> >
> >What is your basis for this statement? After all, your argument about
> >technology improvements in the future cuts both ways:
> 
> My basis for this statement is a lot of experience in moving and
> processing data at high speed, and the lack of improvements in
> computers.
> 
> If you look at the technological advances in computers over the last
> ten years, there have been almost none.
> 
> We can add more transistors to a chip, but we can't make them run
> faster.
> 
> What we have seen and what we will see is wider and wider NUMA
> running at the same basic 3-4 GHz clock.
> 
> There are no solid-state physics advances that indicate we will
> see anything else in the next 20 years.

I agree. I predict that we'll see "smarter" processors, but not much
faster ones. What "smarter" means still remains to be defined :-)

> In fact, we might not even see more and more transistors, as "extreme
> ultraviolet lithography" appears to be a lot harder to tame in a
> semi-fab than anybody expected.
> 
> You are right that we see, and will see, a lot of offloading.  I'm
> working with an Ethernet controller now that has close to 1000 pages
> of datasheet, and for many features of the chip the description is
> summary at best.
> 
> But offloading only gets you so far, because setting things
> up takes time.  That's why you don't see very much offloading in
> practice.

I was saying to Mike on Thursday that for the last few years I've been
expecting crypto to be offloaded into NICs at the datagram level. That
would be very cheap and would not carry the processing overhead that
streaming crypto does, probably making datagrams more appealing in the
future.

> To a first approximation, processing a transaction can be done in:
> 
> 	time = a + b * len
> 
> and it is the 'a' constant that gets you in the offloading case.
> 
> We saw this very clearly with crypto offloading, for instance HiFn's
> chips, where the cost of the setup was so high that in many cases
> it was faster to just do the crypto on the CPU instead.
> 
> Right now everybody is scrambling to show that they can handle
> full packet load at 10Gb/s and 40Gb/s.  At 10Gb/s that's 14 million
> packets per second, and it's a serious challenge for most CPUs to
> do non-symbolic work on those packets.
> 
> Doing a single SSL stream at 1Tb/s should not really be a problem
> in hardware.
> 
> Doing 10 million 10kB HTTP transactions in a second is going to be
> a heck of a challenge, if nothing else for the memory footprint.

Hence the need to reduce the per-transaction processing cost.
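
To illustrate with made-up constants (only the shape of the formula
above is PHK's; the values of 'a' and 'b' are mine for the example):

    /* time = a + b * len; a and b are invented for illustration. */
    #include <stdio.h>

    int main(void)
    {
        double a   = 10e-6;   /* 10 us per-transaction setup, assumed */
        double b   = 1e-9;    /* 1 ns per byte, assumed */
        double len = 10e3;    /* 10 kB object */
        double t   = a + b * len;

        printf("total %.1f us, %.0f%% of it spent in setup\n",
               t * 1e6, 100.0 * a / t);
        return 0;
    }

With these assumed constants a 10 kB transaction already spends half
its time in setup, and 10 million of them per second would represent
200 CPU-seconds of work every second, before even counting the memory
footprint mentioned above.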

(...)
> One of the really big problems with HTTP/1.1 in that respect, is the
> lack of a session-concept, which people hack around with cookies.
> 
> HTTP/2.0 should have real sessions, so that HTTP-routers don't have
> to inspect and mangle cookies.

I'd love to see this, but it will never totally eliminate the need to
inspect cookies, since in practice we see several levels of sessions
in application infrastructures; I commonly see at least two layers.
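
For the record, this is roughly the kind of Cookie scanning a router
has to do today for stickiness; the cookie name and values below are
made up:

    /* Naive sketch: pick a stickiness cookie out of the Cookie header.
     * "SRV" and the node name are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *hdr = "lang=en; SRV=node3; theme=dark";
        const char *p = strstr(hdr, "SRV=");

        if (p) {
            p += strlen("SRV=");
            int n = (int)strcspn(p, ";");
            printf("route to %.*s\n", n, p);   /* -> node3 */
        }
        return 0;
    }

A real session field in the framing would spare the router that
parsing, but the application-level sessions underneath would still
carry their own cookies.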

That said, I'm working with Brian Carpenter and Sheng Jiang on a way
to use the IPv6 flow label for this, but the more I think about it,
the more I think it would be useful to have a TCP extension to
transport such application session information (that's out of scope
for this WG).

> We should absolutely _also_ make the protocol faster at the endpoints,
> but the focus there is of a different kind, (Do we really need Date:
> at all ?) and there are other factors than speed in play (security,
> anti-DoS etc.)
> 
> And we should also keep the HTTP/1.1 -> HTTP/2.0 issue in mind, but
> I don't think it is very important, because nobody is going to be
> able to drop HTTP/1.1 service for the next 10-15 years anyway, so
> if our HTTP/2.0 can wrap up an HTTP/1.1 message and pass it through,
> we're practically home.

In fact, what we precisely need is to be able to:
  - convert a 2.0 request into a 1.1 request
  - convert a 1.1 response into a 2.0 response
  - let 1.1 requests pass unmolested

The need to convert 1.1 requests to 2.0 is much less important in my
opinion because most browsers will switch 2.0 on by default.
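
Just to make the three paths explicit, a trivial sketch of the
dispatch; the enum and the strings are placeholders, since the 2.0
wire format is not defined yet:

    /* Placeholder dispatch for the three gateway paths listed above. */
    #include <stdio.h>

    enum proto { HTTP_1_1, HTTP_2_0 };

    static const char *gateway_path(enum proto client, enum proto origin)
    {
        if (client == HTTP_2_0 && origin == HTTP_1_1)
            return "convert 2.0 request to 1.1, 1.1 response to 2.0";
        if (client == HTTP_1_1)
            return "pass the 1.1 request unmolested";
        return "2.0 end to end, nothing to convert";
    }

    int main(void)
    {
        printf("%s\n", gateway_path(HTTP_2_0, HTTP_1_1));
        printf("%s\n", gateway_path(HTTP_1_1, HTTP_1_1));
        return 0;
    }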

Regards,
Willy
