
Re: Adjusting our spec names

From: Poul-Henning Kamp <phk@phk.freebsd.dk>
Date: Sun, 01 Apr 2012 23:55:59 +0000
To: Roberto Peon <grmocg@gmail.com>
cc: Willy Tarreau <w@1wt.eu>, Mark Nottingham <mnot@mnot.net>, "<ietf-http-wg@w3.org> Group" <ietf-http-wg@w3.org>
Message-ID: <17522.1333324559@critter.freebsd.dk>
In message <CAP+FsNdSw8HMTcTmk7Z6VE9yXnQjZtMwGL1iEwQuSGG0-VPKuA@mail.gmail.com>
, Roberto Peon writes:

>> We saw this very clearly with crypto-offloading, for instance HiFns
>> chips, where the cost of the setup was so high that in many cases
>> it was faster to just do the crypto on the CPU instead.
>
>Not having played with Hi/FNs stuff, I don't know where the problems lay,

No screw-up; the HiFn crew were pretty smart people.

It's simply that the round trip from a userland application that
needs crypto, down into the kernel and out to a PCI device, takes a
devastating amount of time relative to the actual crypto operation.
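
A back-of-the-envelope sketch of that trade-off (all numbers here are
hypothetical, chosen only to illustrate the shape of the problem): a
fixed round-trip setup cost means the offload card only wins once the
payload is large enough to amortize it.

```python
# Illustrative cost model: offloading wins only when the payload is
# large enough to amortize the fixed kernel/PCI round-trip cost.
def crypto_time_us(nbytes, setup_us, bytes_per_us):
    """Total time to process nbytes: fixed setup plus per-byte work."""
    return setup_us + nbytes / bytes_per_us

def crossover_bytes(cpu_rate, offload_rate, offload_setup_us):
    """Payload size above which the offload card beats the CPU,
    assuming the CPU pays no setup cost and the card is faster
    per byte (cpu_rate < offload_rate, both in bytes/us)."""
    # setup + n/offload < n/cpu  =>  n > setup / (1/cpu - 1/offload)
    return offload_setup_us / (1.0 / cpu_rate - 1.0 / offload_rate)

# Hypothetical figures: CPU at 100 bytes/us, card at 400 bytes/us,
# but 50 us to go userland -> kernel -> PCI and back.
n = crossover_bytes(cpu_rate=100.0, offload_rate=400.0,
                    offload_setup_us=50.0)
print(round(n))  # 6667: below ~6.7 kB the CPU is simply faster
```

With those (made-up) figures, any buffer under about 6.7 kB is faster
to encrypt on the CPU, which is most HTTP-sized objects.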

Footnote:

	VIA went and did what should be done: they added AES as a
	CPU instruction, exploiting politics the same way ZyXel did
	in the 1990s, when they added 3DES to their modems.

	Rumour has it that Intel and AMD CPU hardware has supported
	crypto instructions for several years, but that the necessary
	microcode-update file is "reserved for certain customers
	only."

What Intel now does with offloading into the ethernet chips is
similar to what IBM did with the 3745 mainframe front-end back in
the 1980s.  The words "desperate" and "futile" were used a lot
about it in the early 1990s.

>> Right now everybody is scrambling to show that they can handle
>> full packet load at 10Gb/s and 40Gb/s.
>
>From what I've seen, the CPUs aren't the bottleneck, it is the architecture
>of the IO subsystems of our OSs and often a lack of multiqueue on the NICs.

Well, as I hinted above, the evolution of the IBM mainframe is very
enlightening here; we see the exact same order of developments on
the PC platform these days:

	More complex and specialized instructions are added
	More parallel execution units rather than faster EUs
	More and larger caches to cover speed disparity EU<-->RAM
	Virtualization to get better utilization of numerous EUs
	EUs move further and further away from I/O units
	I/O Units gain more and more processing power to compensate.

I think this may be a law of nature of sorts, a curse which tangles
up any successful machine architecture.  If you look at the Thumb-2
instruction set, ARM seems to be headed down the same road.

So no, I don't put much trust in processing offload: it is
complicated to implement in both hardware and software, it has high
latency, and the performance gain is only slight.

But it will probably be done anyway.

The most we can do is make the protocol sensible for it.

>To be clear, my opinion is that we can lay the groundwork on which a change
>of semantics is built (e.g. from request->response to more arbitrary
>message<->message or message<->stream mappings), but that attempting to
>define those semantics here and now will make it less likely for the entire
>effort to succeed.

I think the prudent thing would be to design the new
serializations/transports for "message<->message", to be
future-compatible, and to use them only for the "request->response"
subset of that functionality for now.
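
One way to picture that (a sketch only; every name and field here is
hypothetical, not from any draft): a generic frame type whose "kind"
field reserves room for future message types, while today's traffic
uses only the request/response subset.

```python
from dataclasses import dataclass

# Hypothetical framing sketch: the wire format is generic
# message<->message, but current endpoints emit only the
# REQUEST/RESPONSE subset of it.
REQUEST, RESPONSE = 0, 1   # the HTTP/1.x-compatible subset
# kinds 2 and up: reserved for future arbitrary message mappings

@dataclass
class Frame:
    stream_id: int   # pairs a response with its request
    kind: int        # REQUEST, RESPONSE, or a future message kind
    envelope: dict   # plain-text routing fields only
    body: bytes      # opaque, possibly privacy-protected payload

def is_legacy_subset(frame: Frame) -> bool:
    """True if the frame stays within request->response semantics."""
    return frame.kind in (REQUEST, RESPONSE)
```

A 2.0-only intermediary could then route any frame by stream_id and
envelope alone, while a 1.1 gateway would accept only the legacy
subset.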

>> The performance bottleneck is not going to be the endpoints, but rather
>> the points along the path where traffic from many sessions
>> is aggregated:
>>
>>        The telcos outgoing mobile->HTTP proxy
>>        The load-balancer of $bigsite
>>
>
>imho, from the point of view of the client, the load-balancer of $big-site
>or any other reverse-proxy is likely the endpoint as viewed by the client.

Yes, that's what makes it the bottleneck, but in protocol terms the
endpoint is where a request gets replied to with a response.

>> HTTP/2.0 should have real sessions, so that HTTP-routers don't have
>> to inspect and mangle cookies.
>
>Defining sessions may be ok, so long as discrimination against various
>endpoints can be accomplished by intermediaries only when the user has
>given them permission to do so.

I have read draft studies showing that you could tell when people
were typing their passwords over an SSH connection, based on the
timing of the packets.  The timing pattern eliminated about 50% of
the entropy of the typed password, due to the constraints of the
QWERTY layout.
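
To put a number on what "50% of the entropy" costs (the figures below
are hypothetical, picked only to make the arithmetic concrete): for
an 8-character password over a 64-symbol alphabet, halving the
entropy shrinks the attacker's search space by a factor of 2**24.

```python
import math

# Back-of-the-envelope: how much a 50% entropy leak helps an attacker.
# All figures are hypothetical, not from the cited studies.
full_bits = 8 * math.log2(64)        # 48.0 bits in the full password
remaining_bits = full_bits * 0.5     # 24.0 bits left after the leak
shrink_factor = 2 ** (full_bits - remaining_bits)
print(shrink_factor)                 # 16777216.0, i.e. 2**24 fewer guesses
```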

Privacy is a _lot_ harder than most people think.

I think we need to have a realistic view of what level of privacy
can be guaranteed, and I would like to see us move to a model more
like email, where you have a plain-text envelope used for routing
and (optionally) privacy-protected content.

I would argue that for HTTP the request envelope consists of:
	request method (GET/POST...)
	URI (maybe without the query part?)
	Host:
	session identifier (today a cookie)
	length of the transmitted body, if any
For responses the envelope is only:
	status code (200, 503, ...)
	session identifier
	length of the transmitted body, if any

This should make the "HTTP-router" people happy and shield them from
semantic interpretations.
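
As a sketch of those two envelopes in code (field names are
illustrative, not from any draft), note that the only thing an
"HTTP-router" needs to touch is the envelope, never the body:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical rendering of the plain-text envelope proposed above.
@dataclass
class RequestEnvelope:
    method: str                 # GET, POST, ...
    uri: str                    # maybe without the query part
    host: str                   # Host:
    session_id: Optional[str]   # today a cookie
    body_length: Optional[int]  # length of transmitted body, if any

@dataclass
class ResponseEnvelope:
    status: int                 # 200, 503, ...
    session_id: Optional[str]
    body_length: Optional[int]

def route_key(env: RequestEnvelope) -> tuple:
    """Everything a router needs to pick a backend: no cookie
    mangling, no semantic interpretation of the message."""
    return (env.host, env.session_id)
```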

>Perhaps the wg (i.e. we) should do an analysis of motivation and incentives
>for forward/reverse proxy and other intermediaries? Understanding the game
>theory would be helpful for everyone and would inform decisions that are
>more likely to benefit everyone in the long-term.

I second that.  It's quite clear that there are some divisions now
as evident in the "Mandatory TLS" vs. "You got to be kidding!"
factions.  Settling those "ideological" differences before we dive
into bits and bytes can and will save us time and conflicts down
the road.

I think at the end of the day, it's rather simple, though:

1) Clients should know if and when they have end to end privacy.

2) Clients should know when they have positive authentication
   of the server. (Can the client trust the server)

3) Servers should know when they have positive authentication
   of the client. (Can the server trust the client)

4) Servers should have access to means of client identification.
   (eg: For load-balancing, so all requests from the client end up
   on the same server)

5) Proxies should be able to enforce content-inspection, and
   possibly content-modification, but users should be made clearly
   aware that this happens (ref: 1).  In this case 2 & 3 apply
   to the client-proxy relationship.

6) Seen from the server-side, proxies come in "trustworthy" (server-side
   reverse proxy/http-router), "suspect" (3rd party CDN) and
   "untrustworthy" (forward/client side), with respect to their
   willingness to follow instructions from the server.
   (For instance about object validity lifetime or cacheability.)
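
The six points above can be summarized as a data structure (a sketch
only; every name here is hypothetical, not from any spec):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ProxyTrust(Enum):  # point 6, as seen from the server side
    TRUSTWORTHY = "server-side reverse proxy / http-router"
    SUSPECT = "3rd-party CDN"
    UNTRUSTWORTHY = "forward / client-side proxy"

@dataclass
class ConnectionProperties:
    end_to_end_privacy: bool          # 1) client knows if it's private
    server_authenticated: bool        # 2) client can trust the server
    client_authenticated: bool        # 3) server can trust the client
    client_identifier: Optional[str]  # 4) e.g. load-balancer affinity
    inspecting_proxy: bool            # 5) content inspection in path

def privacy_warning_needed(p: ConnectionProperties) -> bool:
    """Point 5: the user must be clearly told when a proxy
    inspects (and possibly modifies) the content."""
    return p.inspecting_proxy
```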

>It doesn't matter if my site and LB can do 1Tb/s if AT&T decides to
>throttle my site [...]

The extended OSI model indicates that this is a layer 8 (politics)
problem.

Trying to solve it at lower layers doesn't work and should
not be attempted.

>I think it is important for the transition period, where people deploy
>reverse proxies or loadbalancers which speak HTTP/2.0 to the client over
>the large-RTT net segment and then speak HTTP/1.1 to their backends until
>they eventually upgrade.

That's one of the primary reasons why I want to keep HTTP/2.0
request->response for now.

It's important to realize that the 1.1 <-> 2.0 conversion does not
have to be 100% perfect in all aspects, it just has to be semantically
perfect, which is a much lower bar to clear.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
Received on Sunday, 1 April 2012 23:56:26 GMT
