Re: HTTPbis spec size, was: Rechartering HTTPbis

In message <4F23B745.3060002@gmx.de>, Julian Reschke writes:
>On 2012-01-28 09:41, Poul-Henning Kamp wrote:

>If you feel that any of what was added isn't needed then please follow up.

Ok, yes, let me follow up, which ironically is going to take
some length, because I have probably been a bit too brief
and cursory during a stressful work-week.

As I said earlier:  I consider HTTP/1.1 a lost cause, and I think
the time spent trying to un-lose it has been wasted, except for
any lessons hopefully learned about what not to do.

One of the main nightmares, if not THE fundamental mistake, of
HTTP/1.1 is the intermingling of transport and content.

The OSI protocols sucked, but there are some very tangible benefits
in sound layering which we hopefully should have learned from them.

The transport/content mixup is so fundamental in HTTP/1.1 that
most people don't even realize it, so here is a little quiz:

Which three tokens of this HTTP request are transport information?

	>>> GET / HTTP/1.1
	>>> Host: www.example.com
	>>> User-Agent: fetch libfetch/2.0
	>>> Connection: close

Answer: "HTTP/1.1", "www.example.com" and "close".

Everything else is content, including "GET".

The mixup happens because in HTTP/1.1 "GET" is also used to define
the transport semantics, for instance the presence/absence of object
bodies in the transaction.
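To make the split concrete, here is a small sketch that pulls the
transport tokens out of that exact request.  The classification
rules in it are mine, purely for illustration; no RFC defines them:

```python
# Hypothetical illustration: separate transport information from
# content in the example request.  The rules below (protocol
# version, Host, Connection are transport; all else is content)
# are my own, not any spec's.
REQUEST = (
    "GET / HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: fetch libfetch/2.0\r\n"
    "Connection: close\r\n"
)

def transport_tokens(request):
    tokens = []
    lines = request.split("\r\n")
    # The protocol version on the request line is transport.
    tokens.append(lines[0].split()[-1])
    for line in lines[1:]:
        if not line:
            continue
        name, _, value = line.partition(": ")
        # Host routes the connection; Connection manages it.
        if name in ("Host", "Connection"):
            tokens.append(value)
    return tokens

print(transport_tokens(REQUEST))
# → ['HTTP/1.1', 'www.example.com', 'close']
```

Everything the function skips, including "GET", is content.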

That's just plain wrong: If you intend to send a body, there
should be a clear, dedicated transport indication of it.

HTTP is short for Hyper-Text TRANSPORT Protocol, and while we cannot
do anything about the "Hyper-Text" (except be happy Tim didn't
come up with "Turbo-Text Transport Protocol" ten years earlier;
that would have been awful :-), we can and should respect the
TRANSPORT bit.

HTTP/2.0 should *only* be a transport protocol.

What the content is, and what its metadata makes it mean, is not a
transport issue and should be standardized separately from the
semantics of moving it across the net.

Separating transport from content does not have anything to do with,
nor does it affect or threaten, the existence of caches, malware
scanners or other value-add HTTP devices.  Those roles operate at the
content level, but like everybody else in the HTTP world, they have
to move their results with a transport protocol.

So cut to the bone, what HTTP/2.0 should be able to do is:

Need to have:
	1. Establish  connections
	2. Transmit unadulterated objects, each consisting of two
	   transparent arrays of bytes: metadata + object
	3. Detect transmission failures.
	4. Be DoS resistant
Nice to have:
	5. Detect if HTTP/1.1 peer is HTTP/2.0 capable and upgrade.
	6. Multiplex multiple objects
	7. Pipeline multiple requests
	8. Optimize transmission wrt. bandwidth and latency. ("compression")
Off limits:
	9. Non-strict client-server model (security risk)
	10. Congestion control (That's TCP's job)
	11. Adaptation or conversion of objects (presentation, not transport)
	12. Maintain state across different connections (session, not transport)
	13. Authentication (session, not transport)
	14. Privacy/Security/Integrity (HTTPS/2.0, not HTTP/2.0)
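Point 2 is essentially a framing problem.  A minimal sketch of such a
frame, assuming nothing more than two length-prefixed opaque byte
arrays (the field widths are my own invention, not a proposal):

```python
import struct

# Hypothetical frame layout (field widths are my own invention):
# 4-byte big-endian metadata length, 4-byte big-endian object
# length, followed by the two opaque byte arrays.  The transport
# never looks inside either array.

def encode_frame(metadata, obj):
    return struct.pack("!II", len(metadata), len(obj)) + metadata + obj

def decode_frame(frame):
    mlen, olen = struct.unpack_from("!II", frame, 0)
    metadata = frame[8:8 + mlen]
    obj = frame[8 + mlen:8 + mlen + olen]
    return metadata, obj
```

The point of the sketch is the opacity: nothing in the framing cares
what the metadata or object bytes mean, which is exactly the
transport/content separation argued for above.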

Before anybody says "Heck I can do that in 3 pages", let me point
out that there are important and difficult technical issues to sort
out.

For starters, being DoS resistant (#4) is a lot harder than most
people think in a client-server scenario.

HTTP/1.1 interop (#5) needs some serious thought, because we don't
want to spend an RTT on it.  The easiest way is probably to make
the first transaction on a new TCP connection HTTP/1.1 by default,
unless we already know that destination to be HTTP/2.0-capable, in
which case the initial HTTP/1.1 transaction can be a costless
"UPGRADE".

We should not break the strict client-server model (#9), that would
just be asking for security holes and implementation complexity.

The "server push" people desire, can be simulated inside a strict
client-server model with multiplexing (#6) and transactions that
just never seem to end.
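A sketch of that simulation, assuming a frame-based multiplexer; the
stream numbering and the round-robin scheduler are entirely
hypothetical:

```python
from itertools import islice

# Hypothetical sketch: "server push" as an ordinary multiplexed
# response stream that simply never terminates.  Each generator
# models one stream's sequence of frames.

def finite_stream(stream_id, chunks):
    for chunk in chunks:
        yield (stream_id, chunk)

def endless_stream(stream_id):
    n = 0
    while True:            # the transaction "never seems to end"
        yield (stream_id, "push-%d" % n)
        n += 1

def multiplex(*streams):
    # Round-robin interleave frames from all open streams,
    # dropping a stream once it is exhausted.
    streams = list(streams)
    while streams:
        for s in streams[:]:
            try:
                yield next(s)
            except StopIteration:
                streams.remove(s)

frames = list(islice(multiplex(finite_stream(1, ["a", "b"]),
                               endless_stream(2)), 6))
```

The client still initiated everything; the server merely keeps an
already-open response stream flowing, so the strict client-server
model is never broken.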

Providing security/integrity/privacy (#14) should go into a parallel
HTTPS/2.0 standard, which SHALL use the same fundamental "move
object + metadata transparently" model as HTTP/2.0, so that none of
the RFCs about how to understand the moved objects have to care
about which transport is used.  The major trouble there is deciding
whether one should be able to route/load-balance HTTPS traffic one
way or the other.

If done right, along the lines above, HTTP/2.0 will be a big step
forward in simplicity, efficiency and security over HTTP/1.1's
transport part, and the speedup alone will give people plenty of
reason to upgrade.

SPDY offers a very credible bid on many of these points, but goes
much further than a transport protocol should in many other areas.

Therefore I believe that rubber-stamping SPDY into HTTP/2.0 would
perpetuate the most fundamental mistakes HTTP/1.1 suffers from.

For instance, SPDY imposes structure on the object metadata (2.6.7
HEADERS) and needlessly reimplements much of TCP's connection
management for each of the multiplexed "streams".

But if SPDY can do all that in 44 pages, I am all the more
convinced that a credible HTTP/2.0 can be done in 29 pages.

So sharpen your pencils, and let's see some good proposals
for a Hyper-Text TRANSPORT Protocol 2.0.


-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Saturday, 28 January 2012 10:29:12 UTC