
Re: Header Size? Was: Our Schedule

From: Jason Greene <jason.greene@redhat.com>
Date: Sun, 1 Jun 2014 15:13:33 -0500
Cc: "Jason T. Greene" <jgreene@redhat.com>, Roberto Peon <grmocg@gmail.com>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <491F2602-2B35-4F6E-BE12-B4044A261DF8@redhat.com>
To: Poul-Henning Kamp <phk@phk.freebsd.dk>

On Jun 1, 2014, at 2:33 PM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:

> In message <7F1A1E55-55D4-427D-BB1B-67CE4424BE26@redhat.com>, Jason Greene writes:
> 
>>> The payload consists of metadata and objects
>>> 
>>> Metadata is the rest of the HTTP headers, which are not needed for
>>> transporting HTTP, but only play a role in semantic interpretation:
>>> Content-Type, Content-Encoding etc. etc.
>>> 
>>> Metadata and object can be compressed or encrypted how ever you like.
>> 
>> It still has to be limited to per-frame compression, because shared 
>> compression state means that a proxy must process and convert all 
>> metadata. 
> 
> It could be per-transaction compression, but that amounts to pretty
> much the same thing if we do our job well.
> 
> The main reason to have compression is to squeeze cookies.
> 
> I still think it is a much better strategy to do away with cookies and
> all their problems (privacy, legal etc.) and skip compression with
> all its problems (DoS, state etc.) and get smaller headers than we
> would have with cookies, compression and all their problems.
> 
> Sanitizing User-Agent: would be the next big gain (Again: privacy
> problems and all that.)
> 
> Doing a sensible static enumeration of all the RFC-headers could shave
> some bytes too.

Right, the static substitution elements of HPACK would likely improve on HTTP/1.1 decoding speed.
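(Aside, not from the thread: the gain comes from the fact that HPACK's static table lets an encoder replace an entire common header field with a single indexed byte. A minimal sketch below, using a hand-copied subset of the draft's static table; `encode_indexed` is an illustrative name, not a real API.)

```python
# Sketch: HPACK "indexed header field" representation using a few
# entries hand-copied from the draft's static table. The wire form
# for a small index is just that index with the high bit set.
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":method", "POST"): 3,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
    (":status", "200"): 8,
}

def encode_indexed(name, value):
    """Return the one-byte indexed form if the field is in the static table."""
    idx = STATIC_TABLE.get((name, value))
    if idx is None or idx > 127:
        return None  # would need a literal (or prefix-integer) encoding instead
    return bytes([0x80 | idx])

# ":method: GET" collapses to a single byte, with no shared mutable state:
assert encode_indexed(":method", "GET") == b"\x82"
```

Because the table is static, a proxy can decode these without tracking any per-connection compression context, which is the throughput point being made.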

> 
>> As a server and proxy implementer I would prefer modest gains in packet 
>> size that didn't sacrifice throughput already achievable in http 1.1, 
>> and IMO the static tables do exactly that. I can certainly appreciate 
>> that those implementing clients don't agree with this perspective, and 
>> would like more.
> 
> As a proxy implementer, I think an HTTP/2.0 that cannot be processed,
> at the very least as a transparent proxy splitting the stream on
> Host: header, at today's COTS line-rates on today's COTS servers would
> be an utter embarrassment.
> 
> 8-cores, 40gbit/s -> 4x10gbit/s anyone ?
> 
> What *is* the highest rate anybody has processed HTTP/2.0 according
> to the current draft anyway ?
> 
> I don't recall seeing anybody brag about that yet ?

It might be possible to keep the stateful delta operations and still achieve HTTP/1.1 throughput levels if the receiver, and not just the sender, could negotiate the dynamic table size down to 0.
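(A sketch of what that negotiation could look like on the wire, assuming the receiver may advertise the limit via a SETTINGS frame with the SETTINGS_HEADER_TABLE_SIZE identifier (0x1), a value of 0 disabling the dynamic table entirely. The 9-byte frame header layout follows the later drafts; `settings_frame` is an illustrative helper, not a real API.)

```python
import struct

SETTINGS_HEADER_TABLE_SIZE = 0x1  # setting identifier from the HTTP/2 draft

def settings_frame(settings):
    """Build a SETTINGS frame: 24-bit length, type 0x4, flags 0, stream 0,
    followed by (16-bit identifier, 32-bit value) pairs."""
    payload = b"".join(struct.pack("!HI", ident, value)
                       for ident, value in settings)
    length = len(payload)
    header = struct.pack("!BHBBI", length >> 16, length & 0xFFFF, 0x4, 0x0, 0)
    return header + payload

# Receiver advertises a dynamic table size of 0: the peer's encoder may
# then only use static-table and literal representations.
frame = settings_frame([(SETTINGS_HEADER_TABLE_SIZE, 0)])
assert len(frame) == 9 + 6  # 9-byte frame header + one 6-byte setting
```

With the table pinned at 0, a decoder never has to maintain or evict dynamic entries, which is what would let an intermediary keep HTTP/1.1-class throughput.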

> 
> 
> -- 
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk@FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe    
> Never attribute to malice what can adequately be explained by incompetence.
> 

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
Received on Sunday, 1 June 2014 20:14:06 UTC
