Re: Getting (Officially) Started on HTTP/2.0

On Tue, Oct 2, 2012 at 6:54 PM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 03.10.2012 05:48, Mark Nottingham wrote:
>
>> As you may have seen, our re-charter has been approved by the IESG.
>>
>> If you aren't familiar with the final text, please take a moment and
>> carefully read it:
>>   http://datatracker.ietf.org/wg/httpbis/charter/
>>
>> First and foremost, we will continue working on the revision of the
>> HTTP/1.1 specification. Roy has recently finished his work on p1, and
>> should finish p2 soon. As such, we'll be entering WGLC on these
>> documents and prioritising any discussion on them until finished. Stay
>> tuned for details.
>>
>> Work on HTTP/2.0 will start by creating draft-ietf-httpbis-http2-00,
>> based upon draft-mbelshe-httpbis-spdy-00. That draft will list Mike
>> Belshe and Roberto Peon as authors, to acknowledge their contribution.
>>
>> However, we will have a separate editorial team in charge of the
>> Working Group's drafts. After extensive discussions and consultation
>> with our AD, I've asked Julian Reschke, Alexey Melnikov and Martin
>> Thomson to serve as editors of the HTTP/2.0 draft.
>>
>> Concurrently, we should start gathering issues against the draft for
>> discussion; just as with the previous work, I'd like to structure as
>> much of our discussion as possible around concretely identified
>> issues.
>>
>> I'll kick that process off by nominating obvious discussion points
>> like the upgrade mechanism, header compression, intermediaries, and
>> server push, but of course anyone can raise a new issue, using the
>> guidelines on our wiki page
>> <http://trac.tools.ietf.org/wg/httpbis/trac/wiki>.
>>
>> There are two things that we need to settle fairly quickly:
>>
>> 1. How we identify the protocol on the wire. It's likely that we're
>> going to have a few different revisions of what we specify implemented
>> as we move along, and I want us to be crystal-clear about how that
>> will be managed, so we're not stuck with interop problems down the
>> road.
>>
>> 2. What requirements we have for negotiation of HTTP2 within TLS. As
>> you should have seen, that portion of the work has been given to the
>> TLS Working Group, and we need to give them some guidance about what
>> we need.
>>
>>
> 3) The explicit session setup/teardown frames add unnecessarily verbose
> state and bandwidth overhead, potentially adding an extra RTT at times.
> There is no need for request and session setup to be different frames
> unless you want to throw away the stateless nature of HTTP. We can have
> A-to-B sessions[1] for free with per-hop request/flow IDs and connection
> pinning support in the implementations. We can have a client-to-origin
> session with negotiated headers.
>
> Here the client assigns a connection-specific ID and the server uses that
> to construct any session ID needed at its end. This was discussed by the
> WG earlier. It is similar to (but not the same as) what is described in
> the network-friendly draft. WG discussion, and development since the
> drafts were submitted, has led to some significant improvements on this
> over what network-friendly and WebSockets use, and has shown that it
> allows for seamless use of HTTP/2 as stateless or stateful flows, with
> both server- and client-push as optional extensions.
> E.g. we can write server-push as a separate (new) part-N section draft,
> with the core pt1-2 HTTP/2 draft(s) containing a framing structure and
> flow control which are able to support server-push when it is needed,
> without breaking the HTTP/1 semantics requirements.
>
> I would like to propose an updated frame layout pulling in the best
> framing details of the WebSockets, network-friendly, and speed+mobility
> proposals. This is almost ready for WG discussion, given a few more days
> to work its wording into the draft-mbelshe-httpbis-spdy-00 text. It also
> proposes a few details for your point (1) above.
>
>
> 4) I'll stick my neck out and voice it. Cross-request LZ compression needs
> to go.
>
> The other drafts proposed a few alternatives there, such as per-header
> differential add/replace/remove flags.
> IIRC Roberto was working on something there as well?


Yep-- what I've been doing is whole-key or whole-value delta-encoding with
static Huffman coding, with an LRU of key-value pairs. A set of headers is
thus simply a set of references to the items in the LRU.
The set of operations is:
  add a new key-value line into the LRU by specifying a new key-value
      this looks like:  {opcode: KVStore, string key, string val}.
  add a new key-value line into the LRU by referencing a previous
key-value, copying the key from it and adding the specified new value
      this looks like:  {opcode: Mutate, int lru_index, string val}.
  toggle visibility for a particular LRU entry for a particular header set
      this looks like:  {opcode: Toggle, int lru_index}.
  toggle visibility for a contiguous range of LRU entries for a particular
header set
      this looks like:  {opcode: Toggle, int lru_index_start, int
lru_index_end}.

Note that the actual format of the operations isn't exactly like what I'm
describing above; I'm just trying to indicate generally what is involved.
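
To make that a bit more concrete, here is a rough decoder-side sketch in
C++. This is illustrative only, not the actual implementation: the names,
the separate ToggleRange opcode label, and the struct layout are made up
for the example, and the static Huffman coding, wire format, and LRU
eviction are all left out.

#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// One decoded operation. Which fields are meaningful depends on the opcode.
enum class Op { KVStore, Mutate, Toggle, ToggleRange };

struct Delta {
  Op op;
  std::string key;            // KVStore only
  std::string val;            // KVStore and Mutate
  std::size_t index = 0;      // Mutate, Toggle, ToggleRange (start)
  std::size_t index_end = 0;  // ToggleRange only
};

class HeaderState {
 public:
  // Apply one delta to the shared LRU and the visibility flags that
  // define the current header set.
  void Apply(const Delta& d) {
    switch (d.op) {
      case Op::KVStore:  // brand-new key-value line
        entries_.push_back({d.key, d.val, true});
        break;
      case Op::Mutate:   // copy the key of an earlier entry, new value
        entries_.push_back({entries_[d.index].key, d.val, true});
        break;
      case Op::Toggle:   // flip one entry in or out of the header set
        entries_[d.index].visible = !entries_[d.index].visible;
        break;
      case Op::ToggleRange:  // flip a contiguous range of entries
        for (std::size_t i = d.index; i <= d.index_end; ++i)
          entries_[i].visible = !entries_[i].visible;
        break;
    }
  }

  // A set of headers is simply the visible entries in the LRU.
  std::vector<std::pair<std::string, std::string>> Headers() const {
    std::vector<std::pair<std::string, std::string>> out;
    for (const auto& e : entries_)
      if (e.visible) out.emplace_back(e.key, e.val);
    return out;
  }

 private:
  struct Entry { std::string key, val; bool visible; };
  std::vector<Entry> entries_;  // the shared LRU (eviction elided)
};

int main() {
  HeaderState s;
  s.Apply({Op::KVStore, ":method", "GET"});
  s.Apply({Op::KVStore, ":path", "/index.html"});
  // Next request: same method, new path.
  s.Apply({Op::Toggle, "", "", 1});            // hide the old :path
  s.Apply({Op::Mutate, "", "/style.css", 1});  // reuse its key
  for (const auto& kv : s.Headers())
    std::cout << kv.first << ": " << kv.second << "\n";
}

After those four deltas the visible set is {:method: GET, :path:
/style.css}, with the old :path entry still sitting in the LRU for later
reuse; that incremental amendment is where the decode-side savings come
from.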

The resulting compression is a bit worse than gzip (with a large window
size) on my current test corpus, but compares pretty well with gzip as used
in the Chrome implementation of SPDY.
It has CPU advantages in that the Huffman encoding is static, so for
proxies no re-encoding is necessary. Additionally, much or all of the
decompressor state can be shared with a compressor (if proxying, for
instance).
Finally, I expect (though I have yet to prove it, as I'm still doing the
C++ implementation) that the compression is more CPU-efficient than gzip.
Decompression should be similar, but much of the time you need not
reconstitute an entire set of headers: since we're sending deltas anyway,
you simply amend your state based on what changed, and thus become more
efficient there as well.

If clients/servers were a bit more naive in terms of when they
added/removed headers, the delta-coding would be more efficient, and it'd
approach or exceed gzip compression... at least I think so :)
As far as I (or, thus far, anyone with whom I've spoken) can tell, the
approach here does not allow probing of the compression context, and is
thus robust in the face of known attacks.

Anyway, that is what I've been working on.
-=R



>> Following that, I suspect it'll be most useful to work on the upgrade
>> mechanism (which will also help with #1 above). Patrick sent out what
>> I think most people agree is a good starting point for that discussion
>> here: <http://www.w3.org/mid/1345470312.2877.55.camel@ds9>.
>>
>> We'll start these discussions soon, using the Atlanta meeting as a
>> checkpoint for the work. If it's going well by then (i.e., we have a
>> good set of issues and some healthy discussion, ideally with some data
>> starting to emerge), I'd expect us to schedule an interim meeting
>> sometime early next year, to have more substantial discussion.
>>
>> More details to follow. Thanks to everybody for helping get us this
>> far, as well as to Martin, Alexey and Julian for volunteering their
>> time.
>>
>> Regards,
>>
>> --
>> Mark Nottingham
>> http://www.mnot.net/
>>
>
> AYJ

Received on Wednesday, 3 October 2012 07:15:46 UTC