
Re: #541: CONTINUATION

From: Poul-Henning Kamp <phk@phk.freebsd.dk>
Date: Tue, 08 Jul 2014 15:33:51 +0000
To: Roberto Peon <grmocg@gmail.com>
cc: Jason Greene <jason.greene@redhat.com>, Johnny Graettinger <jgraettinger@chromium.org>, Mike Bishop <Michael.Bishop@microsoft.com>, Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <81660.1404833631@critter.freebsd.dk>
In message <CAP+FsNfLpwxpMg2kKqw3GyFmJqqm2BB_UyKhFkKc39aGS5kM_A@mail.gmail.com>, Roberto Peon writes:

Thanks for giving us a taxonomy of issues, Roberto; even though
some of them tangle together, it does add structure.


>1) declaring a limit on the compressed header size

I would s/a/the/ because without such a limit there is no DoS
resistance, and all implementations I know of have such a limit,
one way or another.

The question is whether we tell peers only when they exceed it,
or also declare it up front as an interop parameter.

I think there is value in declaring it; it would make some things
easier, for instance server-side proxies could autoconfigure to
their backends.

The argument that it allows DoS attacks to autotune for maximum effect
is not convincing to me, since the main opening in the current draft
is that you don't know *when* the headers will be done, rather than
what size they'll end up having.

>2) headers get buffered at sender until complete and sent with declared length

This is crucial for implementation efficiency at receivers, be it
proxies or servers.  Knowing what comes makes memory allocation
much more efficient and it also informs a DoS policy with just about
the most crucial bit of data.

It could also allow opportunistic load-balancers to make "have session
cookie/does not have session cookie" decisions entirely on the length
field.

Implicit here is that by pushing this responsibility on the client,
the server is free to assume that once we have the length, the rest
will follow shortly, and DoS-mitigating timeouts can be on the order
of RTTs.
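As a sketch of what a receiver gains from a declared length (all names
here are mine and hypothetical, not from any draft): the length permits
one exact-size allocation and an immediate policy decision before a
single header byte has been read.

```python
# Hypothetical receiver logic: with the header-set length declared up
# front, the buffer is allocated once at exact size, and an oversized
# header set is rejected before any bytes are buffered at all.
# MAX_HEADER_BYTES and start_header_set are illustrative names.

MAX_HEADER_BYTES = 16 * 1024  # this receiver's declared limit


def start_header_set(declared_length: int) -> bytearray:
    """Reject an oversized header set immediately; otherwise allocate once."""
    if declared_length > MAX_HEADER_BYTES:
        # DoS policy decision made before buffering anything
        raise ValueError("header set exceeds declared limit, reset stream")
    return bytearray(declared_length)  # single allocation, exact size
```

Once the length has been accepted, the "rest will follow shortly"
assumption lets the timeout for filling that buffer be a few RTTs.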

I have still not seen a real-world or strawman description of
where "streaming headers" would be useful.

Yet you consistently defend this functionality as important
to you.

Please tell us what or how you would use it so we can gauge the
relevancy of this feature.

>3) declaring a maximum accepted frame size for headers or data frames or perhaps all frames.

The requirement to do so follows directly from abandoning 16 KB as the
hard and universal upper size.

>4) changing the base framing format to have a larger, but consistent size field (e.g. from 16 bits to 24).

NB: in draft -13 the field is only 14 bits

16 bits would certainly allow a much larger fraction of present day
HTTP requests to fit in a single DATA frame, but given that display
quality continually increases, that will erode rather rapidly as 4K
displays penetrate the market.

I'm not sure if it has been broken, but the last single-TCP land-speed-record
I remember was 2.5 Gbit/sec, which gives:

	At 14b length = 19000 frames/sec
	At 16b length =  4800 frames/sec
	At 24b length =    19 frames/sec
	At 31b length =  0.15 frames/sec
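For reference, the arithmetic behind that table, under the assumption
that every frame carries the maximum payload its length field allows:

```python
# Minimum frame-parsing rate needed to saturate a 2.5 Gbit/sec single-TCP
# connection, assuming maximal frames: a wider length field means fewer,
# larger frames and therefore less per-frame processing overhead.

RATE_BPS = 2.5e9  # the 2.5 Gbit/sec land-speed-record figure


def min_frames_per_sec(length_bits: int) -> float:
    max_payload = 2 ** length_bits       # bytes in a maximal frame
    return RATE_BPS / 8 / max_payload    # frames needed per second
```

The frame header itself is ignored here; at these payload sizes it
changes the numbers by well under one percent.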

I think we should go with R1+31b for the future's sake.

I can live with R8+24b as a more conservative starting point,
provided the 8 reserved bits are not poached for other purposes
right away.

>5) changing the base framing format to have a variable length size field

I *really* don't want this, unless there are no other options.
Implementations with serious needs for speed do not want a
complex cache-busting operation at this point in their processing.
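To illustrate the cost: a generic LEB128-style varint decoder (the
usual construction, not anything actually proposed for HTTP/2) needs a
data-dependent branch per byte before the parser even knows where the
payload starts, whereas a fixed-width field is one aligned read.

```python
# Generic varint decoding, continuation bit in the high bit of each
# byte: the loop cannot know the field's width until it has inspected
# every byte, so the hot parsing path gains a branch per byte.

def decode_varint(buf: bytes, pos: int = 0) -> tuple[int, int]:
    """Return (value, new_pos) for a little-endian base-128 varint."""
    value, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:      # high bit clear: this was the last byte
            return value, pos
        shift += 7
```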

>6) headers transmitted using multiple frames (i.e. continuations)

This one depends a LOT on the semantics of the compression, and on
the outer limits constraining possible behaviour.

If we talk about sending multiple smaller frames freight-train
style, i.e. adjacent in bytes *and* in time, I could live with it.

But all we would have done in that case is create multipart jumbo
frames, with higher processing overhead and more failure modes; their
only conceivable benefit is that an implementation can use a
smaller fixed-size transmit or receive buffer -- which is really
pointless, given that TCP is byte-serial, not frame-serial.

If we accept 2), 6) becomes pointless almost by definition.

If we are talking about anything else, including the current draft
-13, it becomes a DoS concern and a HOL-blocking-prone mechanism,
and it introduces a host of corner cases, failure modes and needs for
timeouts because of the shared-compression-state lock-out effect.

There are many ways the current draft -13 can be improved, specifying
limits, timeouts, dropping compression shared state and so on.

But all of these workarounds are far more complex and prone to
interop trouble than allowing larger frames and putting the entire
header set in one frame.

>7) moving the flags on continuations to the last frame, possibly renaming
>those/changing the opcodes

	Begin at the beginning
	and go on until you come to the end.
	Then stop.
		-- Alice in Wonderland

If we did so, there would be no need for the CONTINUATION frame type;
you could just send HEADERS or PUSH_PROMISE frames until you're done
(as indicated by a flag).

This would make both the specification and implementations simpler,
but it would not resolve the DoS opportunity of headers being
unconstrained in time and space, nor the multiplex restrictions
imposed by the shared compression state.

>8) nonblocking headers

I'm not sure I understand what you are referring to here.  Is this
where only the first frame of the header set is compressed using
shared state and the rest are not?

If so:  I don't like it.  There are simpler and saner solutions.

>9) flow control for metadata (i.e. headers).

I thought that the connection window was intended to allow balancing
multiple different HTTP/2 connections against each other on a
limited-bandwidth link.  A client concern, mostly.

I would make all frames subject to the connection window; that way
no future extension to HTTP/2 has to address this issue from the
start, and it will be solved once and for all.

(A related question:  If I have a window of 8 bytes, am I allowed
to send the frame header, or do I need to wait until the window is
big enough to contain the entire frame before I can send any of
it?  I presume it must be the former, since the latter could lead
to deadlock?)

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
Received on Tuesday, 8 July 2014 15:34:18 UTC
