Re: Interleaving #481 (was Re: Limiting header block size)

Martin,

The DoS vector is also against the server.  Allowing arbitrarily large
headers is an unconstrained memory commitment that each server has to make
when accepting a new stream.
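
To make the commitment concrete, here is a sketch (hypothetical names, not
our actual Jetty code) of the guard every receiver ends up writing, with a
limit the peer currently has no way to discover:

    import java.io.ByteArrayOutputStream;
    import java.nio.ByteBuffer;

    class HeaderBlockAccumulator
    {
        // Locally chosen, undocumented, undiscoverable by the peer.
        private static final int MAX_HEADER_BLOCK = 8 * 1024;

        private final ByteArrayOutputStream block = new ByteArrayOutputStream();

        // Called for the fragment in HEADERS and in each CONTINUATION frame.
        void accumulate(ByteBuffer fragment)
        {
            if (block.size() + fragment.remaining() > MAX_HEADER_BLOCK)
                throw new IllegalStateException("header block too large"); // connection error in practice
            byte[] bytes = new byte[fragment.remaining()];
            fragment.get(bytes);
            block.write(bytes, 0, bytes.length);
        }
    }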

Sure, servers can apply an arbitrary limit, but that leaves clients
uncertain about what can or can't be sent.  Servers are also vulnerable to
market forces, as client vendors may choose to allow/encourage large
headers, as happened with the two-connection limit.  I truly do think this
is a network neutrality issue, as growth in acceptable header size will
effectively prevent resource-constrained operators from terminating large
numbers of connections.

I also do not think you can so easily dismiss the intermediary issue as
simply a problem for intermediaries to solve.  It may be very difficult to
detect a bad actor before a DoS occurs.  Consider a load balancer in
front of an auction site, which will typically receive a burst of bids just
before the closing time.  A bad actor can be a perfectly reasonable client
until a few seconds before the auction closes, at which time they suddenly
send several incomplete header blocks.  Such an attack will effectively
lock out a substantial number of clients over a few vital seconds - even
those with established connections.  The intermediary will not be able to
identify the bad actors until it is too late.  Note that this might not
even be malicious, but could be caused by network failure (assuming that
continuations are actually ever routinely needed).
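
The reason a few incomplete header blocks are so effective is visible in
the receive loop the current draft forces on an implementation; a sketch
(hypothetical types, not our Jetty code):

    class Receiver
    {
        enum Type { DATA, HEADERS, CONTINUATION /* ... */ }

        static class Frame
        {
            Type type;
            int streamId;
            boolean endHeaders;
        }

        // Stream whose header block is still open, or -1 if none.
        private int continuationStream = -1;

        void onFrame(Frame frame)
        {
            if (continuationStream >= 0)
            {
                // The ONLY legal frame on the whole connection is now a
                // CONTINUATION for this one stream; anything else is a
                // connection error.  One stalled sender blocks everyone.
                if (frame.type != Type.CONTINUATION || frame.streamId != continuationStream)
                    throw new IllegalStateException("PROTOCOL_ERROR");
                if (frame.endHeaders)
                    continuationStream = -1; // connection unblocked
                return;
            }
            if (frame.type == Type.HEADERS && !frame.endHeaders)
                continuationStream = frame.streamId; // connection blocked
            // ... normal multiplexed dispatch of DATA etc. ...
        }
    }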

One solution would be for load balancers to always reject streams with
continuation frames.  This would work perfectly well for 99.9% of traffic.
So one may ask why continuation frames are in the spec at all?
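
Sketched with the same hypothetical types as above, that policy is
essentially a one-liner, which rather makes the point:

    // Refuse any header block that does not fit in a single HEADERS
    // frame, detected before a CONTINUATION ever arrives.  Crude, but
    // it protects the 99.9% at the cost of the 0.1%.
    void onFrame(Frame frame)
    {
        if (frame.type == Type.HEADERS && !frame.endHeaders)
            throw new IllegalStateException("continuations not accepted");
        // ... dispatch as before ...
    }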

I believe that we should be able to apply a known limit to header sizes,
so that applications can be written knowing they will pass intermediaries
and be acceptable to servers.
An ecosystem where we say that unlimited headers are allowed, but then
apply arbitrary, undocumented and undiscoverable limits, is a difficult
space in which to use metadata effectively.
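
Concretely, a discoverable limit could be as simple as a settings value.
A sketch, stressing that this setting identifier is invented here purely
for illustration and is not in the current draft:

    import java.util.HashMap;
    import java.util.Map;

    class HeaderLimit
    {
        // HYPOTHETICAL setting identifier - nothing like this exists
        // in the current draft.
        static final int SETTINGS_MAX_HEADER_BLOCK_SIZE = 0xF000;

        Map<Integer, Integer> initialSettings()
        {
            Map<Integer, Integer> settings = new HashMap<Integer, Integer>();
            settings.put(SETTINGS_MAX_HEADER_BLOCK_SIZE, 8 * 1024);
            return settings;
        }

        // Sender side: fail fast and locally, instead of discovering an
        // undocumented limit as a reset from some intermediary.
        void sendHeaders(byte[] encodedBlock, int peerLimit)
        {
            if (encodedBlock.length > peerLimit)
                throw new IllegalArgumentException("header block exceeds advertised limit");
            // ... emit HEADERS ...
        }
    }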

But failing that, interleaving DATA frames from existing streams would at
least partially address the attack I described above.
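
On the sender side that is a small change; a sketch (hypothetical names
again) of a frame writer that yields to queued DATA frames between
continuation fragments, rather than writing the whole block back-to-back:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    class InterleavingWriter
    {
        private final Deque<byte[]> dataFrames = new ArrayDeque<byte[]>();

        // Instead of writing HEADERS + CONTINUATION* contiguously, let a
        // queued DATA frame from an established stream through between
        // each fragment, so high-priority data is not starved.
        void writeHeaderBlock(List<byte[]> headerFrames)
        {
            for (byte[] headerFrame : headerFrames)
            {
                write(headerFrame);
                byte[] data = dataFrames.poll();
                if (data != null)
                    write(data);
            }
        }

        private void write(byte[] frame) { /* ... socket write ... */ }
    }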

Finally, I understand that it may be conceptually simple to consider a
HEADERS + CONTINUATION* sequence as a single unit, but the reality is
that sending/receiving such a sequence cannot be even approximately
considered an atomic state transition, so the state machine I drafted in
#484 is certainly closer to reality than the current state machine in the
document (and as we approach our actual implementation of it, I'm
sure I will discover the incomplete parts :)
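
For instance, between the first HEADERS frame and the frame carrying
END_HEADERS, a stream is in a state the current diagram simply does not
have.  A much-simplified sketch of what #484 adds (hypothetical enum, not
our Jetty code):

    class StreamStates
    {
        enum State { IDLE, HEADERS_RECEIVING, OPEN /* , half-closed, closed ... */ }

        State onHeaders(State state, boolean endHeaders)
        {
            switch (state)
            {
                case IDLE:
                    // Not atomic: without END_HEADERS we are neither idle
                    // nor open, and must remember that this stream still
                    // owes us CONTINUATION frames.
                    return endHeaders ? State.OPEN : State.HEADERS_RECEIVING;
                case HEADERS_RECEIVING:
                    // Only CONTINUATION is legal here.
                    throw new IllegalStateException("PROTOCOL_ERROR");
                default:
                    throw new IllegalStateException("unexpected HEADERS in " + state);
            }
        }

        State onContinuation(State state, boolean endHeaders)
        {
            if (state != State.HEADERS_RECEIVING)
                throw new IllegalStateException("PROTOCOL_ERROR");
            return endHeaders ? State.OPEN : State.HEADERS_RECEIVING;
        }
    }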


cheers


On 3 June 2014 01:02, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 22 May 2014 06:13, Michael Sweet <msweet@apple.com> wrote:
> > https://github.com/http2/http2-spec/issues/481
>
> Quoting:
> > Currently the HTTP2 draft requires that HEADER frames be contiguous.
> Since a header block can be arbitrarily large, this presents both an
> obvious DoS vector and a practical issue with streaming performance and
> preservation of the priority scheme that HTTP/2 provides.
> >
> > A simple solution is to allow intervening DATA frames on other,
> established streams. That will allow high-priority data through without
> major interruptions.
>
> As a DoS vector, the only parties being denied service are the parties
> engaging in the HTTP/2 connection, which is not an issue.  Those
> parties have far better means of denying each other service than this.
>
> The impact on multiplexing is largely congruent with the above.  The
> best response here is "don't do that" with respect to large header
> blocks.  In the worst cases (those in the tail of Richard's stats),
> this is ~64k, which will have those parties adversely affected.
>
> The only place this is an actual problem is where messages are merged
> from multiple clients (or origin servers) onto a single connection by
> an intermediary.  In those cases, the intermediary is basically
> responsible.
>
> An intermediary can apply policy to prevent this from happening.  It
> seems like many already do this, by setting a limit (8k being fairly
> common).  Other options include putting bad actors into TIME_WAIT,
> putting all bad actors into a separate low bitrate connection.
>
> The main reason I'm against this change is that it prevents the sort
> of clean abstraction that we currently have, which states that HEADERS
> + *CONTINUATION or PUSH_PROMISE + *CONTINUATION can be treated as a
> single contiguous unit.  An implementation that fails to apply that
> abstraction ends up with the state machine Greg shared with us in #484
> (which is incomplete, BTW).
>
>


-- 
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
