Re: Interleaving #481 (was Re: Limiting header block size)

On 3 June 2014 01:56, Greg Wilkins <gregw@intalio.com> wrote:
> The DoS vector is also against the server.  Allowing arbitrary large headers
> is an unconstrained memory commitment that each server has to make when
> accepting a new stream.

Well, if you stipulate that servers have to accept the request, then
they have to accept the request and all that it entails.

As I've pointed out before, a limit won't prevent an effectively
unbounded commitment.  HPACK can be used to generate a much larger
state commitment than its wire format requires.
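To make that concrete, here's a rough Python sketch of the
amplification (the sizes, the header name and the reference count are
all invented for illustration; this is not a real HPACK codec):

# Once a large entry sits in the HPACK dynamic table, each indexed
# reference to it costs roughly one wire byte, but the decoder must
# materialise the full entry every time.
ENTRY_OVERHEAD = 32                    # per-entry overhead HPACK defines
name, value = b"x-big", b"v" * 4000    # hypothetical 4K header value

insert_cost = 2 + len(name) + len(value)   # rough literal encoding size
references = 10_000
wire_bytes = insert_cost + references * 1  # ~1 byte per indexed reference

# Decoder cost: every reference expands to the full name/value pair.
decoded_bytes = references * (len(name) + len(value) + ENTRY_OVERHEAD)

print(f"wire: {wire_bytes} bytes -> decoded: {decoded_bytes} bytes "
      f"({decoded_bytes / wire_bytes:.0f}x amplification)")

Note that a table size limit bounds the table, not the decoded output:
the same entry can be referenced any number of times.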

Servers do have the option of rejecting the stream and avoiding the
memory commitment.  Market forces notwithstanding.

> I truly do think this
> is a network neutrality issue

In that case, I recommend that you direct your comment to
http://www.fcc.gov/comments

> I also do not think you can so easily dismiss the intermediary issue as
> simply a problem for intermediaries to solve.  It may be very difficult to
> detect a bad actor before a DoS occurs.  Consider a load balancer in
> front of an auction site, which will typically receive a burst of bids just
> before the closing time.  A bad actor can be a perfectly reasonable client
> until a few seconds before the auction closes, at which time they suddenly
> send several incomplete header blocks.

You had me until this point.  An intermediary shouldn't commit
incomplete header blocks to a multiplexed connection.  In most cases,
they can't.  This is part of what I'm talking about when I say that
the intermediary has a responsibility to multiplex properly.
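Here's a sketch of the rule I mean, in Python with hypothetical frame
handling (the callback name and the forwarding hook are my inventions):

from collections import defaultdict

pending = defaultdict(list)  # stream id -> buffered header fragments

def on_header_frame(stream_id, fragment, end_headers, forward):
    """Buffer HEADERS/CONTINUATION fragments; forward complete blocks."""
    pending[stream_id].append(fragment)
    if end_headers:
        # Only a complete header block ever touches the shared upstream
        # connection, so a client that stalls mid-block ties up its own
        # connection and a local buffer, not the multiplexed link.
        forward(stream_id, b"".join(pending.pop(stream_id)))

The buffer can be capped and the stream reset on timeout; that's the
intermediary's local problem, not the upstream server's.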

> One solution would be for load balancers to always reject streams with
> continuation frames.  This would work perfectly well for 99.9% of traffic.
> So one may ask: why are continuation frames in the spec?

This would be a bad idea.  The existence of continuations is largely
orthogonal to header size.  You might as well reject requests based on
whether the current date, converted to RGB, is a colour you don't
like.  Such behaviour leads to the sorts of contortions clients are
forced into to get HTTP/1.1 requests working today.
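The orthogonality is just arithmetic.  A sketch, assuming the 16K
frame-size cap from the current draft (the function is mine, not the
spec's):

import math

MAX_FRAME_SIZE = 16_384  # assumed per-frame payload cap

def frames_for(header_block_len):
    """HEADERS plus however many CONTINUATION frames the block needs."""
    return max(1, math.ceil(header_block_len / MAX_FRAME_SIZE))

# 16K needs no CONTINUATION; 16K plus one byte needs one.  Rejecting
# CONTINUATION admits the first and refuses the second, though they
# differ by a single byte; it caps nothing useful.
for size in (16_384, 16_385, 100_000):
    print(size, "->", frames_for(size), "frame(s)")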

> I believe that we should be able to apply a known limit to header sizes so
> that applications can be written in such a way that they know they will pass
> intermediaries and be acceptable to servers.
> An ecosystem where we say that unlimited headers are allowed, but then
> arbitrary undocumented and undiscovereable limits are applied, is a
> difficult space to effectively use meta data in.

I'm inclined to agree with the general principle, but I haven't found
a way to put it into practice without breaking things.  We've
already established (several times over now) that a hard limit is
unacceptable.  It's also been established that a hop-by-hop
negotiation for size doesn't work for something that is essentially
end-to-end.  Thus, we arrive at where we are today.

> But failing that, interleaving data frames from existing streams would at
> least somewhat address the attack I described above.

I think that interleaving would enable that attack more than it would
mitigate it.
