- From: Roberto Peon <grmocg@gmail.com>
- Date: Thu, 10 Jul 2014 18:34:18 -0700
- To: Greg Wilkins <gregw@intalio.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAP+FsNff747cGPqmU72S9mdRkL7RT2qf+6XN18tTNdXhMxQSBA@mail.gmail.com>
Greg-- thanks for starting this thread :)

A bad actor in the world where one must finish the header must either send a very large header, or stop sending header data. A bad actor in the world where one may start, but not finish, many headers can keep opening up streams and consuming memory. This is multiplicatively worse.

A smart bad actor does the above, *and* round-robins amongst the streams, keeping N-1 unable to make progress while making progress on one. This allows it to use 100 or 1000 times the memory of a non-malicious actor. Prevention requires the server to introspect quite a lot in order to detect the bad behavior and kill it.

That is why I say the attack surface with such a scheme is much larger.

-=R

On Thu, Jul 10, 2014 at 6:20 PM, Greg Wilkins <gregw@intalio.com> wrote:

> On 11 July 2014 11:08, Roberto Peon <grmocg@gmail.com> wrote:
>
>> On Thu, Jul 10, 2014 at 5:59 PM, Greg Wilkins <gregw@intalio.com> wrote:
>>
>>> I think we should not tie ourselves too much in knots over what bad
>>> actors can do... so long as a bad actor can only screw with their own
>>> stream and can't take more resources than an impl is prepared to
>>> commit.
>>
>> When acting as a server (or a proxy) and listening to clients on the
>> untrusted internet, one will experience malicious clients.
>> The issue with these clients is that the memory (and CPU, etc.) they
>> consume could be better spent on other, hopefully non-malicious clients
>> to improve their experience.
>> A trivial example of this is the server's willingness to keep the
>> connection open, and thus be able to receive and react to queries
>> without having the client pay the connection startup and TLS
>> termination latency again.
>> A server's willingness to do this relates directly to the number of
>> clients it can handle at that moment (for a well constructed
>> server/proxy).
>>
>> In other words, if malicious clients can consume more memory (amongst
>> other resources, but this is the easiest attack vector), they can cause
>> a degradation in the quality of service the server can provide to other
>> clients.
>> This is the reason the design intent was to receive one set of headers
>> at a time, though in a streaming fashion -- it limits the attack surface
>> while not affecting anything in the common case where headers are small.
>
> I think we are in strenuous agreement here. Yes, we will definitely see
> bad actors on the web. We must not let bad actors affect other streams.
> We must not let bad actors go beyond the resources we are prepared to
> commit to any actor.
>
> Now, if we can detect a bad actor before it hits the resource limits we
> have set for good or bad actors, then that is a bonus.
>
> So given that, I'm not sure how streamable/fragmentable headers help with
> bad actor detection. A bad actor can still leave the :method field to
> the absolute last and refuse to send the last header frame, or even the
> last byte of the last header frame. This can be near impossible to
> distinguish from a good actor on a bad network - timeouts will have to
> apply.
>
> I can't really see how adding flow control to the mix changes any of
> this very much.
>
> But anyway - not really relevant to the point you are trying to make in
> the other thread.
> cheers
>
> --
> Greg Wilkins <gregw@intalio.com>
> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
> http://www.webtide.com advice and support for jetty and cometd.
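As a rough illustration of the introspection cost described above: if header blocks from many streams could be interleaved and left unfinished, a server would need per-connection accounting along roughly these lines to notice a client pinning memory across N-1 stalled streams. This is only a minimal sketch in Go; the names, limits, and structure are illustrative assumptions, not taken from any real HTTP/2 implementation.

```go
// Package sketch illustrates (hypothetically) the per-connection accounting a
// server would need if header blocks from many streams could be interleaved
// and left unfinished. Names and limits are illustrative only.
package sketch

import "errors"

const (
	maxHeaderBytesPerConn = 64 << 10 // total buffered header bytes per connection
	maxOpenHeaderBlocks   = 8        // streams allowed an unfinished header block
)

var errHeaderFloodSuspected = errors.New("too many unfinished header blocks")

// connHeaderState tracks partially received header blocks on one connection.
type connHeaderState struct {
	buffered map[uint32]int // streamID -> header bytes buffered so far
	total    int            // sum of all buffered header bytes
}

func newConnHeaderState() *connHeaderState {
	return &connHeaderState{buffered: make(map[uint32]int)}
}

// onHeaderFragment is called for each header-block fragment. It returns an
// error when the connection should be killed because the peer is pinning
// memory across many streams without ever finishing a header block.
func (c *connHeaderState) onHeaderFragment(streamID uint32, frag []byte, endHeaders bool) error {
	if _, open := c.buffered[streamID]; !open {
		if len(c.buffered) >= maxOpenHeaderBlocks {
			return errHeaderFloodSuspected
		}
		c.buffered[streamID] = 0
	}
	c.buffered[streamID] += len(frag)
	c.total += len(frag)
	if c.total > maxHeaderBytesPerConn {
		return errHeaderFloodSuspected
	}
	if endHeaders {
		// Block complete: hand it off for decoding and release the buffer.
		c.total -= c.buffered[streamID]
		delete(c.buffered, streamID)
	}
	return nil
}
```

With one in-progress header block per connection (the current design intent), the map collapses to a single entry and the "many unfinished blocks" dimension of the attack disappears, which is the point being made above.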
Received on Friday, 11 July 2014 01:34:46 UTC