- From: Willy Tarreau <w@1wt.eu>
- Date: Thu, 21 Jan 2021 09:36:29 +0100
- To: Cory Benfield <cory@lukasa.co.uk>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
On Thu, Jan 21, 2021 at 07:49:52AM +0000, Cory Benfield wrote:
> Fortunately, as noted in the original GitHub thread, I don't think any
> browsers are subject to a complete response-smuggling attack. This is
> mostly because they appear to enforce different rules around the
> termination of the header compared to termination of header fields. I
> updated my test to attempt a response smuggling attack and no browser
> appeared to fall for the smuggled response. CRCR does not appear to
> cause any browser I tested to conclude the header is complete. This is
> good news!
>
> However, the parse is still bad. My test example sends:
>
> HTTP/1.1 302 Found<CRLF>
> Location: /redirect<CRLF>
> Content-Length: 0<CRLF>
> x-custom-header: ok<CRLF>
> x-custom-header-2: y<CRCR>HTTP/1.1 404 Not Found<CR>Content-Length:
> 0<CRCR><CRLFCRLF>
>
> Chrome does not get tricked by the 404: it does correctly spot that
> the CRLFCRLF is the terminator for the header. However, it mis-parses
> the header fields. In the web inspector Chrome clearly notes _two_
> "Content-Length: 0" header field lines, and if I add additional header
> field lines to the second block those will be parsed as though they
> were syntactically correct header field lines.

Hmmm, that's still ugly. It means, for example, that you could smuggle a
Location header inside a dummy header field, or a Cache-Control header to
enforce long-term storage, or even set cookies that are not inspected by
whatever privacy-inspection tool is installed on the machine.

And by the way, I just tested the injection of Transfer-Encoding, and it
does indeed affect Chromium. Here's what I did:

  $ printf "HTTP/1.1 200 OK\r\nContent-type: text/plain\r\nContent-Length: 6\r\nx-fun: yes\rTransfer-encoding: chunked\r\n\r\n22\r\nHello\nThis should not be there :-(\r\n0\r\n\r\n" | nc -lp7777

On Firefox it correctly says:

  22
  He

On Chromium it says:

  Hello
  This should not be there :-(

(The chunk size "22" is hexadecimal, i.e. 34 bytes, which is exactly the
length of "Hello\nThis should not be there :-(". Chromium honours the
injected Transfer-Encoding and decodes the chunked body, while Firefox
sticks to Content-Length: 6 and shows only the first six bytes,
"22\r\nHe".)

Thus it IS definitely possible to bypass upfront filtering to deliver
some uninspected content to the browser this way.

> For my part, I don't think there's a huge security risk here.

Making a device consume more data than expected is always a problem. The
severity depends on what can be done through this, but above I'm
definitely seeing something that opens large doors to those who want to
play with it.

> What I
> do think is that we need to be cautious when our specs differ widely
> from actual deployment.

It has always been known that browsers have to resort to a lot of ugly
tricks because of the various bogus applications that remain in the
field and break as soon as browsers try to be more compliant. But that
must not justify making all implementations vulnerable, especially
intermediaries, which could become amplifiers of such issues by
normalizing what they received on one side.

However, it would be nice if such horrible tricks, where still required
(which I partly doubt, given that different browsers already act
differently), were clearly documented, with strict limitations on their
side effects (e.g. never execute a script downloaded this way, never
follow a redirect, never cache, or whatever).

I can understand the need to support horribly bogus legacy applications,
and the rise of IoT really does not help, but these must not become the
standard. If browsers could at least emit a warning on first access to
such pages, like they do with bad certs, that would be a great way to
discourage new developers from continuing to deploy such bugs.

Willy
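P.S. For anyone who wants to poke at Cory's original CRCR case the same
way, here is a sketch that replays the exact bytes from his example
through the same nc setup as above. This is just a transcription of his
<CR>/<CRLF> markers into printf escapes, and the port is an arbitrary
choice, so adjust as needed:

  $ printf "HTTP/1.1 302 Found\r\nLocation: /redirect\r\nContent-Length: 0\r\nx-custom-header: ok\r\nx-custom-header-2: y\r\rHTTP/1.1 404 Not Found\rContent-Length: 0\r\r\r\n\r\n" | nc -lp7777

Then point a browser at http://localhost:7777/ and check in its
inspector whether the second "Content-Length: 0" block shows up as
header field lines, as Cory describes.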
Received on Thursday, 21 January 2021 08:36:49 UTC