
Re: h2 header field names

From: Martin Thomson <martin.thomson@gmail.com>
Date: Wed, 3 Sep 2014 12:36:12 -0700
Message-ID: <CABkgnnWPsmJqRtd0w2TxULZrfue9PkeFjJAXZQk_qfSfQbb-gg@mail.gmail.com>
To: "Roy T. Fielding" <fielding@gbiv.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>

I think that this might be more substantial, so I'm going to confirm
my choices here before proceeding.

On 31 August 2014 19:37, Roy T. Fielding <fielding@gbiv.com> wrote:
> HTTP/2 allows header field names that are not valid header fields in
> the Internet Message Syntax used by HTTP/1.1 and cannot be registered
> as such with IANA.  An intermediary that is attempting to translate an
> HTTP/2 request or response containing such an invalid field name into
> an HTTP/1.1 message ought to ...

I'm currently thinking that the correct choice is to treat the request
or response as malformed
(http://http2.github.io/http2-spec/#malformed), which would force a
request or response to be treated as an error and dropped or ignored.
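To make the "malformed" choice concrete, here is a rough sketch (mine, not spec text) of the check an endpoint or intermediary would run; the function name is hypothetical, and it only covers regular field names (pseudo-header fields like ":path" would be handled separately):

```python
import re

# RFC 7230 "token" characters, restricted to lowercase letters as
# HTTP/2 requires for field names.  Anything outside this set makes
# the whole request or response malformed.
_FIELD_NAME = re.compile(r"^[!#$%&'*+\-.^_`|~0-9a-z]+$")

def h2_field_name_is_valid(name: str) -> bool:
    """Return False for names that must render the message malformed."""
    return bool(_FIELD_NAME.match(name))
```

The point is that there is no translation step at all for a bad name: the message is simply rejected.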

> HTTP/2 allows header field values that are not valid header field values
> in the Internet Message Syntax used by HTTP/1.1.  An intermediary that
> is attempting to translate an HTTP/2 request or response containing such
> an invalid field name into an HTTP/1.1 message MUST perform the following
> encoding of the octets that are not allowed in field-content: {pick one}.

I think that we can be stricter here than that.  The encapsulation
attack we're talking about can be more effectively avoided if we treat
the request or response as malformed (as above) if it contains octets
that we don't want.

The problem is that we're basically committed to accepting pretty much
any old crap in values: not just VCHAR, SP and TAB, but also obs-text
and even some of the ASCII control characters (BEL, anyone?).
HTTP/1.1 doesn't expressly permit these, but we have evidence to
suggest that they are used, even widely.

This is why I think that the requirement to remove CR/LF from values
is still valuable.  HTTP/1.1 doesn't say anything about these because
they either form part of the parsing logic or obs-fold.  Roy's text
would force them to be encoded, which we could do by relying on the
translation to a single SP character mandated by RFC 7230.  That is
almost good enough, but we've generally avoided hiding errors that
way.  I'd rather be consistent with a zero tolerance error handling
philosophy and say that the request is malformed.
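The zero-tolerance position reduces to a trivially small check; a sketch (function name mine) of what an implementation would do on the value side:

```python
def h2_field_value_is_acceptable(value: bytes) -> bool:
    # Zero-tolerance sketch: any CR (0x0d), LF (0x0a) or NUL (0x00)
    # in a field value makes the whole request or response malformed,
    # rather than being encoded or mapped to SP on translation.
    return not any(b in (0x0D, 0x0A, 0x00) for b in value)
```

Note this deliberately does not try to repair the value; repairing is exactly the error-hiding the text above argues against.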

Here's what I have:

   The HTTP/2 header field encoding allows the expression of names that
   are not valid field names in the Internet Message Syntax used by
   HTTP/1.1.  Requests or responses containing invalid header field
   names MUST be treated as malformed (Section 8.1.2.6).  An
   intermediary therefore cannot translate an HTTP/2 request or response
   containing an invalid field name into an HTTP/1.1 message.

   Similarly, HTTP/2 allows header field values that are not valid.
   While most of the values that can be encoded will not alter header
   field parsing, carriage return (CR, ASCII 0xd), line feed (LF, ASCII
   0xa), and the null character (NUL, ASCII 0x0) MUST NOT be translated
   verbatim by an intermediary.  An attacker might exploit direct
   translation of these octets to cause an intermediary to create
   HTTP/1.1 messages with illegal header fields, extra header fields, or
   wholly falsified messages.  Any request or response that contains a
   CR, LF or NUL character in a header field value MUST be treated as
   malformed (Section 8.1.2.6).

   Characters in header field values that are not valid according to the
   "field-content" rule (see [RFC7230], Section 3.2) SHOULD be percent-
   encoded before being translated into HTTP/1.1 header field values.

I hate using SHOULD, but it seems like if I made that a MUST, it would
be immediately ignored.
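For what it's worth, the SHOULD in the last paragraph amounts to something like this sketch (my function name; the allowed set is field-content per RFC 7230: VCHAR 0x21-0x7e, obs-text 0x80-0xff, plus SP and HTAB):

```python
def escape_for_http1(value: bytes) -> bytes:
    # Percent-encode octets that fall outside field-content before
    # writing the value into an HTTP/1.1 header field.  CR/LF/NUL
    # never reach this point; they make the message malformed.
    out = bytearray()
    for b in value:
        if b in (0x20, 0x09) or 0x21 <= b <= 0x7E or b >= 0x80:
            out.append(b)
        else:
            out.extend(b"%%%02X" % b)
    return bytes(out)
```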
Received on Wednesday, 3 September 2014 19:36:40 UTC