Re: HTTP/2: allow binary data in header field values

Reviving this thread now that we have HTTP Core, with semantics
separated from HTTP/1.1 and HTTP/2 messaging.

As of right now, header field values at the semantics layer are limited
to the visible characters (plus obs-text):

    field-value = *( field-content / obs-fold )
    field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]
    field-vchar = VCHAR / obs-text


    VCHAR = %x21-7E
    obs-text = %x80-FF
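For illustration, a minimal check of a field value against this grammar (ignoring obs-fold, which is deprecated anyway) might look like the following sketch; the function name is mine, not from any spec:

```python
def is_valid_field_value(value: bytes) -> bool:
    """Check a field value against field-content (obs-fold ignored):
    every byte must be VCHAR (0x21-0x7E), obs-text (0x80-0xFF),
    or SP/HTAB, and SP/HTAB may not lead or trail the value."""
    if value != value.strip(b" \t"):
        return False  # leading/trailing SP/HTAB is not field-content
    return all(
        0x21 <= b <= 0x7E or b >= 0x80 or b in (0x20, 0x09)
        for b in value
    )
```

Note that CR, LF, and NUL all fall in the rejected control range, which is exactly the restriction under discussion.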

I'm fine with the existing restrictions being enforced at the HTTP/1.1
messaging layer, where binary values could be converted according to
the "Byte Sequence" rules from Structured Headers. However, both HTTP/2
and HTTP/3 are perfectly capable of transmitting all octets, so the
semantics layer shouldn't be limited by the fact that HTTP/1.x is a
text-based protocol.
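For reference, the "Byte Sequence" conversion mentioned above is just colon-delimited base64 (per the Structured Fields serialization); a minimal sketch, with function names of my own choosing:

```python
import base64

def sf_binary_encode(data: bytes) -> str:
    # Structured Fields Byte Sequence: base64, wrapped in colons.
    return ":" + base64.b64encode(data).decode("ascii") + ":"

def sf_binary_decode(item: str) -> bytes:
    # Reject anything not shaped like ":<base64>:".
    if len(item) < 2 or item[0] != ":" or item[-1] != ":":
        raise ValueError("not a Structured Fields Byte Sequence")
    return base64.b64decode(item[1:-1], validate=True)
```

This is the encode/decode cost that an HTTP/1.1 hop would pay on every binary value, which is the overhead the proposal below is trying to avoid on HTTP/2 and HTTP/3 hops.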

If this is published as-is, it's going to prevent use of binary values
in header fields "for the rest of our careers" (according to the WG
chairs), so I guess it's a "now or never" kind of thing.


Best regards,
Piotr Sikora

On Tue, Aug 29, 2017 at 10:52 AM Mike Bishop
<> wrote:
> As with other typed header fields (and let's be clear, a binary blob is just another type), this isn't about changing HTTP/2, it's about changing HTTP.  Currently, header fields in HTTP are, by definition, sequences of octets with a scoped range of valid values.  If you change the allowed values, that's a change at the semantic layer, not to any given transport mapping.  This is the HTTP WG; we can do that, but let's be clear what we're talking about.  But we'd need to have reasonable ways of ensuring that the values are sanitized before they're passed to "legacy" HTTP consumers.
> As you note, HTTP/2 and HPACK are already perfectly capable of transporting these octets.  You can even Huffman-encode a binary blob if you want -- all possible values are listed in the table, though non-ASCII octets are severely disadvantaged.  That's precisely what the Security Considerations says -- HTTP/2 (i.e. the TCP mapping) is capable of transporting header values that aren't valid HTTP, and it's the HTTP layer's responsibility to validate that.  Obviously, if you rev HTTP to make those valid values, those checks would be modified.  The HTTP/QUIC mapping is no different -- it's capable of transporting these values already, but the HTTP layer knows they're not valid.
> On the whole, I can see niche situations where this might be useful, but I think it will be difficult to deploy generally.  Our stacks essentially act as HTTP/1.1-to-2 intermediaries within client and server; we don't assume that the apps above our layer are HTTP/2-aware, though obviously we expose ways to take advantage of extra features.  Unless we wanted to add additional header set/get APIs that supported typing, I suspect we would initially opt not to advertise this extension rather than base64-encode headers upon arrival.  That's just extra work for no apparent benefit.
> And if we're going to go this route and modify HTTP itself, let's have a reasonable set of types instead of just adding one at a time.
> -----Original Message-----
> From: Piotr Sikora []
> Sent: Monday, August 28, 2017 6:35 PM
> To: HTTP Working Group <>
> Cc: Craig Tiller <>
> Subject: HTTP/2: allow binary data in header field values
> Hi,
> as discussed with some of you in Prague, I'd like to remove the restriction on CR, LF & NUL characters and allow binary data in header field values in HTTP/2.
> Both HTTP/2 and HPACK can pass binary data in header field values without any issues, but RFC7540 put an artificial restriction on those characters in order to protect clients and intermediaries converting requests/responses between HTTP/2 and HTTP/1.1.
> Unfortunately, this restriction forces endpoints to use base64 encoding when passing binary data in header field values, which can easily become the CPU bottleneck.
> This is especially true in multi-tier proxy deployments, like CDNs, which are connected over high-speed networks and often pass metadata via HTTP headers.
> The proposal I have in mind is based on what gRPC is already doing [1], i.e.:
> 1. Each peer announces that it accepts binary data via HTTP/2 SETTINGS option,
> 2. Binary header field values are prefixed with NUL byte (0x00), so that binary value 0xFF is encoded as a header field value 0x00 0xFF.
> This allows binary-aware peers to differentiate between binary headers and VCHAR headers. In theory, this should also protect peers unaware of this extension from ever accepting such headers, since RFC7540 requires that requests/responses with headers containing a NUL byte (0x00) MUST be treated as malformed and rejected, but I'm not sure if that's really enforced.
> 3. Binary-aware peers MUST base64 encode binary header field values when forwarding them to peers unaware of this extension and/or when converting to HTTP/1.1.
> 4. Binary header field values cannot be concatenated, because there is no delimiter that we can use.
> NOTE: This proposal implies that endpoints SHOULD NOT use binary header field values before receiving HTTP/2 SETTINGS from the peer.
> However, since, at least in theory, all RFC7540-compliant peers unaware of this extension MUST reject requests with headers containing a NUL byte (0x00) with a stream error, endpoints could opportunistically use binary header field values on the first flight. If the peer isn't aware of this extension, it will reject the request, which can then be retried with base64-encoded header field values.
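[Editor's note: the NUL-prefix framing in steps 2 and 3 of the quoted proposal can be sketched as follows; function names and the peer-capability flag are illustrative, not from any draft:]

```python
import base64

def encode_binary_value(value: bytes, peer_supports_binary: bool) -> bytes:
    """Frame a binary header field value per the quoted proposal.
    Binary-aware peers get a NUL-prefixed raw value (step 2); peers
    unaware of the extension get plain base64 instead (step 3)."""
    if peer_supports_binary:
        return b"\x00" + value
    return base64.b64encode(value)

def decode_field_value(wire: bytes) -> tuple[bool, bytes]:
    """Return (is_binary, value). A leading NUL marks a binary value;
    anything else is an ordinary VCHAR value, passed through untouched."""
    if wire.startswith(b"\x00"):
        return True, wire[1:]
    return False, wire
```

The leading NUL is what lets unextended RFC7540 peers reject such values as malformed rather than silently misinterpret them.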
> I'd like to hear if anyone strongly disagrees with this proposal and/or the binary data in header field values in general. Otherwise, I'm going to write a draft and hopefully we can standardize this before HTTP/2-over-QUIC, so that binary header field values can be supported there natively and not via extension.
> [1]
> Best regards,
> Piotr Sikora

Received on Wednesday, 7 November 2018 04:59:57 UTC