
Re: h2 header field names

From: Julian Reschke <julian.reschke@gmx.de>
Date: Thu, 04 Sep 2014 17:57:06 +0200
Message-ID: <54088BD2.1030100@gmx.de>
To: Jason Greene <jason.greene@redhat.com>
CC: Amos Jeffries <squid3@treenet.co.nz>, ietf-http-wg@w3.org
On 2014-09-04 17:43, Jason Greene wrote:
>
> On Sep 4, 2014, at 10:03 AM, Julian Reschke <julian.reschke@gmx.de> wrote:
>
>> On 2014-09-04 16:40, Amos Jeffries wrote:
>>> ...
>>> ...
>>>> Regardless, what are you trying to accomplish with binary header
>>>> values?
>>>>
>>>
>>> Good question. In a nutshell, the gain is simplicity for
>>> implementations no longer having to include base-64 encoder/decoders
>>> or spend CPU cycles doing the coding.
>>> ...
>>
>> I think the question is: why do you need binary data in header field values in the first place?
>
> Security tokens, cryptographic signatures, complex encoded types (e.g. ASN.1), specialty compressed values, precise serialization of IEEE754 floating point values, middleware intermediary tracking/routing information, unix timestamps etc

And all of these can be converted in some sensible way to ASCII, right?
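(To illustrate the point with a hedged sketch: the value types listed above all have straightforward ASCII serializations. The values below are invented placeholders, not anything from the thread.)

```python
import base64
import struct

# Stand-in for a cryptographic signature: base64 (RFC 4648) yields ASCII.
signature = bytes(range(32))
ascii_sig = base64.b64encode(signature).decode("ascii")

# A unix timestamp: decimal digits are already ASCII.
timestamp = 1409846226
ascii_ts = str(timestamp)

# An IEEE754 double, serialized precisely: pack big-endian, then base64.
f = struct.pack(">d", 3.141592653589793)
ascii_f = base64.b64encode(f).decode("ascii")

# All three round-trip losslessly through character sequences.
assert base64.b64decode(ascii_sig) == signature
assert int(ascii_ts) == timestamp
assert struct.unpack(">d", base64.b64decode(ascii_f))[0] == 3.141592653589793
```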

The main reason why I'm concerned about arbitrary binary data is that 
all of the HTTP APIs I'm aware of essentially work on character 
sequences, not octet sequences (in header fields).

The lack of a standard encoding in HTTP/1.1 is already a problem; but 
having a mix of both character and octet based fields with no generic 
and reliable way to distinguish these concerns me a lot.

VCHAR works reliably and gives you 94 characters, so you might be able 
to squeeze a few more bits into each character than with Base64. But is 
it worth the trouble?
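(For a rough sense of the numbers: Base64 carries 6 bits per character, while a hypothetical base-94 coding over the VCHAR range would carry log2(94), roughly 6.55, bits per character. The sketch below just does that arithmetic; it ignores framing and implementation overhead.)

```python
import math

def encoded_lengths(n_bytes: int) -> tuple[int, int]:
    """Character counts for encoding n_bytes of binary data."""
    # Base64: 4 output characters per 3 input bytes, padded.
    b64 = 4 * math.ceil(n_bytes / 3)
    # Hypothetical base-94 over VCHAR: log2(94) bits per character.
    b94 = math.ceil(n_bytes * 8 / math.log2(94))
    return b64, b94

for n in (16, 32, 64):
    print(n, encoded_lengths(n))
# A 32-byte value: 44 chars in Base64 vs. 40 in base-94 -- a saving
# of only a few characters per field.
```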

Best regards, Julian
Received on Thursday, 4 September 2014 15:57:43 UTC