- From: Poul-Henning Kamp <phk@phk.freebsd.dk>
- Date: Thu, 04 Sep 2014 12:02:15 +0000
- To: Mark Nottingham <mnot@mnot.net>
- cc: Martin Thomson <martin.thomson@gmail.com>, Roy Fielding <fielding@gbiv.com>, HTTP Working Group <ietf-http-wg@w3.org>
--------
In message <4CFA7625-24C4-49CB-BCED-32598C181ACC@mnot.net>, Mark Nottingham writes:

>Because chunk boundaries have no semantics in HTTP/2, whilst headers do.

The argument for not tightening up the header definition is that "weird stuff" has semantics in practical HTTP/1.1, so we can't do away with it in case we need to tunnel it.

Chunk boundaries also have semantics in practical HTTP/1.1, and we just did away with them, even for tunneled HTTP/1.1. What is the difference?

There is a valid architectural point in deciding ASCII vs. UTF-8, and we can and should debate that.

But I haven't seen *anybody* say that they need to be able to put NUL, STX or ANSI escape sequences in HTTP headers, so I don't understand why we can't outlaw them in HTTP/2.0, even if we don't settle the ASCII/UTF-8 question yet.

IMO nothing *in* the headers should contain 0x00-0x1f or 0x7f.

What makes that decision impossible?

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
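[Editor's note: the proposed rule ("nothing in the headers should contain 0x00-0x1f or 0x7f") can be sketched as a simple per-octet check. This is an illustrative sketch, not anything from the thread; the function name is ours, and note that a real deployment would have to decide separately whether to keep allowing horizontal tab (0x09), which HTTP/1.1 field values permit.]

```python
def valid_header_octets(value: bytes) -> bool:
    """Reject any octet in 0x00-0x1F or 0x7F (DEL), per the proposed rule.

    Note: this also rejects HTAB (0x09), which HTTP/1.1 allows in
    field values; a deployed check might carve out an exception.
    """
    return all(b > 0x1F and b != 0x7F for b in value)


# A normal field value passes; NUL and ANSI escapes are rejected.
assert valid_header_octets(b"text/html; charset=utf-8")
assert not valid_header_octets(b"evil\x00value")
assert not valid_header_octets(b"ansi\x1b[31mescape")
```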
Received on Thursday, 4 September 2014 12:02:45 UTC