- From: Amos Jeffries <squid3@treenet.co.nz>
- Date: Mon, 28 Oct 2013 17:49:55 +1300
- To: ietf-http-wg@w3.org
On 28/10/2013 3:24 p.m., Joseph Salowey (jsalowey) wrote:

> While the primary motivation for ALPN is the HTTP work, it may not be the only consumer, and we did not see the need to restrict the possible values. If the work associated with HTTP wants to restrict the values that it uses, then that can be done within the context of that work. Other usages would still be free to define values that are expressed in other ways. I think this design allows for flexibility, so we do not have to define an extension for each usage.

This comparison methodology Martin is proposing has nothing HTTP-specific about it. It is simply the most flexible and cross-protocol compatible definition you can use.

Case-insensitivity and UTF-8, which have both been put forward as properties of the ALPN token, are in fact major *limitations* on what can be done. UTF-8 implies all the octet mapping between language variants and alphabets. Case-insensitivity implies mapping between octet case values even in 7-bit ASCII.

Defining it as opaque 8-bit content with octet-by-octet/byte-by-byte comparison is *the* most flexible definition to use by far.

> I still don't see a reason why allowing additional representations beyond what HTTP wants to use is problematic, as long as we can represent what HTTP needs. You will still need to handle unwanted character sets, since implementations may choose not to follow the specification for malicious and benign reasons.

Representations beyond what HTTP wants are not the issue. Preventing other non-HTTP definitions from "accidentally" mapping to the HTTP token *is* a problem. Going past octet-by-octet comparison to anything more complex introduces a potential for mapping errors and adds needless complexity.

Amos

> Cheers, Joe
>
> On Oct 26, 2013, at 3:30 PM, "Martin J. Dürst" <duerst@it.aoyama.ac.jp> wrote:
>
>> On 2013/10/22 6:03, Andrei Popov wrote:
>>
>>> While ALPN was introduced primarily to enable the negotiation of HTTP and SPDY protocol versions within the TLS handshake, the intent is for other (non-Web) applications to be able to negotiate their protocols using the same TLS extension. It is conceivable that different applications will prefer different representations of their protocol IDs.
>>
>> Yes. If some of them, like HTTP, prefer upper-case because it has always been upper-case, and others prefer lower-case, that's not going to be a problem.
>>
>>> Even on this thread, I believe both US-ASCII and UTF-8 protocol IDs have been mentioned.
>>
>> I saw UTF-8 mentioned as a theoretical idea, but I didn't see any actual UTF-8 example. Please tell me in case I missed it.
>>
>> More strongly, as I have said before, I think for *protocol* identifiers, UTF-8 is entirely and completely unnecessary.
>>
>>> From this perspective, having the flexibility at the TLS layer appears beneficial.
>>
>> Flexibility is good; too much flexibility is bad.
>>
>>> Treating application protocol IDs as opaque octet strings also allows efficient protocol ID matching at the TLS layer.
>>
>> There's a huge difference between *allowing* arbitrary octet strings (which is completely unnecessary, and actually problematic if octets in, e.g., the C0 range show up in displays) and *comparing* them octet-by-octet (which is good for efficiency).
>>
>> So please fix them to say that they are limited to printable ASCII and are compared byte-by-byte. That will be flexible enough without being too flexible, and efficient on top of it.
>>
>> Regards, Martin.
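To make the comparison rule under discussion concrete, here is a minimal sketch in C of the octet-by-octet matching Amos describes, with Martin's printable-ASCII restriction as a separate check. The function names and the exact ASCII bounds are illustrative assumptions, not text from any draft or from the thread:

```c
#include <string.h>

/* Minimal sketch (names and bounds are illustrative, not from any draft).
 * ALPN tokens are treated as opaque octet strings: equality requires the
 * same length and an exact octet-by-octet match. No case folding and no
 * UTF-8 normalization is applied, so "http/1.1" and "HTTP/1.1" differ. */
static int alpn_token_equal(const unsigned char *a, size_t a_len,
                            const unsigned char *b, size_t b_len)
{
    return a_len == b_len && memcmp(a, b, a_len) == 0;
}

/* Martin's stricter proposal layered on top: tokens must be non-empty and
 * consist only of printable ASCII. The range 0x21..0x7E (visible
 * characters) is assumed here; whether space (0x20) would count is not
 * settled in the thread. */
static int alpn_token_is_printable_ascii(const unsigned char *p, size_t len)
{
    size_t i;
    if (len == 0)
        return 0;
    for (i = 0; i < len; i++)
        if (p[i] < 0x21 || p[i] > 0x7E)
            return 0;
    return 1;
}
```

With these two checks the TLS layer needs no knowledge of which application protocol a token belongs to, which is the cross-protocol flexibility Amos points to, while Martin's restriction keeps tokens safe to log and display.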
Received on Monday, 28 October 2013 04:50:23 UTC