- From: Mark Nottingham <mnot@mnot.net>
- Date: Fri, 26 May 2023 08:38:09 +1000
- To: Roy Fielding <fielding@gbiv.com>
- Cc: Tommy Pauly <tpauly@apple.com>, HTTP Working Group <ietf-http-wg@w3.org>
Hi Roy,

> On 26 May 2023, at 3:21 am, Roy T. Fielding <fielding@gbiv.com> wrote:
>
> I think (b) is unnecessary given that HTTP is 8-bit clean for UTF-8
> and we are specifically talking about new fields for which there
> are no deployed parsers.

Yes, I know what it says in RFC 9110. Yes, the parsers may be new, but in some contexts they may not have access to the raw bytes of the field value. Many HTTP libraries and abstractions (e.g., CGI) assume an encoding and expose strings; some of those may follow the advice that HTTP has documented for many years and assume ISO-8859-1.

Yes, in many cases you can use UTF-8 on the wire successfully. However, that is a local convention; we can't assume it holds for the entire Internet, because we don't know all of the implementations that have been deployed or how they behave. All we know is (a) how the implementations we've seen behave, and (b) what we've written down before.

In the past we've faced decisions like this and chosen to be conservative. We could certainly break that habit now, but we'd need (at the least) a prominent warning that this type might not interoperate with deployed systems. Personally, I don't think that's worth it, given the relative rarity we expect for this particular type and the relatively low overhead of encoding.

> The PR doesn't clearly express any of these points. It says the
> strings contain Unicode (a character set) but they obviously don't;
> they contain sequences of unvalidated pct-encoded octets.
> This allows arbitrary octets to be encoded for something that
> is supposed to be a display string. [...]
>
> If this is truly for a display string, the feature must be
> specific about the encoding and allowed characters.
> My suggestion would be to limit the string to non-CNTRL
> ASCII and non-control valid UTF-8. We don't want to allow
> anything that would twist the feature to some other ends.
> Assuming we do this with pct-encoding, we should not allow
> arbitrary octets to be encoded. We should disallow encodings
> that are unnecessary (normal printable ASCII aside from % and "),
> control characters, or octets not valid for UTF-8. That can
> be specified by prose and reference to the IETF specs, or
> we could specify the allowed ranges with a regular expression.
> Either one is better than allowing arbitrary octets to be encoded.

I think that's reasonable, and we can discuss refinements after adopting the PR.

> In general, it is safer to send raw UTF-8 over the wire in HTTP
> than it is to send arbitrary pct-encoded octets, simply because
> pct-encoding is going to bypass most security checks long enough
> for the data to reach an application where people do stupid
> things with strings that they assume contain something that is
> safe to display.

That's an odd assertion -- where are those security checks taking place?

> Note that I am not saying that we should consider normalization
> or any other weirdness specific to Unicode. We don't need to.
> We just need to stay within the confines of what has already
> been defined as valid and safe UTF-8. Everything else is being
> actively targeted by pentesters and script kiddies, on every
> public server on the Internet, to the point where we have to
> block it within CDN configurations just to avoid overloading
> the origin servers.

Understood.

Cheers,

--
Mark Nottingham   https://www.mnot.net/
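[Editorial note: as a rough illustration of the constraint discussed above -- pct-encode only what is necessary, and reject unnecessary encodings, control characters, and octets that are not valid UTF-8 -- a Python sketch follows. The function names and exact rule set are hypothetical, not the design the working group adopted.]

```python
def encode_display_string(s: str) -> str:
    """Serialise a Unicode string for the wire: printable ASCII passes
    through literally, except '%' and '"'; everything else (including
    those two characters and all non-ASCII UTF-8 bytes) is pct-encoded."""
    out = []
    for byte in s.encode("utf-8"):
        if 0x20 <= byte <= 0x7E and byte not in (0x25, 0x22):  # printable, not % or "
            out.append(chr(byte))
        else:
            out.append("%%%02x" % byte)
    return '"' + "".join(out) + '"'


def decode_display_string(wire: str) -> str:
    """Parse a wire value, rejecting unnecessary pct-encodings,
    control characters, and byte sequences that are not valid UTF-8."""
    if len(wire) < 2 or wire[0] != '"' or wire[-1] != '"':
        raise ValueError("not a quoted display string")
    body, raw, i = wire[1:-1], bytearray(), 0
    while i < len(body):
        c = body[i]
        if c == "%":
            byte = int(body[i + 1:i + 3], 16)
            if 0x20 <= byte <= 0x7E and byte not in (0x25, 0x22):
                # printable ASCII (other than % and ") must appear literally
                raise ValueError("unnecessary pct-encoding")
            raw.append(byte)
            i += 3
        elif 0x20 <= ord(c) <= 0x7E:
            raw.append(ord(c))
            i += 1
        else:
            raise ValueError("raw control or non-ASCII character on the wire")
    decoded = raw.decode("utf-8")  # UnicodeDecodeError (a ValueError) on invalid UTF-8
    if any(ord(ch) <= 0x1F or ord(ch) == 0x7F for ch in decoded):
        raise ValueError("control character in decoded string")
    return decoded
```

With these rules, `"%41"` is rejected (an unnecessary encoding of `A`), `"%ff"` is rejected (not valid UTF-8), while non-ASCII text round-trips: `fête` serialises as `"f%c3%aate"`.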
Received on Thursday, 25 May 2023 22:38:18 UTC