Re: Consensus call to include Display Strings in draft-ietf-httpbis-sfbis

On May 25, 2023, at 3:38 PM, Mark Nottingham <mnot@mnot.net> wrote:

> Hi Roy,
> 
>> On 26 May 2023, at 3:21 am, Roy T. Fielding <fielding@gbiv.com> wrote:
>> 
>> I think (b) is unnecessary given that HTTP is 8-bit clean for UTF-8
>> and we are specifically talking about new fields for which there
>> are no deployed parsers. Yes, I know what it says in RFC 9110.
> 
> Yes, the parsers may be new, but in some contexts, they may not have access to the raw bytes of the field value. Many HTTP libraries and abstractions (e.g., CGI) assume an encoding and expose strings; some of those may apply the advice that HTTP has documented for many years and assume ISO-8859-1.

That's not a problem in practice, since the data does not change.
It just looks like messy characters on display.
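To illustrate (a hypothetical Python sketch, not taken from any cited implementation): ISO-8859-1 maps every octet 0x00-0xFF to a code point, so UTF-8 octets read as ISO-8859-1 display as mojibake but round-trip without loss:

```python
# Hypothetical illustration: UTF-8 octets read as ISO-8859-1
# display as mojibake, but no data is lost, because ISO-8859-1
# maps every octet 0x00-0xFF to a code point.
octets = "naïve".encode("utf-8")            # b'na\xc3\xafve'
garbled = octets.decode("iso-8859-1")       # displays as 'naÃ¯ve'

assert garbled == "na\u00c3\u00afve"        # messy, but deterministic
assert garbled.encode("iso-8859-1") == octets   # lossless round-trip
```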

What would be a problem is if an implementation transcoded the values
incorrectly during parsing, or used code-point lengths instead
of octet lengths when sizing the memory allocated for copies.
But again, we are not breaking such systems: they are already broken
and insecure, and at worst we are doing folks a service by surfacing
the bad code in a visible way.
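As a hypothetical sketch of that second bug class: the code-point count and the octet count of a UTF-8 value diverge as soon as multibyte characters appear, so a buffer sized by code points is too small.

```python
# Hypothetical illustration: sizing a buffer by code points
# under-allocates for multibyte UTF-8 characters.
value = "Grüße"                      # 5 code points
octets = value.encode("utf-8")       # 'ü' and 'ß' take 2 octets each

assert len(value) == 5               # code-point length
assert len(octets) == 7              # octet length actually needed
```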

The valid systems we might be breaking would be those that parse
for high-bit octets and reject the message as invalid. I do not
know of any such systems, presumably because the legacy of ISO-8859-*
(especially among Cyrillic servers) made high-bit octets common
on the wire. In any case, such systems
don't use display strings.

However, I agree that it is hard for me to argue against my
own long history of being unable to adopt UTF-8 in HTTP.
I just find it annoying to assume that a totally new parser
of a totally new field should somehow be constrained in the
parsing of its values by a mere perception of what might be
the case for legacy parsers that shouldn't even be looking
at new fields.

It would be different if we knew of an example that fails.

> Yes, in many cases you can use UTF-8 on the wire successfully. However, making that assumption is a local convention; we can't assume that it holds for the entire Internet, because we don't know all of the various implementations that have been deployed and how they behave. All we know is a) how the implementations we've seen behave, and b) what we've written down before.

I prefer to think locally and act globally.

> In the past we've made decisions like this and chosen to be conservative. We could certainly break that habit now, but we'd need (at the least) to have a big warning that this type might not be interoperable with deployed systems. Personally, I don't think that's worth it, given the relative rarity that we expect for this particular type, and the relatively low overhead of encoding.

If this were an important use case, I would agree with you.
We are talking about a display string, which seems to be
the perfect opportunity to find out what we can get away
with changing.

>> The PR doesn't clearly express any of these points. It says the
>> strings contain Unicode (a character set) but they obviously don't;
>> they contain sequences of unvalidated pct-encoded octets.
>> This allows arbitrary octets to be encoded for something that
>> is supposed to be a display string.
> [...]
>> If this is truly for a display string, the feature must be
>> specific about the encoding and allowed characters.
>> My suggestion would be to limit the string to non-CNTRL
>> ASCII and non-control valid UTF-8. We don't want to allow
>> anything that would twist the feature to some other ends.
>> 
>> Assuming we do this with pct-encoding, we should not allow
>> arbitrary octets to be encoded. We should disallow encodings
>> that are unnecessary (normal printable ASCII aside from % and "),
>> control characters, or octets not valid for UTF-8. That can
>> be specified by prose and reference to the IETF specs, or
>> we could specify the allowed ranges with a regular expression.
>> Either one is better than allowing arbitrary octets to be encoded.
> 
> I think that's reasonable and we can discuss improvements after adopting the PR.

I think the pct-encoding feature is actively dangerous without
those constraints because it encourages a means to bypass HTTP's
normal safeguards. I don't want to discuss them as improvements.

>> In general, it is safer to send raw UTF-8 over the wire in HTTP
>> than it is to send arbitrary pct-encoded octets, simply because
>> pct-encoding is going to bypass most security checks long enough
>> for the data to reach an application where people do stupid
>> things with strings that they assume contain something that is
>> safe to display.
> 
> That's an odd assertion - where are those security checks taking place?

In places like the Fastly config, right now, though I only do that
for an incoming request-target when I don't need a premium WAF.
For example (extracted from an error snippet):

   # Reject any pct-encoded ASCII octet (%00-%7F) in the path.
   if (var.path ~ {"%[0-7][0-9A-Fa-f]"}) {
     set obj.http.x-error = "Forbidden encoded ASCII in URL path";
     set obj.status = 403;
     set obj.response = "Forbidden";
     return (deliver);
   }

[Note that this is making assumptions about what is allowed
in a URL path that is specific to the origin servers behind
this CDN. It is not a universal config.]

Others use a WAF (or mod_security rules) applied to various
parts of a request message, or just Bayesian analysis of
known failure examples.

What I mean by this odd assertion is that raw UTF-8 sent
through the message parsing algorithm of HTTP will result
in a very obvious message for recipients on the backend,
even if it contains unwanted characters, whereas pct-encoding
makes the message look safe until it passes through the checks
and it reaches a point in later processing where an application
(perhaps unaware of the source of that data) foolishly
decodes the string without expecting it to contain
arbitrary octets that might become command invocations,
request smuggling, or cache poisoning.
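A hypothetical sketch of that failure mode (both the checker and the field value are invented for illustration): a naive perimeter screen for control characters accepts the pct-encoded form, and an unconditional decode downstream reintroduces the dangerous octets.

```python
from urllib.parse import unquote

def naive_front_end_check(field_value: str) -> bool:
    # Hypothetical perimeter check: reject raw control characters.
    return not any(ord(c) < 0x20 or ord(c) == 0x7F for c in field_value)

encoded = "innocent%0D%0ASet-Cookie:%20evil=1"

# The pct-encoded form sails through the perimeter check...
assert naive_front_end_check(encoded)

# ...but an application that decodes without re-validating now
# holds a CRLF, the raw material of header injection/smuggling.
decoded = unquote(encoded)
assert "\r\n" in decoded
```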

Of course, there is nothing preventing such pct-encoding from
being included in any non-literal part of an HTTP message,
which pentesters and script kiddies are constantly
running against our Web properties (and those of our CMS
customers) in the hope of finding some application, somewhere
downstream, that will fail to validate the data it receives.
This feature won't change that.

The problem is that it takes what is normally considered
an evil encoding (if found anywhere other than an expected
URI-reference or x-url-encoded content) and calls it a
"good encoding" for a display string, which means we will
have to worry about breaking a new feature of HTTP instead
of just blocking all bad strings.

Even so, I can live with pct-encodings when they are restricted
to a reasonably safe range of characters for display.

For example,

% pcre2grep -e '^([\x20-\x21\x23-\x24\x26-\x5B\x5D-\x7E]|\x5C[\x22\x5C]|%((2[25])|([Cc][2-9A-Fa-f]%[89A-Fa-f][0-9A-Fa-f])|([Dd][0-9A-Fa-f]%[89A-Fa-f][0-9A-Fa-f])|([Ee][0-9A-Fa-f](%[89A-Fa-f][0-9A-Fa-f]){2})|([Ff][0-4](%[89A-Fa-f][0-9A-Fa-f]){3})))*$'

which, IIRC, matches a safe subset of display strings: it
allows printable ASCII (aside from bare ", \, and %) and
well-formed non-ASCII UTF-8 as pct-escapes (regardless of
current Unicode code point assignments), and disallows
invalid UTF-8 octet sequences.

Alternatively, require that pct-encoding be limited to %22, %25,
and pct-encoded sequences of valid non-ASCII, non-control, UTF-8
octets, as defined by [UTF-8].
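A hypothetical Python rendering of that prose rule (my sketch, not normative text): accept unencoded printable ASCII other than " and %, plus pct-escapes limited to %22, %25, and sequences that decode to valid non-ASCII, non-control UTF-8.

```python
import re

# Hypothetical validator for the prose rule sketched above:
# unencoded printable ASCII except '"' and '%', plus pct-escapes
# limited to %22, %25, and octets of valid non-ASCII UTF-8,
# with all control characters forbidden.
PCT = re.compile(r"%([0-9A-Fa-f]{2})")
LITERAL_OK = re.compile(r"[\x20-\x21\x23-\x24\x26-\x7E]")

def valid_display_string(s: str) -> bool:
    out = bytearray()
    i = 0
    while i < len(s):
        if s[i] == "%":
            m = PCT.match(s, i)
            if not m:
                return False          # stray or malformed escape
            octet = int(m.group(1), 16)
            if octet < 0x80 and octet not in (0x22, 0x25):
                return False          # needless pct-encoded ASCII
            out.append(octet)
            i = m.end()
        elif LITERAL_OK.match(s[i]):
            out.append(ord(s[i]))
            i += 1
        else:
            return False              # bare '"' or control char
    try:
        decoded = out.decode("utf-8") # rejects invalid octet runs
    except UnicodeDecodeError:
        return False
    # also reject DEL and C1 controls smuggled in via pct-escapes
    return not any(0x7F <= ord(c) <= 0x9F for c in decoded)
```

For instance, `valid_display_string("f%c3%bcr")` accepts pct-encoded "ü", while "%41" (needlessly encoded ASCII), "%0D%0A" (CRLF), and "%FF" (invalid UTF-8) are all rejected.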

It's somewhat pedantic, but it guides implementations toward
detecting such errors rather than ignoring them as someone
else's problem. It is also something people can implement
interoperably, unlike a string of arbitrary Unicode
characters (which isn't).

Cheers,

....Roy

Received on Friday, 26 May 2023 19:38:28 UTC