- From: Dennis Jackson <ietf@dennis-jackson.uk>
- Date: Thu, 9 Feb 2023 17:24:51 +0000
- To: ietf-http-wg@w3.org
- Message-ID: <9978dd10-0dbd-8352-7080-38ead3d1dde4@dennis-jackson.uk>
Hi Justin, responses inline:

On 08/02/2023 20:32, Justin Richer wrote:
> First, it’s important to know that this attack relies on there being a
> weakness in the underlying cryptographic primitive: namely, that a
> signature collision can be forced between two different, unrelated
> signature bases.

This 'weakness' is found in all of the digital signature schemes mentioned in the draft: RSA-PSS, RSA-PKCSv1.5, ECDSA and Ed25519 (as standardized in RFC 8032). As far as I am aware, only a variant of Ed25519 with additional checks has been proven [1] not to have this behavior, and it's a coin toss whether Ed25519 libraries do the additional checks or not (e.g. LibSodium does, GoLang doesn't).

> [...] The attack relies on weaknesses in underlying crypto, and this
> spec /intentionally/ does not limit the kinds of digital signature or
> MAC that can be used by an application. It’s entirely reasonable that
> an application that requires the kind of non-repudiation link that
> drove the addition of the request-response binding function would also
> specify a signing algorithm, key properties, and any additional
> parameters that prevent that kind of forgery as well.

If the draft is only going to work with very specific algorithms, it should recommend at least one or two possible choices. As no commonly used signature schemes have the properties you want, this is not possible...

> Where I disagree with Dennis is the proposed mitigation. I do not
> think it is reasonable to special-case the “signature” field in the
> way that is proposed. While doing so would mitigate the specific
> problem here, I believe there are several major problems with this:
> [reordered for clarity]
>
> - Special casing the “signature” field begs the question of whether
> other fields could be forged in a similar fashion (content-digest, for
> instance?), causing related but slightly different attacks.

This kind of problem is unique to signatures. Cryptographic hashes need to be collision-resistant for all inputs, so fields like Content-Digest don't have this issue.

> - Adding the original signature base in verbatim to the old signature
> base is going to create a component value with newline characters.
> These characters are explicitly disallowed in order to prevent other
> forms of construction attacks, where the attacker can force in one
> value that looks like it’s terminating a previous value to overwrite
> something else.
>
> - Furthermore, I think there’s an alternative approach which could fit
> the work of an extension: have a new derived component with the
> behavior listed in the proposal below. For argument’s sake let’s call
> this “@signature” and we can even have it take a parameter to select
> which signature of the target message is being called upon. The
> component value of this derived component would be the signature base
> of the target signature, re-generated and then encoded, perhaps as an
> sf-string item to escape newlines and quotes and other potentially
> problematic items, or even enforce use of the “bs” flag that would
> give us the encoding properties we’re after. That way there’s no
> special casing of the field, it’s a production rule of the derived
> component definition that’s either supported or not by the
> implementation — and any derived components that aren’t recognized
> cause an error and cessation of processing of the signature base.
> Annabelle and I discussed doing exactly this a while ago but rejected
> it because of the additional complexity it adds on top of just signing
> a header like any other header.

Happy to wordsmith the encoding. Watson suggested using a hash to compress the base as well, which might also be attractive to reduce the memory footprint on the requestor, or your sketch of "@signature" would work. However, this can't be an extension; it needs to replace the current mechanism.
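To make the encoding question concrete, here is a rough sketch of the two options in Python (purely illustrative, the example base and values are invented, and none of this is proposed spec text):

import base64
import hashlib

# A regenerated inner signature base contains newlines, so it cannot be
# dropped into the outer base verbatim.
inner_base = (
    '"@method": POST\n'
    '"@authority": example.com\n'
    '"@signature-params": ("@method" "@authority");created=1675900000'
)

# Option 1: carry the whole inner base as a byte sequence, i.e. the
# single-line, base64 encoding the "bs" flag would give us.
as_byte_sequence = ":" + base64.b64encode(inner_base.encode()).decode() + ":"

# Option 2 (Watson's suggestion): hash the inner base first, so the outer
# base stays a fixed size no matter how much the request signature covered.
as_digest = ":" + base64.b64encode(
    hashlib.sha256(inner_base.encode()).digest()).decode() + ":"

Either shape works for me; the property that matters is that the component value is derived from the re-generated inner base rather than from the opaque signature bytes.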
> - Special casing a single field like this is bound to get missed by
> implementations and libraries, and I fully believe handling of this
> would be iffy at best. Saying that developers will always rely on a
> full-blown library to handle signature generation is, I believe, not
> realistic. While I think it’s preferable for there to be a well-vetted
> and supported library dedicated to a core function, the truth is that
> people out there are going to be implementing this inline in a lot of
> places because the signature base generation is ultimately about
> deterministic string production and off-the-shelf libraries can be
> used for the crypto primitives.
>
> - We can call out the problem case with sufficient detail to let
> application developers decide what to do with their application of
> HTTP Signatures. This spec was always intended to be somewhere in the
> middle of an application stack, with applications defining the
> properties they need in the end result. Adding this kind of complexity
> feels like a pretty major foot-gun that would be better addressed in a
> wholistic manner up the stack.

I think these are actually very strong arguments in favor of automatically handling the signature field as I suggested:

1. If you think libraries are likely to miss this special case, application developers are doubly likely to miss it. *The harder this appears to be to get right, the more important it is to fix it in the IETF spec.*

2. If libraries or inline implementations do miss the special-case handling, they'll fail to interoperate and so discover their mistake before production. If it's left to application designers, they'll interop fine and omitted request fields won't be discovered until someone files the CVE. *Failing to interoperate with insecure implementations is a feature, not a bug.*

3. If we fix HTTP Message Signatures, all the drafts depending on it can stay the same. Otherwise, you need to update every downstream spec like FAPI Message Signing [2] and repeat the same advice in each: "Ensure every signed part of the request is included in the responder's signature." *Let's get this right here, so that no one else has to worry about it.*

I'd also point out that the implementation effort required to do this automatically is tiny, as each library already has to have a function for building a signature base for a message, which is used in both ordinary signing and verification. My proposed change is just calling this function once more when processing a signature component (rough sketch below).
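To illustrate how small that change is, a toy sketch in Python (all names are invented; this is not code from the draft or any existing library):

import base64

def build_signature_base(covered, values, signature_params, inner_bases=None):
    # covered: ordered component identifiers, e.g. ["@status", "signature"]
    # values: the component values for this message
    # inner_bases: for each covered signature field, the (covered, values,
    #   signature_params) needed to rebuild that signature's own base
    lines = []
    for identifier in covered:
        if inner_bases and identifier in inner_bases:
            # The one new step: rebuild the target signature's base with the
            # same function and fold it in as a single base64-encoded line,
            # instead of using the opaque signature bytes as the value.
            inner = build_signature_base(*inner_bases[identifier])
            value = ":" + base64.b64encode(inner.encode()).decode() + ":"
        else:
            value = values[identifier]
        lines.append(f'"{identifier}": {value}')
    lines.append(f'"@signature-params": {signature_params}')
    return "\n".join(lines)

# Example: a response signature that covers the request's signature.
request_args = (
    ["@method", "@authority"],
    {"@method": "POST", "@authority": "example.com"},
    '("@method" "@authority");created=1675900000;keyid="req-key"',
)
response_base = build_signature_base(
    ["@status", "signature"],
    {"@status": "503", "signature": "(unused when expanded)"},
    '("@status" "signature");created=1675900001;keyid="resp-key"',
    inner_bases={"signature": request_args},
)

Both signing and verification already go through this function, so both sides pick up the behaviour from the same small change.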
Best,

Dennis

[1] https://dennis-jackson.uk/assets/pdfs/ed25519.pdf
[2] https://openid.bitbucket.io/fapi/fapi-2_0-message-signing.html

Received on Thursday, 9 February 2023 17:25:11 UTC