- From: Dave Longley <dlongley@digitalbazaar.com>
- Date: Thu, 1 Nov 2018 13:08:25 -0400
- To: David Chadwick <D.W.Chadwick@kent.ac.uk>, public-credentials@w3.org
On 11/01/2018 12:50 PM, David Chadwick wrote:
> On 01/11/2018 15:51, Dave Longley wrote:
>> On 10/29/2018 06:20 PM, Chris Boscolo wrote:
>>> IMO, it just seems unsafe to allow data that has been signed to be
>>> modified in any way and still produce the same signature.
>>
>> Could you give a concrete example for how this is related to
>> canonicalization? This sounds like a general problem with any
>> signature system -- and I think we all would agree that different
>> data should hash differently and produce different signatures.
>>
>> Canonicalization is about representing information that is
>> semantically the same in just one way; only if you change the
>> meaning of the data should the signature change. Which, I'd argue,
>> is exactly what one wants, particularly for information that has
>> multiple concrete syntax choices, or graph-based information that
>> can be represented in a number of different ways. To put it another
>> way, I'd find it quite frustrating to have information that is
>> semantically the same hash *differently*. That usually makes more
>> work for me.
>
> This is the original X.509 DER way of looking at signing. The
> alternative approach is to say: let the signer encode the data in
> any one of the accepted concrete syntaxes, then sign the encoded
> data. When you receive the signed data, validate the signature on
> the encoded data. Finally, decode the data. But do not expect to be
> able to reconstruct the encoded data and signature; rather, keep a
> copy of the received signed data if you want to pass it on to a
> third party. The trade-off here is storage vs. more complex
> processing.

Yes, that is one of the trade-offs, and I think framing this in terms
of trade-offs is the right way to discuss it. Toward that end, other
trade-offs are documented (or are being documented). It's also
important to discuss mitigations for some of the trade-offs. For
instance, most processing complexity issues can be solved with
(shared, open source) tooling, because the complexity can be buried
as overhead in some other layer. This isn't always the case with
storage complexity, which typically ends up directly impacting the
application developer, as it requires a "logical fork" somewhere.
Trying to bury that in a layer may require making the same or similar
storage choices as other application developers, which is orthogonal
to the original problem.

In other words, when talking about complexity trade-offs, the
questions are usually: Where does the complexity surface? Is it in
the application developer's space, or buried in some other layer? How
leaky are the abstractions used to help avoid contact with the
complexity?

-- 
Dave Longley
CTO
Digital Bazaar, Inc.
http://digitalbazaar.com
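To make the canonicalization point in the thread concrete, here is a
minimal sketch in Python. It uses JSON key ordering and fixed
separators as a toy stand-in for a full canonicalization algorithm;
the documents are made up for illustration and do not come from the
thread.

```python
import hashlib
import json

# Two JSON documents that differ only in key order and whitespace:
# syntactically different bytes, but semantically the same information.
doc_a = '{"name": "Alice", "age": 30}'
doc_b = '{ "age": 30, "name": "Alice" }'

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hashing the raw bytes: the two documents produce different digests,
# so a signature over one would not verify against the other.
print(sha256_hex(doc_a.encode()) == sha256_hex(doc_b.encode()))  # False

def canonicalize(doc: str) -> bytes:
    # Toy canonicalization: parse, then re-serialize with sorted keys
    # and fixed separators so equivalent documents yield identical bytes.
    return json.dumps(json.loads(doc), sort_keys=True,
                      separators=(",", ":")).encode()

# Hashing the canonical form: semantically equal documents now hash
# identically, so a signature survives syntactic re-encoding.
print(sha256_hex(canonicalize(doc_a)) == sha256_hex(canonicalize(doc_b)))  # True
```

Only a change in the data's meaning (e.g., a different age) would
change the canonical bytes, and therefore the digest and signature.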
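The alternative "sign the encoded bytes" approach quoted above can be
sketched the same way. This uses Ed25519 from the pyca/cryptography
package; the key and payload are illustrative, not part of any
protocol discussed in the thread.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()

# Signer: pick any accepted concrete syntax, then sign those exact bytes.
encoded = json.dumps({"name": "Alice", "age": 30}).encode()
signature = signing_key.sign(encoded)

# Receiver: verify the signature over the bytes exactly as received,
# before decoding. verify() raises InvalidSignature on failure.
signing_key.public_key().verify(signature, encoded)
data = json.loads(encoded)

# Do not re-serialize `data` and expect the signature to still verify.
# Instead, keep (encoded, signature) verbatim to pass on to a third party.
stored = {"payload": encoded, "signature": signature}
```

Here the storage cost is the retained original bytes; the processing
cost of canonicalization is avoided, which is the trade-off the
thread describes.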
Received on Thursday, 1 November 2018 17:08:53 UTC