Re: JSON-RPC vs. YASMIN. Was: A Critical Analysis of REST APIs for "Transaction Systems"

> How important is canonicalization in this case?  Why not just keep the
> original raw message bytes around for whenever you need to verify the
> signature?

When I wrote my email, it looked like this message would have to be
assembled at the endpoint from various bits and pieces, so there was no
obvious preexisting format to use.

However, Ryan's comment led us to think more about the fact that the
hashed data should match the Interledger packet. That should actually be a
design goal. We'll be talking more about the packet format on the call
tomorrow!

On Tue, Feb 7, 2017 at 9:17 PM Ryan Fugger <arv@ryanfugger.com> wrote:

How important is canonicalization in this case?  Why not just keep the
original raw message bytes around for whenever you need to verify the
signature?

On Tue, Feb 7, 2017 at 7:19 PM, Stefan Thomas <stefan@ripple.com> wrote:

Great point, Tony! I think objecthash is a really good candidate for us to
adopt for the payment request hashing used in IPR[1] and KEP[2].

[1]
https://github.com/interledger/rfcs/blob/master/0011-interledger-payment-request/0011-interledger-payment-request.md

[2] https://gist.github.com/sharafian/df7a4b7e2ff000248800b113f06f549a
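
To make the idea concrete, here is a minimal sketch of what hashing a
payment request might look like. The field names and the hash helper are
illustrative stand-ins (canonical JSON plus SHA-256 rather than the real
objecthash encoding), not the actual IPR or KEP formats; the only point is
that sender and receiver derive the same digest from the same logical
object, regardless of how the JSON was re-serialized in transit.

    # Sketch only: field names and hash function are stand-ins, not the
    # IPR/KEP wire format or the real objecthash algorithm.
    import hashlib
    import json

    def request_digest(request):
        """Stand-in structural hash: canonical JSON -> SHA-256."""
        canonical = json.dumps(request, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).digest()

    # Sender side: build the payment request and publish its digest.
    payment_request = {
        "address": "ilpdemo.red.bob",        # hypothetical ILP address
        "amount": "10.25",
        "expires_at": "2017-02-07T22:00:00Z",
    }
    published_digest = request_digest(payment_request)

    # Receiver side: reassemble the request (different key order, different
    # whitespace on the wire) and check that the digest matches.
    reassembled = json.loads(
        '{"expires_at": "2017-02-07T22:00:00Z", "amount": "10.25", '
        '"address": "ilpdemo.red.bob"}'
    )
    assert request_digest(reassembled) == published_digest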

On Tue, Feb 7, 2017 at 6:31 PM Tony Arcieri <tony@chain.com> wrote:

On Mon, Jan 30, 2017 at 7:49 AM, David Nicol <davidnicol@gmail.com> wrote:

Having just read that linked document, it seems like the missing piece is a
requirement for normalizing the JSON somehow before computing the digest
that will get signed. Strong normalization before digesting is needed to
have meaningful signatures on JSON data. This can mean concatenating some
subset of the message's elements in a particular order -- essentially
rewriting it in Bencode, just to sign it -- or normalizing the JSON so that
the consumer can renormalize the data structure they receive, check its
digest, and then check its signature.
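
As a concrete illustration of that normalize-then-sign flow (not a proposal
for any particular canonical form), here is a short sketch. The
normalization choices (sorted keys, compact separators, UTF-8) and the use
of HMAC-SHA256 as a stand-in for a real public-key signature are
assumptions made only to keep the example self-contained.

    # Sketch of canonicalize -> digest -> sign, with the consumer
    # renormalizing before checking the digest and then the signature.
    # HMAC-SHA256 stands in for a real signature scheme.
    import hashlib
    import hmac
    import json

    SHARED_KEY = b"demo-key"  # stand-in for the signer's key material

    def normalize(obj):
        """Deterministic serialization: key order and whitespace no longer matter."""
        return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                          ensure_ascii=False).encode("utf-8")

    def sign(obj):
        digest = hashlib.sha256(normalize(obj)).digest()
        signature = hmac.new(SHARED_KEY, digest, hashlib.sha256).digest()
        return digest, signature

    def verify(received_json, digest, signature):
        # Renormalize whatever JSON arrived, check the digest, then check
        # the signature over that digest.
        recomputed = hashlib.sha256(normalize(json.loads(received_json))).digest()
        if recomputed != digest:
            return False
        expected = hmac.new(SHARED_KEY, recomputed, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    message = {"to": "bob", "amount": "5", "memo": "coffee"}
    digest, signature = sign(message)
    # Same data, different key order and whitespace on the wire:
    wire = '{ "memo": "coffee", "amount": "5", "to": "bob" }'
    assert verify(wire, digest, signature)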


There's an alternative to canonicalization: content-aware hashing that's
independent of the encoding.

Some examples are:

   - Ben Laurie's objecthash: https://github.com/benlaurie/objecthash
   - Peter Todd's proofmarshal:
   https://github.com/petertodd/python-proofmarshal/blob/master/__init__.py
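
For readers who haven't looked at those projects, a simplified sketch of
the idea follows. It is not wire-compatible with objecthash or proofmarshal
(both define their own type tags, number handling, Unicode normalization,
and redaction features); it only shows the core move: hash the structure
itself, so the result is independent of key order, whitespace, or any
particular JSON encoding.

    # Simplified content-aware hash in the spirit of objecthash; NOT
    # compatible with the real objecthash output.
    import hashlib
    import json

    def _h(tag, payload):
        return hashlib.sha256(tag + payload).digest()

    def struct_hash(value):
        if value is None:
            return _h(b"n", b"")
        if isinstance(value, bool):          # must come before the int check
            return _h(b"b", b"1" if value else b"0")
        if isinstance(value, (int, float)):
            return _h(b"f", repr(float(value)).encode())  # crude number handling
        if isinstance(value, str):
            return _h(b"u", value.encode("utf-8"))
        if isinstance(value, list):
            return _h(b"l", b"".join(struct_hash(v) for v in value))
        if isinstance(value, dict):
            # Hash each key/value pair, then sort the pair hashes so the
            # original key order is irrelevant.
            pairs = sorted(struct_hash(k) + struct_hash(v)
                           for k, v in value.items())
            return _h(b"d", b"".join(pairs))
        raise TypeError("unhashable type: %r" % type(value))

    # Two different encodings of the same object hash identically:
    a = json.loads('{"amount": 10, "to": "bob"}')
    b = json.loads('{ "to": "bob",\n  "amount": 10 }')
    assert struct_hash(a) == struct_hash(b)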
