- From: Richard Barnes <rlb@ipv.sx>
- Date: Wed, 5 Mar 2014 07:34:51 +0000
- To: Ryan Sleevi <sleevi@google.com>
- Cc: Mark Watson <watsonm@netflix.com>, "public-webcrypto@w3.org" <public-webcrypto@w3.org>
- Message-ID: <CAL02cgT-NEXrSw3A0kN4SpbadEFx12hD7hZdOqjLKbqT9NNe5w@mail.gmail.com>
Fair enough on the WebIDL RTT and ambiguity around "append".

The critical thing I was trying to get at is that, IIUC, the UA is supposed to convert |r| and |s| to *byte* strings, in the same form as for BigInteger, then return the concatenation of those two byte strings.

On Wed, Mar 5, 2014 at 2:38 AM, Ryan Sleevi <sleevi@google.com> wrote:

> Then it's equally unspecified - what does it mean to "append an
> ArrayBuffer[View] to result"?
>
> I find the original definition clearer because it doesn't require a
> round trip through a WebIDL type (that someone may have messed with).
>
> On the question of HMAC bits vs. bytes, this is an example of a change
> that would be disastrous to make if any implementations had shipped
> (e.g., sans prefix). Microsoft's choice of prefixing, along with the
> vastly different nature of the API, hopefully means that few people
> would run into the edge case of using the same HmacParams dictionary
> for (previous version, current version) of the spec.
>
> I'm ambivalent on the change itself, and since no one has (to my
> knowledge) shipped, it's "probably" safe to make.
>
>
> On Tue, Mar 4, 2014 at 1:47 AM, Richard Barnes <rlb@ipv.sx> wrote:
>
>> After taking a look at how this shows up in the spec, I think I'm OK
>> with bits. It seems to me that in all cases where it matters, the bit
>> length is constrained in the way you note.
>>
>> One related thing I noticed in the ECDSA definition:
>> "Convert r to a bitstring and append the sequence of bytes to result"
>>
>> It might be helpful to state this in a way that makes clear (by
>> reference) what happens, for example:
>> "Convert r to a BigInteger and append it to result"
>>
>> And likewise for "s".
>>
>> --Richard
>>
>>
>> On Tue, Mar 4, 2014 at 12:13 AM, Mark Watson <watsonm@netflix.com> wrote:
>>
>>> Yes, well, since HMAC is the odd one out, it seemed changing that
>>> might be simpler from the point of view of all the existing
>>> implementations, test suites, etc. Also, it's more common to refer to
>>> the number of bits in a key (e.g. AES-128) than bytes. In those cases
>>> you have to check for the specific values 128, 192, 256, not just for
>>> byte alignment.
>>>
>>> So, I still prefer bits.
>>>
>>> ...Mark
>>>
>>> Sent from my iPhone
>>>
>>> On Mar 3, 2014, at 3:59 PM, Richard Barnes <rlb@ipv.sx> wrote:
>>>
>>> Agreed, we should be uniform. Typed arrays are all byte oriented, so
>>> it seems like aligning on BYTES (literally) would result in less
>>> ambiguity. Otherwise you have to say how you pack leftover bits.
>>>
>>>
>>> On Monday, March 3, 2014, Mark Watson <watsonm@netflix.com> wrote:
>>>
>>>> https://www.w3.org/Bugs/Public/show_bug.cgi?id=23159
>>>>
>>>> The length property of an algorithm is everywhere specified as the
>>>> length in BITS, except for HMAC, which defines it as the length in
>>>> BYTES.
>>>>
>>>> The proposal is to align on BITS.
>>>>
>>>> Any objections?
>>>>
>>>> ...Mark
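For illustration, a minimal sketch of the conversion described above: encode r and s as fixed-width, big-endian byte strings (the same unsigned form used for BigInteger) and return their concatenation. The curve size of 32 bytes per component assumes P-256, and the helper names are illustrative rather than spec-defined.

```typescript
// Encode a non-negative bigint as a big-endian octet string of exactly
// `length` bytes (the unsigned form used for BigInteger values).
function toOctetString(value: bigint, length: number): Uint8Array {
  const out = new Uint8Array(length);
  for (let i = length - 1; i >= 0; i--) {
    out[i] = Number(value & 0xffn);
    value >>= 8n;
  }
  return out;
}

// For a P-256-sized curve, r and s each occupy 32 bytes; the signature is
// their concatenation, r || s (64 bytes total).
function encodeEcdsaSignature(r: bigint, s: bigint, curveBytes = 32): Uint8Array {
  const result = new Uint8Array(curveBytes * 2);
  result.set(toOctetString(r, curveBytes), 0);
  result.set(toOctetString(s, curveBytes), curveBytes);
  return result;
}
```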
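Mark's point about bit-valued lengths can likewise be sketched briefly: with bits you validate against the enumerated values directly, and byte alignment follows for free. The constant and function names below are assumptions for illustration, not from the spec.

```typescript
// Allowed AES key lengths, expressed in bits as the thread proposes.
const AES_KEY_LENGTHS_IN_BITS = [128, 192, 256];

// Checking the enumerated bit lengths implicitly covers byte alignment,
// so no separate alignment test is needed.
function isValidAesKeyLength(lengthInBits: number): boolean {
  return AES_KEY_LENGTHS_IN_BITS.includes(lengthInBits);
}
```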
Received on Wednesday, 5 March 2014 07:35:19 UTC