Re: Bug 23159 - Inconsistent "length" property when generating keys (bits vs bytes)

After taking a look at how this shows up in the spec, I think I'm OK with
bits.  It seems to me like in all cases where it matters, the bit length is
constrained in the way you note.
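For example (a minimal sketch with a hypothetical helper name; this is
illustrative, not spec text), a bit-denominated "length" is easy to
constrain:

    // Hypothetical validator: "length" is interpreted in BITS throughout.
    function validateKeyLengthBits(algorithm: string, length: number): void {
      if (algorithm === "AES-CBC" || algorithm === "AES-GCM") {
        // AES admits exactly three bit lengths; the equivalent byte
        // counts (16, 24, 32) encode the same constraint less directly.
        if (![128, 192, 256].includes(length)) {
          throw new DOMException("Invalid AES key length", "OperationError");
        }
      } else if (algorithm === "HMAC") {
        // A bit-denominated HMAC length just needs an explicit
        // byte-alignment check when the key is materialized.
        if (length % 8 !== 0) {
          throw new DOMException("Unsupported HMAC length", "OperationError");
        }
      }
    }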

One related thing I noticed in the ECDSA definition:
"Convert r to a bitstring and append the sequence of bytes to result"

It might be helpful to state this in a way that makes clear (by
reference) what happens, for example:
"Convert r to a BigInteger and append it to result"

And likewise for "s".
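
To make the ambiguity concrete (a minimal sketch with an assumed helper,
not spec text): the conversion is only well-defined once the output width
is fixed, e.g. by left-padding each integer to the byte length of the
curve order so that leading zeros are preserved:

    // Hypothetical helper: fixed-width, big-endian octet-string conversion.
    function bigIntToOctets(value: bigint, byteLength: number): Uint8Array {
      const out = new Uint8Array(byteLength); // zero-filled, so leading zeros survive
      for (let i = byteLength - 1; i >= 0; i--) {
        out[i] = Number(value & 0xffn); // take the low-order byte
        value >>= 8n;                   // shift it out and continue
      }
      return out;
    }

    // For P-256, r and s would each occupy 256 / 8 = 32 bytes, and the
    // result is bigIntToOctets(r, 32) followed by bigIntToOctets(s, 32).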

--Richard


On Tue, Mar 4, 2014 at 12:13 AM, Mark Watson <watsonm@netflix.com> wrote:

> Yes, well, since HMAC is the odd one out, it seemed changing that might be
> simpler from the point of view of all the existing implementations, test
> suites, etc. Also, it's more common to refer to the number of bits in a key
> (e.g. AES-128) than the number of bytes. In those cases you have to check
> for specific values (128, 192, 256), not just for byte alignment.
>
> So, I still prefer bits.
>
> ...Mark
>
> On Mar 3, 2014, at 3:59 PM, Richard Barnes <rlb@ipv.sx> wrote:
>
> Agree we should be uniform. Typed arrays are all byte-oriented, so it
> seems like aligning on BYTES (literally) would result in less ambiguity.
> Otherwise you have to say how you pack leftover bits. [a sketch of this
> follows the quoted thread]
>
> On Monday, March 3, 2014, Mark Watson <watsonm@netflix.com> wrote:
>
>> https://www.w3.org/Bugs/Public/show_bug.cgi?id=23159
>>
>> The length property of an algorithm is everywhere specified as the length
>> in BITS, except HMAC, which defines it as the length in BYTES.
>>
>> The proposal is to align on BITS.
>>
>> Any objections?
>>
>> ...Mark
>>
>
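
On the leftover-bits point in the quoted thread: if "length" is in bits
and not byte-aligned, the packing rule does have to be stated explicitly.
A minimal sketch (hypothetical helper, not spec text) of one such rule,
keeping the most significant bits and zeroing the unused trailing bits of
the final byte:

    // Hypothetical truncation: keep the first bitLength bits of data.
    function truncateToBits(data: Uint8Array, bitLength: number): Uint8Array {
      const byteLength = Math.ceil(bitLength / 8);
      const out = data.slice(0, byteLength);
      const remainder = bitLength % 8;
      if (remainder !== 0) {
        // Zero the (8 - remainder) low-order bits of the last byte.
        out[byteLength - 1] &= 0xff << (8 - remainder);
      }
      return out;
    }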

Received on Tuesday, 4 March 2014 09:48:23 UTC