- From: <bugzilla@jessica.w3.org>
- Date: Fri, 21 Nov 2014 19:53:08 +0000
- To: public-webcrypto@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=27402
Bug ID: 27402
Summary: Specify the behavior when returning an octet string
with a particular _bit_ length
Product: Web Cryptography
Version: unspecified
Hardware: PC
OS: Linux
Status: NEW
Severity: normal
Priority: P2
Component: Web Cryptography API Document
Assignee: sleevi@google.com
Reporter: ericroman@google.com
CC: public-webcrypto@w3.org
There are a few places in the spec where octet strings are used either as input
or as output, but not all bits in the string are relevant.
For instance (see the sketch after this list):
* Importing an HMAC key where the length is not a multiple of 8 bits
* Exporting an HMAC key whose length is not a multiple of 8 bits
* Deriving bits for ECDH, using a length that is not a multiple of 8 bits
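To make these concrete, here is a rough TypeScript/WebCrypto sketch of the calls
involved. The key sizes, the P-256 curve, and the assumption that an
implementation accepts these lengths at all are only illustrative; what such
calls return for the unused bits is exactly what this bug asks the spec to pin
down.

  async function sketch(): Promise<void> {
    // Import an HMAC key whose declared length (124 bits) is not a multiple of 8.
    const raw = new Uint8Array(16); // 128 bits of key material
    const hmacKey = await crypto.subtle.importKey(
      "raw",
      raw,
      { name: "HMAC", hash: "SHA-256", length: 124 },
      true,
      ["sign", "verify"],
    );

    // Export it again: are the 4 unused bits echoed, zeroed, or arbitrary?
    const exported = new Uint8Array(await crypto.subtle.exportKey("raw", hmacKey));
    console.log(exported[exported.length - 1].toString(2));

    // Derive 12 bits (not a multiple of 8) for ECDH.
    const alice = await crypto.subtle.generateKey(
      { name: "ECDH", namedCurve: "P-256" }, false, ["deriveBits"]);
    const bob = await crypto.subtle.generateKey(
      { name: "ECDH", namedCurve: "P-256" }, false, ["deriveBits"]);
    const bits = new Uint8Array(await crypto.subtle.deriveBits(
      { name: "ECDH", public: bob.publicKey }, alice.privateKey, 12));
    console.log(bits); // presumably 2 octets, the last with 4 unused bits
  }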
The spec is ambiguous about exactly how these unused bits are handled. This could lead to
implementation incompatibilities if users rely on the behavior chosen by a
particular implementation.
For instance, consider these scenarios (a probe for the first one is sketched after this list):
* Import an HMAC key using data = [0xff] and length=1 bit. When exporting
that key, implementations could return any of the following key values:
[0xff] (the exact octet stream imported)
[0x80] (the unused bits having been zeroed out)
[0x84] (or any other value where the first bit is set and the unused bits are arbitrary)
* When importing an HMAC key whose unused bits are not zero, we could
consider treating this as an error to catch potential misuse?
* When deriving 12 bits for ECDH, it is natural for an implementation to
return the same thing as when deriving 16 bits. However, nothing in the spec
mandates this. If another implementation decided to zero out those last
4 bits while users had become reliant on the first behavior, the two would
not interoperate.
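For what it's worth, a probe for the first scenario could look roughly like the
following (TypeScript/WebCrypto again, and again assuming an implementation
accepts length=1 in the first place):

  async function probeOneBitKey(): Promise<string> {
    const key = await crypto.subtle.importKey(
      "raw",
      new Uint8Array([0xff]),
      { name: "HMAC", hash: "SHA-256", length: 1 },
      true,
      ["sign"],
    );
    const out = new Uint8Array(await crypto.subtle.exportKey("raw", key));
    // Under the current text, out[0] could legitimately come back as 0xff
    // (the imported octet echoed), 0x80 (unused bits zeroed out), or any
    // other value whose first bit is set.
    return out[0].toString(16);
  }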
My recommendation is to mandate that, when returning an octet string, any
unused bits be set to zero.
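Concretely, the normative step could be something like the following masking
helper (hypothetical, purely to illustrate the proposal; "first bit" here means
the most significant bit of an octet, as in the examples above):

  function zeroUnusedBits(octets: Uint8Array, bitLength: number): Uint8Array {
    const out = octets.slice();
    const usedBitsInLastOctet = bitLength % 8;
    if (usedBitsInLastOctet !== 0) {
      const lastIndex = Math.ceil(bitLength / 8) - 1;
      // Keep the high usedBitsInLastOctet bits of the final octet, clear the rest.
      const mask = (0xff << (8 - usedBitsInLastOctet)) & 0xff;
      out[lastIndex] &= mask;
    }
    return out;
  }

  // zeroUnusedBits(new Uint8Array([0xff]), 1)        -> [0x80]
  // zeroUnusedBits(new Uint8Array([0xab, 0xcd]), 12) -> [0xab, 0xc0]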
--
You are receiving this mail because:
You are on the CC list for the bug.
Received on Friday, 21 November 2014 19:53:15 UTC