RE: Why use DER rather than BER?

 

 

From: Ryan Sleevi [mailto:sleevi@google.com] 
Sent: Wednesday, March 30, 2016 10:28 PM
To: Jim Schaad <ietf@augustcellars.com>
Cc: public-webcrypto@w3.org
Subject: Re: Why use DER rather than BER?

 

 

 

On Wed, Mar 30, 2016 at 5:55 PM, Jim Schaad <ietf@augustcellars.com> wrote:

I was not around when this decision was made, but I am curious why it was
decided that we should do DER encoding and decoding rather than the more
natural BER.  Since DER is a proper subset of BER, the statement that you
need to have both a DER and a BER decoder seems to be wrong: a BER decoder
would successfully decode DER with no problems.  I do not know of any
security reason why the document should prefer DER to BER.  It is not as if
we care whether there is a single encoding for a specific value; we are
not signing the output at all.  The current requirement of having DER
decoders means that there are going to be some private keys that were
exported from a random source that will not import into the WebCrypto world,
since they are BER encoded (as the spec permits) and thus cannot be
successfully parsed.

I don't necessarily want to change the decision, but it would seem that
many of Ryan's objections disappear if we allow for BER decoders.  This
would place no requirements on encoders - they could still be required
to emit DER if that were desired.

Jim

 

I... struggle a lot with how best to reply to this mail, Jim, because I would hope it would be understandably obvious why one would be preferable, especially given your involvement in the security space for as long as you have been. If the tone in this email comes off wrong, I'd like to preemptively apologize, but I am admittedly quite boggled as to how the question came to be.

 

You're correct that a BER decoder can (or should be able to) decode DER, although the converse is not true - a DER decoder cannot decode BER.

 

As such, to support BER, an implementation would be required to contain either:

1) A BER encoder and a BER decoder, or

2) A DER encoder and a BER decoder

 

Now, I fundamentally disagree with the acceptance that "there are going to be some private keys that were exported from a random source that will not import into the WebCrypto world" is a valid use case. There are many awful encoders, as you mentioned with respect to certificates, and from a Chrome implementation side, there is absolutely zero interest in, or plan for, accepting that as a justification to design a new system with bugs and duct tape. We (Chrome) are actively working to remove the acceptance of such certificates and keys in other spaces. As it relates to other systems, quite simply, my (personal) view is that if you have such systems, they should adapt or perish. You don't get to use WebCrypto with your awful, broken encoder.

This is fundamentally no different from the decision not to support MD5 or 3DES in the API - having a legacy system that expects MD5/3DES is not sufficient justification in and of itself for support, and having a broken system spitting out improperly encoded (or BER-encoded, which I'm building to, I promise) data is not sufficient justification in and of itself either. As mentioned on the other thread, if you need to massage your data, you can use JavaScript on the client side or filter it on the server side before sending. This position, I'll note, is wholly aligned with the Extensible Web Manifesto - we give the building blocks, but there's no intrinsic reason why the browser needs to handle "Awful Legacy X", especially if you can polyfill (securely) in JS.

 

[JLS] Given that the specification for private keys states that you can BER encode them, I can produce private keys from my third party library which are not using a broken encoder and will not be accepted.  Given that there is no indication that the item is BER encoded in an error message, I guess some people are going to have to do the decode/encode thing in JavaScript all of the time just to make sure they don’t have a problem. 

 

But that's a bit of a side show (though related) to your question of "Why not accept BER decoders?" The simple answer is that BER is abysmal. It is a pain to implement, and I would go so far as to say that it CANNOT safely be done while adhering to modern software design.

 

Consider that BER allows for indefinite-length encodings, for example. One can construct an encoding that does not terminate. You might not understand why that's a concern, given the current BufferSource API, but it is fundamentally inconsistent with any intent to offer a streaming API. This problem is very similar to the notion of "compression bombs," to which some compression algorithms are more or less susceptible depending on the underlying design of the algorithm.
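A hedged sketch of the hazard (illustrative code of my own, not any library's parser; the naive end-of-contents scan below is deliberately simplistic - it would misfire on nested TLVs whose contents happen to contain 00 00, which only reinforces how hard this is to do correctly):

```python
# With a definite length (DER), a reader knows up front how many content
# bytes to expect.  With the BER indefinite form (length octet 0x80), it
# must scan for an end-of-contents marker (00 00) that a hostile or
# truncated stream can simply omit.

der_seq = bytes([0x30, 0x03, 0x02, 0x01, 0x05])              # SEQUENCE, 3 bytes
ber_seq = bytes([0x30, 0x80, 0x02, 0x01, 0x05, 0x00, 0x00])  # same, indefinite
truncated = ber_seq[:-2]                                     # EOC never arrives

def content_bounds(buf):
    """Return (start, end) of the outer TLV's contents, or None on failure."""
    if buf[1] != 0x80:                       # definite length: bounded up front
        return 2, 2 + buf[1]
    i = 2                                    # indefinite: hunt for 00 00
    while i + 1 < len(buf):
        if buf[i] == 0x00 and buf[i + 1] == 0x00:
            return 2, i
        i += 1
    return None                              # input exhausted, no EOC found

print(content_bounds(der_seq))    # (2, 5)
print(content_bounds(ber_seq))    # (2, 5)
print(content_bounds(truncated))  # None
```

For the definite-length form a streaming consumer can allocate and bound its work immediately; for the indefinite form it must buffer an unbounded amount of input while hoping a terminator eventually shows up.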

 

[JLS] And you have basically demonstrated why you do not want to support doing authenticated decryption using streaming interfaces as well, since, for security reasons, you should not return the plaintext until after you have done the authentication.  This means that you have the same type of building-up-a-huge-buffer problem that you appear to be complaining about above.  Additionally, I would not expect any of the import/export routines to ever switch to being streamed, but that is beside the point.

 

BER allows you to mix definite- and indefinite-length encodings - such as creating CONSTRUCTED strings (which are indefinite length) made up of multiple substrings of the same type (which are definite length). This gets especially hairy when you consider BITSTRINGs and the use of unused bits - you can easily create a CONSTRUCTED BITSTRING whose segments have 3, 7, 3, 5 unused bits, and then have to worry even more about alignment and correction. I know that neither NSS nor CryptoAPI handles this case - even though it's fully legal in BER.

 

To encode a bitstring value in this way, it is segmented. Each segment shall consist of a series of consecutive bits of the value, and with the possible exception of the last, shall contain a number of bits which is a multiple of eight. Each bit in the overall value shall be in precisely one segment, but there shall be no significance placed on the segment boundaries.

 

[JLS] No, that is illegal BER encoding.
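For what it's worth, the constraint being quoted can be illustrated with a small reassembly sketch (my own illustrative code, not any library's API): because every segment except possibly the last must be a whole number of octets, only the final segment of a constructed BIT STRING may carry a nonzero unused-bits count.

```python
# Each primitive BIT STRING segment is: one unused-bits octet, then data.
# Per the quoted rule, a nonzero unused-bits count is only legal in the
# final segment; a conforming reassembler has to enforce that.

def join_bitstring_segments(segments):
    """Flatten constructed BIT STRING segments into (unused_bits, data).

    Raises ValueError if a non-final segment has unused bits (illegal BER).
    """
    data = b""
    for k, seg in enumerate(segments):
        unused = seg[0]
        if unused and k != len(segments) - 1:
            raise ValueError("only the last segment may have unused bits")
        data += seg[1:]
    return segments[-1][0], data

# Legal: 16 full bits, then 3 bits (5 unused bits, in the last segment only)
legal = [bytes([0x00, 0xAB, 0xCD]), bytes([0x05, 0xE0])]
print(join_bitstring_segments(legal))   # (5, b'\xab\xcd\xe0')

# Illegal (the "3, 7, 3, 5" style): unused bits in a non-final segment
illegal = [bytes([0x03, 0xE0]), bytes([0x05, 0xE0])]
try:
    join_bitstring_segments(illegal)
except ValueError as e:
    print(e)                            # only the last segment may have unused bits
```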

 

While I can't speak for other browsers, I know that NSS would like to move away from exposing BER in any API except those directly related to S/MIME. As I mentioned in my previous mail, the past 3 NSS CRITICAL-severity bugs - bugs that allowed for full remote code execution - were in the BER decoder. These bugs existed for years. If you study the history of msasn1.dll, you will see that Microsoft had a similar decade of RCE hell. You can find similar stories in virtually every library that tries to write a compliant BER parser.

 

[JLS] I will have to go back and look at this, I do not remember that you stated that this was the case.  

 

What you end up with, then, as a result, is non-compliant BER parsers. And now we're back to the central problem, which is that differences in behaviour are meaningful and non-interoperable.

 

There is no legitimate reason to support BER. Chrome will not support BER parsing - it is too open-ended, too risky for security, and there is zero utility for a BER parser, in practice, outside of the CMS/S-MIME space. If you survey cryptographic libraries, you will find this to be true for decoding and encoding alike.

 

To that end, NSS and CryptoAPI are "special snowflakes", and they only have the (limited, and in the case of NSS, unacceptable-for-security) BER parsers they do to support CMS/SMIME, not because of SPKI/PKCS8.

Received on Thursday, 31 March 2016 14:38:30 UTC