Re: Call for Adoption: Encrypted Content Encoding

> On 1 Dec 2015, at 13:01, Eliot Lear <lear@cisco.com> wrote:
> 
> Hi Cory,
> 
> You're rebutting a rather blunt straw man argument that is not my
> position. There are probably a great many alternatives to issuing a
> blanket 415 Unsupported Media Type.  We needn't get into an exact
> solution now, nor perhaps even in the working group.  But stating a
> risk as well as mitigations is a normal practice in a specification and
> it should be followed here as well.  That is all I am suggesting.

On that we definitely agree. =)

>> Let me point out one additional thing: for each intermediary/server that wants to remove malware from a transmitted entity, there’s at least one that would happily *add* it. Replacing an entity with a malware-ridden one is not all that difficult when the body is unencrypted, but it is substantially harder when the body is encrypted, because the encryption specified here is relatively resistant to tampering and modification. Even wholesale replacement of the body only works if the key is compromised. It is possible to argue that this is safer from a malware perspective, because now the only risk of malware is from entities possessing the key.
> 
> I'm sympathetic to the draft being adopted, but your argument above is a
> little off.  Today the Internet is full of intermediaries that do not
> add malware, and would not happily add malware.  You yourself are using
> just such an intermediary for your own email.  Even if we assume that
> you meant to say that there is a threat that intermediaries can
> introduce malware, my concern is more that a server will end up serving
> up malware without any knowledge of doing so.  This is a separate threat
> to those who retrieve information.  And you seem to agree because you
> write…

Well, let’s stop for a minute. Your concern is that servers will end up serving malware with no knowledge of doing so: fine. I am quite deliberately minimising this concern because I don’t personally believe it’s that likely, but I grant you that it’s *more likely* than it was prior to this draft. I don’t think it’s *much* more likely, probably only by a fraction of a percentage point, but definitely more likely.

However, you follow on to write:

> My point is that with this functionality it doesn't take a
> criminal mastermind to come up with some pretty scary scenarios
> involving an 0wn3d publisher or 3 or 300 or 300,000.

And this is valid: it doesn’t take a criminal mastermind. However, that misses the wood for the trees: you can do *exactly that* today. It’s no harder now than it would be with this draft in place; if anything, it may be easier.

Remember, a server can serve whatever the hell it chooses. Do some servers screen their resources for malware? Presumably some do, but I bet far more don’t. If a server allows user A to upload arbitrary thing X and serves it to user B, then we should assume it is at risk of distributing malware. Given that there is no 100%-perfect malware detection system, we should therefore assume that *any* system that behaves this way is potentially at risk. Encrypting those payloads makes it easier to trick a server that actively scans for malware, sure, but it also potentially eliminates (or restricts) the ability to mount a parallel attack: intercepting thing X at any of its multiple transmission hops and replacing it with malware-ridden thing Y. With this draft, the only entities capable of attacking me are those that have access to my keys, which (I hope) will be fewer than the number of entities that could attack an unencrypted payload in flight or at rest.
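To make that tamper-resistance point concrete, here’s a minimal sketch in Python using the "cryptography" package. It uses AES-GCM directly as a stand-in for the draft’s AEAD-based content encoding, and it skips the draft’s record framing and key derivation entirely, so the key and nonce handling here is illustrative only. What it demonstrates: an on-path attacker who flips even a single bit of the ciphertext causes authentication to fail at the receiver, so swapping in thing Y requires the key, not just network position.

    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The key is shared only by the communicating peers;
    # intermediaries never see it.
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)

    ciphertext = AESGCM(key).encrypt(nonce, b"thing X", None)

    # An on-path intermediary flips a single bit in transit...
    tampered = bytearray(ciphertext)
    tampered[0] ^= 0x01

    try:
        AESGCM(key).decrypt(nonce, bytes(tampered), None)
    except InvalidTag:
        print("tampering detected: payload rejected, not decoded")

Wholesale replacement of the body fails authentication in exactly the same way, which is why the only remaining attackers are the key holders.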

The real root of our disagreement, I think, is whether we trust the intermediaries more than the communicating peers (where intermediaries are any machine/network that touched the document, and the communicating peers are those that possess the encryption keys). I do not believe that trusting those intermediaries, either to protect me from malware or to not actively insert malware into my data stream, is a thing I should do on the modern web. For that reason, I don’t really care that a server can’t scan documents for malware before serving them to me, because if that’s my only defence against malware then I’m screwed anyway: it’s only a matter of time before I hit a hostile server or network that deliberately serves malware.

Having malware scans increases defence in depth, sure, but I don’t believe it’s a panacea, and I don’t believe that the need to sacrifice it is a slam-dunk reason to avoid promoting this draft.

Generally speaking, all we can do is reduce the attack surface available to malicious or naive actors. HTTPS does this by making it harder for intermediaries to attack users. This draft would also make it harder for servers, or HTTPS-terminating network hops, to attack users. In my mind, that’s a net win: with this in place, I only need to defend against actors that have the key I’m using.

Now, Eliot, to be entirely fair to you, your request (to mention this problem in Section 6) is totally reasonable, and I’m in favour of it. I’d also like us to investigate whether draft-thomson-http-content-signature or something equivalent can be folded into this proposal to help reduce the risk of the attack I’m concerned about, as you have helpfully suggested. However, Walter has suggested that “THIS DRAFT IS NONSENS” because of this trade-off, and I wanted to make a clear and coherent case for why I believe that is untrue. I remain +1 on the intent of this draft, though I believe we need to flesh out Section 6 substantially.

Cory

Received on Tuesday, 1 December 2015 13:33:48 UTC