Re: Protection of decrypted data from malicious servers?

On Tue, Mar 25, 2014 at 4:12 PM, Ryan Sleevi <sleevi@google.com> wrote:

>
> On Tue, Mar 25, 2014 at 3:52 PM, James Marshall <james@jmarshall.com> wrote:
>
>> On Tue, Mar 25, 2014 at 2:24 PM, Ryan Sleevi <sleevi@google.com> wrote:
>>
>>>
>>> On Tue, Mar 25, 2014 at 1:57 PM, James Marshall <james@jmarshall.com> wrote:
>>>
>>>> New here... glad to see work happening on this.  I've wanted
>>>> client-side encryption for a while.
>>>>
>>>> In the current draft, is there any protection against a compromised or
>>>> malicious server learning decrypted content, e.g. by having JS that
>>>> decrypts data and then sends that back to the server?  Ideally, client-side
>>>> encryption can protect data from a malicious server.  For example, I'd like
>>>> to see a webmail site with full end-to-end encryption, without making us
>>>> trust the server at all.  CSP helps, but is not a full solution.
>>>>
>>>
>>> No. This is impossible. This is not a valid threat, and not something in
>>> scope for this WG.
>>>
>>
>> Well, fair enough if it's not in scope, but I think it leaves a
>> significant problem unaddressed.  Is secure webmail impossible then?  It's
>> definitely something many people want.
>>
>
> And I want usable cryptography... Er, a pony :)
>
> Is "Secure webmail where you don't trust your webmail provider at all but
> use a security program provided by them" impossible? Yes.
>

I've been assuming someone could use a browser (i.e., the security program
in this case) from a different source than the webmail provider, and also
that key management is on the client side, so the webmail provider never
sees the private keys.
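
To make that concrete, here is a rough sketch using the draft API as I
understand it (promise-based, names and parameters illustrative only): the
key pair is generated in the browser and the private key is marked
non-extractable, so the provider's script can use it but never export it.

    // Sketch only -- assumes the promise-based draft API.
    // The private key is created in the browser and marked non-extractable,
    // so script (including the webmail provider's) can use it but cannot
    // export the raw key material.
    window.crypto.subtle.generateKey(
        {
            name: "RSA-OAEP",
            modulusLength: 2048,
            publicExponent: new Uint8Array([0x01, 0x00, 0x01]),
            hash: { name: "SHA-256" }
        },
        false,                       // extractable: false
        ["encrypt", "decrypt"]
    ).then(function (keyPair) {
        // keyPair.privateKey is usable with decrypt(), but exportKey()
        // on it will fail.
    });

Of course that only protects the key material; it does not stop the
provider's script from decrypting a message and posting the plaintext back,
which is exactly the gap I'm asking about.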


> In the case of open-source (including hardware), I don't believe they
>> require trust in the entity delivering the code (sorry Ken T.).  I can
>> hypothetically trust my hardware, OS, and browser if the source code is
>> audited, and because they live in my home under lock and key.  (Yes, it has
>> to be audited all the way down, including compiler, microcode, etc.-- not
>> saying it's easy.)  Auditing every website I might use is a much larger
>> task.
>>
>
> I don't think we'll be able to have a reasonable debate on this, because
> in the same sentence where you suggest it's entirely reasonable to audit
> your hardware, OS, browser down to the microcode level, you cannot audit
> Javascript for sites you browse.
>
> I don't know how your economies-of-time work there, but there's no way we
> can rationally engage here under that model.
>

I think you're being too dismissive (which is itself irrational).  It's
possible to audit commonly-used tools, but there are many thousands
(millions?) of times more websites than there are such tools.  Auditing the
tools is *much* more feasible, especially if having been audited becomes an
important criterion when users choose them.

Rationality requires stating one's reasons rather than dismissing what the
other person says without giving any.  These are supposed to be open,
informative discussion groups that help developers and that define and
promote good standards.  Maybe I'm wrong in my technical statements, but if
so, I'm here to understand why.


>>>> If this hasn't been addressed, I can think of two possible solutions, neither
>>>> one very good:
>>>>
>>>> 1) Use a kind of "taint", where decrypted data and all data derived
>>>> from it is prevented from being sent back to a server.
>>>>
>>>
>>>> 2) Use HTML to define an element to display decrypted data, without
>>>> allowing JS to access the content of that element.  Something like
>>>>
>>>>     <div id="mysecret" type="encrypted" algorithm="..." ...></div>
>>>>
>>>> ... and something similar for an input field that is to be encrypted
>>>> before JS can access its data.
>>>>
>>>> Am I missing something, and has this been addressed?
>>>>
>>>> Thanks,
>>>> James
>>>>
>>>>
>>> This fundamentally does not work with the Web Security Model, and we are
>>> not attempting to redefine the Web Security Model.
>>>
>>
>> OK, not in scope.  Do you know a good link to the Web Security Model
>> you're referring to (a web search fails me)?  I'd also be interested in how
>> this fundamentally does not work with it.
>>
>
> Same Origin Policy. Anything that is in the same origin has the same
> privilege level and capabilities. Core concept to the web. If you want
> privilege separation, dropping, capabilities, you communicate across
> origins.
>

Yes, I understand what the Same Origin Policy is, but I don't see how my
two suggestions fundamentally do not work with it.  They have other
problems, as I've admitted.
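
To make the second suggestion concrete (everything below is hypothetical;
no such element or behavior exists in any draft): the browser would decrypt
and render the content, but script in the same origin could not read the
plaintext back out.

    <!-- Hypothetical markup, extending the example from my earlier mail -->
    <div id="mysecret" type="encrypted" algorithm="..."></div>

    <script>
      var el = document.getElementById("mysecret");
      // Under this proposal, the rendered plaintext would not be visible
      // to script: this would yield "" (or throw), never the decrypted text.
      var leaked = el.textContent;
    </script>

Nothing there crosses an origin boundary; it is an extra restriction
*within* one origin, which is why I don't see where it collides with the
Same Origin Policy, even granting its other problems.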


> However, it's quite simple, even without understanding anything about web
> security:
>
> You're running code. From someone you don't trust.
> You cannot trust that code from someone you don't trust will do the things
> it's supposed to. It's the halting problem applied to security.
>

But what we're talking about is preventing the code from doing what it's
*not* supposed to do, and that the platform *can* do.  For a simple example,
JS cannot access the local disk except in very controlled and restricted
ways (e.g. Storage objects), and no malicious website can get around that.
Similarly, could we prevent JS from revealing decrypted data to a server?
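
CSP is the nearest existing mechanism I know of, and it also shows why it
is not enough on its own.  The directives below are real CSP; the image
request is just one example of a leak it does not close ("plaintext" is a
placeholder for whatever the page has just decrypted):

    Content-Security-Policy: default-src 'self'; connect-src 'none'

    // Even under that policy, script can still leak decrypted data back
    // to its own origin, e.g. with a plain image request:
    new Image().src = "/log?" + encodeURIComponent(plaintext);

And of course the policy itself comes from the same server I'm trying not
to trust.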


> The web does not sandbox you from yourself (that is, your origin). It lets
> you sandbox yourself from others, it lets you defend against yourself, but
> it doesn't protect your users from you messing up.
>
> Put differently, you cannot combine an untrusted UI and a trusted UI into
> the same origin. You can't. There are a million ways an evil server can get
> you to screw up. The easiest being "Oh hey, I'm gonna encrypt this" - and
> then doesn't. Or phishes you.
>

Agreed; I was talking about a different kind of sandbox: what JS can and
cannot do.
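
And I realize the separation the web offers today is across origins,
roughly like this (the origins and variables below are hypothetical
placeholders):

    // Keep the crypto code on a separate origin and talk to it with
    // postMessage; the hosting page never sees the keys.
    var vault = document.createElement("iframe");
    vault.src = "https://keys.example.net/vault.html";
    document.body.appendChild(vault);

    vault.onload = function () {
        vault.contentWindow.postMessage(
            { op: "decrypt", ciphertext: ciphertext },
            "https://keys.example.net");
    };

    window.addEventListener("message", function (e) {
        if (e.origin !== "https://keys.example.net") return;
        // e.data is whatever the vault chooses to hand back
    });

My question is whether something narrower than that is possible within a
single origin.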


> Good reading is http://tonyarcieri.com/whats-wrong-with-webcrypto to
> understand
>

Thanks, I'll have a look.

Received on Wednesday, 26 March 2014 00:13:29 UTC