Re: Protection of decrypted data from malicious servers?

On Tue, Mar 25, 2014 at 2:24 PM, Ryan Sleevi <sleevi@google.com> wrote:

>
> On Tue, Mar 25, 2014 at 1:57 PM, James Marshall <james@jmarshall.com> wrote:
>
>> New here... glad to see work happening on this.  I've wanted client-side
>> encryption for a while.
>>
>> In the current draft, is there any protection against a compromised or
>> malicious server learning decrypted content, e.g. by having JS that
>> decrypts data and then sends that back to the server?  Ideally, client-side
>> encryption can protect data from a malicious server.  For example, I'd like
>> to see a webmail site with full end-to-end encryption, without making us
>> trust the server at all.  CSP helps, but is not a full solution.
>>
>
> No. This is impossible. This is not a valid threat, and not something in
> scope for this WG.
>

Well, fair enough if it's not in scope, but I think it leaves a significant
problem unaddressed.  Is secure webmail impossible then?  It's definitely
something many people want.


> You're running executable code you just downloaded from an arbitrary
> server. If you don't trust that server, no amount of machine learning magic
> is going to help you determine whether this code is safe or not.
>

We use all kinds of sandboxes for different kinds of code.  JS itself has
many restrictions for security's sake.  Tainting is already supported in
some languages.
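
To make the idea concrete, here is a minimal, purely hypothetical sketch of what value-level tainting could look like, in the spirit of Perl's taint mode. The `Tainted` wrapper and `send` function are illustrative names, not anything a browser or this WG actually provides:

```javascript
// Hypothetical sketch of taint tracking for decrypted data.
// Nothing here exists in any browser; names are illustrative only.

class Tainted {
  constructor(value) { this.value = value; }
  // Any value derived from a tainted value stays tainted.
  map(fn) { return new Tainted(fn(this.value)); }
}

// A network layer that refuses to transmit tainted values.
function send(data) {
  if (data instanceof Tainted) {
    throw new Error("tainted data may not leave the client");
  }
  return `sent: ${data}`;
}

const secret = new Tainted("decrypted plaintext");
const derived = secret.map(s => s.toUpperCase()); // still tainted

send("public data"); // ok
// send(derived);    // would throw: taint propagated through map()
```

The hard part, of course, is that real taint propagation has to cover every implicit flow (string concatenation, DOM writes, timing channels), not just explicit derivations like this.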



> If you don't want to trust a server to deliver your code, you can use an
> extension/webapp/whatever your UA vendor of choice calls their particular
> version of SysApps ( http://www.w3.org/2012/sysapps/ ). These apps are
> delivered out of band, using a form of code-signing, and don't require
> trust in any particular server. They do require trust in the entity
> delivering the code, though. Which has been well understood for a Very Long
> Time ( http://cm.bell-labs.com/who/ken/trust.html )
>
> It is quickly turtles all the way down. Can the browser protect against
> the OS? Can the OS protect against the hardware? It's a layering issue, and
> fundamentally, you must trust the server.
>

In the case of open-source (including hardware), I don't believe they
require trust in the entity delivering the code (sorry Ken T.).  I can
hypothetically trust my hardware, OS, and browser if the source code is
audited, and because they live in my home under lock and key.  (Yes, it has
to be audited all the way down, including compiler, microcode, etc.-- not
saying it's easy.)  Auditing every website I might use is a much larger
task.


>> If this hasn't been addressed, I think of two possible solutions, neither
>> one very good:
>>
>> 1) Use a kind of "taint", where decrypted data and all data derived from
>> it is prevented from being sent back to a server.
>>
>
>> 2) Use HTML to define an element to display decrypted data, without
>> allowing JS to access the content of that element.  Something like
>>
>>     <div id="mysecret" type="encrypted" algorithm="..." ...></div>
>>
>> ... and something similar for an input field that is to be encrypted
>> before JS can access its data.
>>
>> Am I missing something, and has this been addressed?
>>
>> Thanks,
>> James
>>
>>
> This fundamentally does not work with the Web Security Model, and we are
> not attempting to redefine the Web Security Model.
>

OK, not in scope.  Do you know a good link to the Web Security Model you're
referring to (a web search fails me)?  I'd also be interested in how this
fundamentally does not work with it.


> There are plenty of secure, robust ways to create this "Don't trust the
> server" solution - using Apps/Extensions - but fundamentally, executing
> code you're downloading from a server requires an element of trust in its
> non-maliciousness.
>

Unless you sandbox it or otherwise restrict its capabilities to safe
operations, as already happens with JS.


If this discussion has already been in another thread, let me know and I'm
happy to review that.

Thanks,
James

Received on Tuesday, 25 March 2014 22:52:53 UTC