Re: New "Goals" (use-cases) - Is your use-case there, accurately described?

I'm interested in providing a toolkit to let people build, among other
things, web applications where I need only trust the web application
operator to behave in an "honest but coercible" fashion.

Examples:

 - Imagine if web mail had a couple more buttons: [Find/Import Public
Key], [X] Encrypt Message.  Assume the issue of "Trust" is solved
elsewhere and I import public keys I trust for my contacts, and assume
the web mail publishes its javascript source in an auditable format,
and it is audited and considered trusted.  I can then encrypt emails
and attachments to my contacts inside a web UI, and when they are
received by the web mail operator, the operator cannot decrypt them.
(A rough sketch of this flow follows the list.)
 - OTR in a web messaging platform (gchat, facebook)
 - Encrypted passages in web applications, distributed a la
alt.anonymous.messages (that is, broadcast to all) but readable by
only a select few.  (e.g., Facebook status messages)
 - Facebook or Gmail could look at the TLS certificate in use on the
connection, and not expose private data if the connection is being
man-in-the-middled [1]
 - A website is nervous about state-sponsored TLS interception, so its
users generate client certificates, and the website sends sensitive
data to the client to be decrypted by the client certificate [2]
 - Zed Shaw wants to get http://vulnarb.com/ going (where you encrypt
data to the public key provided for a TLS connection)
 - I want to build a pure-javascript browser plugin (so it works in
FF, Chrome, Greasemonkey, etc) that looks for 'weird' TLS connections:
MD5-signed certs, low public exponents, Debian weak keys, etc.
 - Corporation X wants to see how many users still have a key pinned
that it pinned accidentally a while ago [4] [5]
 - Bank B wants stronger verification on high-value transactions,
such that those transactions require a digital signature
 - App A wants to leave some local data in Web Storage or whatever
it's called.  But it doesn't want this to be accessible to everybody
on the machine.  It'd like it to be encrypted using a private key that
needs a passphrase.
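
To make the first example concrete, here's a rough sketch of what the
webmail flow might look like.  To be clear, window.crypto.pgp and its
methods are names I just made up for illustration, not a proposal:

    // Strawman: window.crypto.pgp and its methods are invented names.
    // Assume armoredKey came back from [Find/Import Public Key].
    var pubkey  = window.crypto.pgp.importKey(armoredKey);
    var compose = document.getElementById("compose");

    document.getElementById("encrypt-send").onclick = function () {
      // Encryption happens in the browser, so the operator only
      // ever sees ciphertext.
      var ciphertext = window.crypto.pgp.encrypt(pubkey, compose.value);
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/send", true);
      xhr.send(ciphertext);
    };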


What would these need to be feasible?
 - A javascript library (technically an entire runtime, i.e. the
collection of all javascript code) that is signed by a party other
than the author (possibly me), such that unsigned code (XSS-injected,
tampered, corrupted, etc.) could not execute.
 - OpenPGP (RFC 4880) compatibility.  Not necessarily built in, but a
way to build it out of primitives (a strawman sketch follows this
list).  So...
 - Hash Functions (For full compatibility: MD5, SHA-1, RIPE-MD/160,
SHA256, SHA384, SHA512, SHA224)
 - Symmetric Algorithms (For full compatibility: IDEA, DES-EDE, CAST5,
Blowfish, AES 128/192/256, Twofish256, Camellia 128/192/256)
 - Asymmetric Algorithms (For full compatibility: RSA, Elgamal, DSA)
 - Compression Algorithms (For full compatibility: Zip, Zlib, BZip2)
 - A way to encrypt a file chosen in an <input type=file> control
prior to transmitting it to the server (also sketched below)
 - A mechanism for a User Agent to see encrypted text, look in your
keystore (either client certificates or other) for a corresponding
private key, decrypt the content automatically (prompting for a
password if necessary), but not expose it to the web application via
javascript. [3]
 - Exposing the Server Certificate (possibly structured; if not,
we'll need a bulletproof, signed X509 library) and Path of the TLS
Connection as javascript objects.
 - Exposing the Client Certificate supplied up through the SSL library
and Web Server to application code [2]
 - Exposing Pinned Public Keys for a host via javascript [4]
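
As a strawman for how the primitives and the certificate exposure
might hang together (every name below is invented, and a real API
would probably need to be asynchronous):

    // Strawman: none of these functions or properties exist today.
    var digest = window.crypto.hash("SHA-256", messageBytes);
    var ct     = window.crypto.encrypt("AES-256-CFB", sessionKey,
                                       plaintextBytes);
    var sig    = window.crypto.sign("RSA", privateKeyHandle, digest);

    // The server certificate of the current TLS connection, exposed
    // as a structured javascript object:
    var cert = window.crypto.tls.serverCertificate;
    if (cert.signatureAlgorithm === "md5WithRSAEncryption") {
      // e.g. the 'weird TLS connections' plugin could flag this;
      // warnUser is an assumed helper
      warnUser("This connection uses an MD5-signed certificate");
    }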

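The file-encryption item could build on FileReader, which already
exists; window.crypto.encrypt is the same strawman as above, and
sessionKey and upload are assumed to exist:

    // FileReader is real; window.crypto.encrypt is the strawman above.
    var fileInput = document.querySelector("input[type=file]");
    fileInput.onchange = function () {
      var reader = new FileReader();
      reader.onload = function () {
        // Encrypt the file's bytes before they ever leave the browser.
        var ct = window.crypto.encrypt("AES-256-CFB", sessionKey,
                                       new Uint8Array(reader.result));
        upload(ct);  // assumed helper that POSTs the ciphertext
      };
      reader.readAsArrayBuffer(fileInput.files[0]);
    };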

[1] I think this one continues to be information-theoretically
impossible without a trusted side channel. But it can be made
practically difficult to bypass by using random checks that are
difficult to anticipate and fake.
[2] This one is a server-side change in addition to client-side.
[3] There's good precedent for this: A resource from a different
origin (stylesheet, image, etc) that results in a 302 redirect does
not expose via javascript where the redirect went to.
[4] http://www.ietf.org/id/draft-ietf-websec-key-pinning-01.txt
[5] This is also a potential toolkit for privacy violations


These are all pretty much stream of consciousness.  I took a couple
of steps down the road to building each, but not much more - so there
are definitely problems here.  The two biggest complications I came
across were Key Management and Javascript Integrity.

It's just impractical to have an entire signed javascript runtime.
We need components of it to be built by different people, and to use
those components in a partially unsigned runtime.  I think a realistic
scenario is an OpenPGP library built by a third party, some
security-critical javascript (I'm going to call this 'the crypto js'),
and a bunch of UI javascript.  Formally, I'd say:
 - The UI javascript should not be able to ascertain any information
about the crypto or openpgp js.
 - The crypto js should not be able to ascertain any information
about the openpgp js, but should be able to call it.
 - The crypto and openpgp javascript should not be able to ascertain
any information about the browser crypto operations, but should be
able to call them.
 - The UI javascript should not be able to call the browser crypto
operations.
That's a stream-of-consciousness dump, but basically the runtime
segregation needs a lot of thought.  (A rough isolation sketch
follows.)
Likewise, Key Management is weird: what operations can be done
automatically, which require permission, and a bunch of other
questions.  (A strawman of the permission side follows.)
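
For instance (names invented again), enumeration might be automatic
while actually using a key requires a user-agent prompt, analogous to
how geolocation asks permission:

    // Strawman permission model: every name here is invented.
    // Listing keys silently returns opaque handles, no key material:
    var handles = window.crypto.keystore.enumerate({ usage: "decrypt" });

    // Using a key makes the user agent prompt the user (and, for a
    // passphrase-protected key, ask for the passphrase):
    window.crypto.keystore.requestUse(handles[0], function (key) {
      // ciphertext is assumed to come from the application
      var plaintext = window.crypto.decrypt("RSA", key, ciphertext);
    });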

-tom

Received on Monday, 12 December 2011 12:32:04 UTC