Re: Verified Javascript: Proposal

Daniel,

I would also like to ask you to please apply for a W3C account (
https://www.w3.org/accounts/request) and to apply to be an Invited Expert
in the group.  Process here:
https://www.w3.org/Consortium/Legal/2007/06-invited-expert

We can't adopt any ideas you propose, and really shouldn't be discussing
them as possible standards, without a contributor IPR agreement from you.

thanks,

Brad Hill (as chair)

On Tue, Apr 25, 2017 at 2:31 PM Brad Hill <hillbrad@gmail.com> wrote:

> I must say, I don't think the threat and deployment model of this is very
> well thought out with regard to how real web applications need to work.
> It does not address how to isolate an instance of a web application from
> other unsigned code, and putting the contents of resources into a very
> static spot like the HTTPS certificate doesn't scale and doesn't allow the
> appropriate agility necessary for security.  Further, requiring
> transparency proofs in the certificate is a nearly impossible model to
> develop and test under.
>
> I've floated some strawman proposals around this idea previously, based on
> Facebook's desire to build E2E encryption into a web version of Messenger,
> similar to what we do with apps, where a primary goal is to have
> transparency and proof of non-partition for the main application code (if
> not all resources).
>
> My very rough proposal for provably transparent and non-partitioned apps
> that still work like webapps and don't have huge holes according to the web
> platform security model is the following:
>
> Utilize suborigins as an isolation mechanism.
>
> Define a special suborigin label prefix for which a resource must meet
> certain conditions to enter, and accept certain conditions upon entering.
>
> To enter a labeled suborigin:  The suborigin is identified by a public
> key, encoded as a special label prefix.  The primary HTML resource must
> supply a signature over its body and relevant headers using that key.
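>
> To illustrate (a hypothetical shape only: assume an ECDSA P-256 key and a
> detached signature delivered alongside the response; none of this is
> pinned down), the check would be roughly:
>
>   // Verify the detached signature over the canonicalized relevant
>   // headers followed by the raw response body.
>   async function verifyBootstrap(
>     publicKey: CryptoKey,     // key named by the suborigin label
>     signature: ArrayBuffer,   // detached signature (hypothetical source)
>     headers: string,          // canonicalized relevant headers
>     body: ArrayBuffer,        // raw bytes of the primary HTML resource
>   ): Promise<boolean> {
>     const head = new TextEncoder().encode(headers);
>     const signed = new Uint8Array(head.length + body.byteLength);
>     signed.set(head, 0);
>     signed.set(new Uint8Array(body), head.length);
>     return crypto.subtle.verify(
>       { name: 'ECDSA', hash: 'SHA-256' }, publicKey, signature, signed);
>   }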
>
> Upon entering that labeled suborigin: the effective suborigin becomes a
> hash of the public key plus the hash of the bootstrap HTML resource, so
> that it is not same-origin with anything else.
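>
> As a rough sketch (the exact encoding is hypothetical and TBD), deriving
> that label might look like:
>
>   // Derive the effective suborigin label from the signing key and the
>   // bootstrap HTML, so the app is same-origin with nothing else.
>   async function effectiveSuborigin(
>     publicKeyDer: ArrayBuffer,   // SPKI bytes of the signing key
>     bootstrapHtml: ArrayBuffer,  // raw body of the primary HTML resource
>   ): Promise<string> {
>     const hex = (buf: ArrayBuffer) => [...new Uint8Array(buf)]
>       .map(b => b.toString(16).padStart(2, '0')).join('');
>     const keyHash = await crypto.subtle.digest('SHA-256', publicKeyDer);
>     const htmlHash = await crypto.subtle.digest('SHA-256', bootstrapHtml);
>     return `signed-${hex(keyHash)}-${hex(htmlHash)}`;
>   }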
>
> A mandatory CSP policy is applied that prevents eval and inline script and
> sets object and embed src to 'none', and, upon entering that labeled
> suborigin, all further HTML, script and CSS loaded must be statically SRI
> tagged, recursively, such that the bootstrap resource hash uniquely
> identifies the entire tree of reachable executable content. This can be the
> basis of a binary transparency proof.
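>
> Concretely, the mandatory policy could look something like this (using the
> draft require-sri-for directive as an illustration; the exact directive
> set is TBD):
>
>   Content-Security-Policy: script-src https:; style-src https:;
>       object-src 'none'; require-sri-for script style
>
> A script-src without 'unsafe-inline' or 'unsafe-eval' already blocks
> inline script and eval, object-src 'none' covers object and embed, and
> require-sri-for would force the recursive SRI tagging described above.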
>
> In order to maintain continuity and the ability to upgrade the
> application, certain pieces of state, namely localStorage / IndexedDB /
> cookies, may be shared among all applications signed with the same key, so
> that getting the latest version of the app that fixes a bug or adds a
> feature in an E2E messaging product doesn't mean you lose your identity and
> all previous messages.
>
> When encountering a new bootstrap HTML hash, the user would be given the
> option to choose whether to trust and execute it if previous state exists
> for that signing key.  User experience TBD, but this is the point at which
> a transparency proof and gossip about partitioning could be checked, if
> desired.
>
> -Brad
>
>
>
> On Tue, Apr 25, 2017 at 9:24 AM Daniel Huigens <d.huigens@gmail.com>
> wrote:
>
>> Hi Jeffrey,
>>
>> We're not trying to put the contents of web applications in the log.
>> We're trying to put *hashes* of the contents of web applications in the
>> log. Those are much smaller.
>>
>> Also, keep in mind that web applications themselves are incentivized to
>> keep certificates small, since large certificates mean longer load times.
>> So if they have a web application with a thousand files, they might opt
>> to use HCS for just one of them (the root HTML file) and SRI for
>> everything else.
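>>
>> For those other files, the SRI integrity value is just a digest of each
>> file's contents; a minimal sketch of computing one:
>>
>>   // Compute the value for an SRI integrity attribute from a resource's
>>   // raw bytes.
>>   async function sriValue(body: ArrayBuffer): Promise<string> {
>>     const digest = await crypto.subtle.digest('SHA-256', body);
>>     const b64 = btoa(String.fromCharCode(...new Uint8Array(digest)));
>>     return `sha256-${b64}`;
>>   }
>>
>> That way the root HTML file, whose hash is pinned in the certificate,
>> transitively pins everything it references.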
>>
>> Finally, here's a summary of all certificates logged in the past week
>> [1]. Let's Encrypt alone has issued over 4 million certificates in that
>> time. Even if a few hundred web applications start requesting a
>> certificate every hour because of HCS (which Let's Encrypt does not
>> allow, but some CAs do), that's a drop in the bucket.
>>
>> -- Daniel Huigens
>>
>> [1]: https://crt.sh/?cablint=1+week
>>
>> On 25 Apr 2017 16:53, "Jeffrey Yasskin" <jyasskin@google.com> wrote:
>>
>> The goal of binary transparency for web applications makes sense, but
>> implementing it on top of the Certificate Transparency logs seems like it
>> introduces too many problems to be workable.
>>
>> Have you looked into a dedicated transparency log for applications, using
>> the system in https://github.com/google/trillian#readme? Then we'd need
>> to establish that only files logged to a particular set of log servers
>> could be loaded. A certificate extension might be the right way to do that,
>> since the certificate would only need to be re-issued in order to add log
>> servers, not to change the contents of the site.
>>
>> Putting every JavaScript resource from a large application into the log
>> also might introduce too much overhead. We're working on a packaging format
>> at https://github.com/dimich-g/webpackage/, which could reduce the
>> number of files that need to be logged by a couple orders of magnitude.
>>
>> Jeffrey
>>
>>
>> On Mon, Apr 24, 2017 at 3:25 AM, Daniel Huigens <d.huigens@gmail.com>
>> wrote:
>>
>>> Hi webappsec,
>>>
>>> A long while ago, there was some talk on public-webappsec and
>>> public-web-security about verified JavaScript [2]. Basically, the idea
>>> was to have a Certificate Transparency-like mechanism for JavaScript
>>> code, to verify that everyone is running the same and intended code, and
>>> to give the public a mechanism to monitor the code that a web app is
>>> sending out.
>>>
>>> We (Airborn OS) had the same idea a while ago, and thought it would be
>>> worthwhile to piggy-back on CertTrans. Mozilla has recently also done
>>> that for their Firefox builds, by generating a certificate for a domain
>>> name with a hash in it [3]. For the web, where there already is a
>>> certificate, it seems more straightforward to include a certificate
>>> extension with the needed hashes in the certificate. Of course, we would
>>> need some cooperation from a Certificate Authority for that (in some
>>> cases, that cooperation might be as simple, technically speaking, as
>>> adding an extension ID to a whitelist, but not always).
>>>
>>> So, I wrote a draft specification to include, in the certificate, hashes
>>> of the expected response bodies for requests to specific paths (e.g. /,
>>> /index.js, /index.css), and a Firefox XUL extension to support checking
>>> the hashes (we also included some hardcoded hashes to get us started).
>>> However, as you probably know, XUL extensions are now being phased out,
>>> so I would like to finally get something like this into a spec, and then
>>> start convincing browsers, CAs, and web apps to support it. That said,
>>> I'm not really sure what the process for creating a specification is,
>>> and I'm also not experienced at writing specs.
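>>>
>>> To give an idea of the client-side check (a hypothetical shape; the
>>> draft defines the actual extension encoding), assume the extension
>>> decodes to a map from paths to hex-encoded SHA-256 hashes:
>>>
>>>   // Check a response body against the hash pinned for its path in the
>>>   // certificate extension.
>>>   async function checkBody(
>>>     hashes: Map<string, string>,  // path -> hex SHA-256, from the cert
>>>     path: string,
>>>     body: ArrayBuffer,
>>>   ): Promise<boolean> {
>>>     const expected = hashes.get(path);
>>>     if (expected === undefined) return true;  // path not pinned
>>>     const digest = await crypto.subtle.digest('SHA-256', body);
>>>     const actual = [...new Uint8Array(digest)]
>>>       .map(b => b.toString(16).padStart(2, '0')).join('');
>>>     return actual === expected;
>>>   }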
>>>
>>> Anyway, please have a look at the first draft [1]. There's also some
>>> more information there about what/why/how. All feedback welcome. The
>>> working name is "HTTPS Content Signing", but it may make more sense to
>>> name it something analogous to Subresource Integrity... HTTPS Resource
>>> Integrity? Although that could also cause confusion.
>>>
>>> -- Daniel Huigens
>>>
>>>
>>> [1]: https://github.com/twiss/hcs
>>> [2]:
>>> https://lists.w3.org/Archives/Public/public-web-security/2014Sep/0006.html
>>> [3]: https://wiki.mozilla.org/Security/Binary_Transparency
>>>
>>>
>>

Received on Tuesday, 25 April 2017 21:35:27 UTC