Re: Verified JavaScript: Proposal

Hi Jochen,

Thanks for the feedback. I think the kind of web apps this is meant for
will also have a GitHub account with some or all of their frontend code
on it. (An example for which I think this would be a good fit: [1].)
Then, with the public log, you can start to verify that the code on the
server matches the code on GitHub, monitor patches on GitHub, and
inspect the code more deeply (including, e.g., a paid security audit).

-- Daniel Huigens

[1]: https://github.com/meganz/webclient

2017-04-25 10:33 GMT+02:00 Jochen Eisinger <eisinger@google.com>:
> Hey,
>
> I wonder how the logged certificates would be used. I would expect web apps
> to update several times a day, or even every hour. How would a user tell the
> difference between a bug fix or feature release on the one hand, and
> something malicious (from their point of view) on the other hand?
>
> best
> -jochen
>
> On Mon, Apr 24, 2017 at 12:27 PM Daniel Huigens <d.huigens@gmail.com> wrote:
>>
>> Hi webappsec,
>>
>> A long while ago, there was some talk on public-webappsec and public-
>> web-security about verified JavaScript [2]. Basically, the idea was to
>> have a Certificate Transparency-like mechanism for JavaScript code, to
>> verify that everyone is running the same, intended code, and to give
>> the public a mechanism to monitor the code that a web app is sending
>> out.
>>
>> We (Airborn OS) had the same idea a while ago, and thought it would be
>> a good idea to piggyback on CertTrans. Mozilla has recently done the
>> same for its Firefox builds, by generating a certificate for a domain
>> name with a hash in it [3]. For the web, where there is already a
>> certificate, it seems more straightforward to include a certificate
>> extension with the needed hashes. Of course, we would need some
>> cooperation from a Certificate Authority for that (in some cases,
>> that cooperation might, technically speaking, be as simple as adding
>> an extension ID to a whitelist, but not always).
>>
>> So, I wrote a draft specification for including, in the certificate,
>> hashes of the expected response bodies for specific paths (e.g. /,
>> /index.js, /index.css), along with a Firefox XUL extension that
>> checks the hashes (we also included some hardcoded hashes to get us
>> started). However, as you probably know, XUL extensions are now being
>> phased out, so I would like to finally get something like this into a
>> spec, and then start convincing browsers, CAs, and web apps to
>> support it. That said, I'm not really sure what the process for
>> creating a specification is, and I'm also not experienced at writing
>> specs.
>>
>> Anyway, please have a look at the first draft [1]. There's also some
>> more information there about what/why/how. All feedback welcome. The
>> working name is "HTTPS Content Signing", but it may make more sense to
>> name it something analogous to Subresource Integrity... HTTPS Resource
>> Integrity? Although that could also cause confusion.
>>
>> -- Daniel Huigens
>>
>>
>> [1]: https://github.com/twiss/hcs
>> [2]:
>> https://lists.w3.org/Archives/Public/public-web-security/2014Sep/0006.html
>> [3]: https://wiki.mozilla.org/Security/Binary_Transparency
>>
>

Received on Tuesday, 25 April 2017 09:04:40 UTC