Re: Signed Javascript

On 12/17/2013 10:28 PM, Richard Barnes wrote:
> On the one hand, this is a “turtles all the way down” problem.  If you’re going to verify JS with WebCrypto, you need to have JS to do the verification, and how does that get verified.
>
> On the other hand, if you do have clean verification JS, it seems like you could do this with JWS / WebCrypto very simply.
>
> var jws = {
>      "unprotected": { "alg": "RS256", "jwk": { ... } },
>      "payload": "... base64-encoded JavaScript ...",
>      "signature": "..."
> };
>
> var valid = jose.verify(jws);
> /* Check that the key is one you trust */

It's exactly that "check that the key is one you trust" comment where we 
need the work :)
>
> eval(atob(jws.payload));
>
>
>
>
> On Dec 17, 2013, at 4:17 PM, Harry Halpin <hhalpin@w3.org> wrote:
>
>> I think some sort of signed JavaScript solution could be very useful. Currently, on the Web we have a pretty straightforward same-origin policy that assumes complete trust in the server. Yet with the proliferation of third-party JS apps and the possibility of the server being compromised, how do you know if the server has served the right JS?
>>
>> I think some approach involving signatures and repos of JS libraries (similar to repos in *nix) would help, along with some sort of network perspectives or trust anchor in the browser to double-check and verify the JS served by the server.
>>
>> I believe WebAppSec WG is working on something in this space. I'm personally a fan of the TUF/Thandy approach of Tor, and wonder if such an approach could be adapted to JS. Installing trusted code is a hard problem, and it applies just as much in JS as in any other language. Despite all the harm of XSS, downloading JS code (and forcing new code into the cache when necessary) does allow easy upgrades to avoid 0-days, but I'd like to see if we can increase the trust in JS even more.
>>
>>    cheers,
>>     harry
>>
>>

Received on Tuesday, 17 December 2013 21:31:48 UTC