Re: Signed Javascript

On 12/17/2013 10:50 PM, Hill, Brad wrote:
> We're appointing Editors on the new sub-resource integrity spec on today's WebAppSec call, and I expect we'll have a strawman available soon in the new year.
>
> For the moment we plan to simply provide a way to identify a remote resource of any type by hash.  To deal with upgrade/fragility issues, a mismatch will cause some policy-defined response which, in addition to failure to load, might include a fallback to an alternate https resource or simply generation of a CSP-style report. (presuming we can find an acceptable position on cross-domain information leakage with such, such as requiring a Access-Control-Allow-Origin header)
>
> We have deliberately avoided for now the idea of "signing" because the questions of identity and authenticity are indeed so thorny.  My feeling is to take this one step at a time.  If we get traction with hashes, we can think about signatures after.

I agree with that take on it, but I'm putting forward signing as a 
problem in the long-term for Web Security.
> -Brad Hill
>
>> -----Original Message-----
>> From: Richard Barnes [mailto:rbarnes@bbn.com]
>> Sent: Tuesday, December 17, 2013 1:39 PM
>> To: Harry Halpin
>> Cc: public-web-security@w3.org
>> Subject: Re: Signed Javascript
>>
>>
>> On Dec 17, 2013, at 4:31 PM, Harry Halpin <hhalpin@w3.org> wrote:
>>
>>> On 12/17/2013 10:28 PM, Richard Barnes wrote:
>>>> On the one hand, this is a "turtles all the way down" problem.  If you're going
>> to verify JS with WebCrypto, you need to have JS to do the verification, and how
>> does that get verified.
>>>> On the other hand, if you do have clean verification JS, it seems like you
>> could do this with JWS / WebCrypto very simply.
>>>> var jws = {
>>>>      "unprotected": { "alg": "RS256", "jwk": { ... } },
>>>>      "payload": "... base64-encoded JavaScript ...",
>>>>      "signature": "..."
>>>> };
>>>>
>>>> var valid = jose.verify(jws);
>>>> /* Check that the key is one you trust */
>>> It's exactly how we determine "what key is one we trust" in that comment
>> where we need work :)
>>
>> Really?  I thought that would just be something like "in a cert that chains to a
>> trust anchor".
>>
>> In any case, if you load your scripts over HTTPS, there's no additional security
>> here.  Either way, you're assured that you got the script from the guy on the
>> other end of the connection.  What WebAppSec is doing (IIRC), is providing the
>> web app loading the library a *countersignature*, which is directly opposed to
>> the idea of upgradeability (since the countersignature won't validate on the
>> upgraded library).
>>
>> --Richard
>>
>>
>>>> eval(atob(jws.payload));
>>>>
>>>>
>>>>
>>>>
>>>> On Dec 17, 2013, at 4:17 PM, Harry Halpin <hhalpin@w3.org> wrote:
>>>>
>>>>> I think some sort of signed Javascript solution could be very useful.
>> Currently, on the Web we have a pretty straightforward same origin policy that
>> assumes complete trust in the server. yet with the proliferation of third-party JS
>> apps and the possibility of server being compromised, how do you know if the
>> server has served the right JS?
>>>>> I think some approach involving signatures and repos of JS libraries (similar
>> to repos in *nix) would help, along with some sort of network perspectives or
>> trust anchor in the browser to double-check and verify the JS served by the
>> server.
>>>>> I believe WebAppSec WG is working on something in this space. I'm
>> personally a fan of the TUF/Thandy approach of Tor, and wonder if such an
>> approach could be adopted to JS. Installing trusted code is a hard problem, and
>> applies just as equally in JS as it does in any other language. Despite all the harm
>> of XSS, the advantage of downloading the JS code (and forcing new code into
>> the cache when necessary), JS does allow easy upgrade to avoid 0 days, but I'd
>> like to see if we can increase the trust in JS even more.
>>>>>    cheers,
>>>>>     harry
>>>>>
>>>>>

Received on Tuesday, 17 December 2013 22:23:33 UTC