Re: Extractability

On Mon, Aug 12, 2013 at 8:25 AM, Mark Watson <watsonm@netflix.com> wrote:
>
>
> On Tue, Aug 6, 2013 at 11:41 PM, Ryan Sleevi <sleevi@google.com> wrote:
>>
>> On Thu, Aug 1, 2013 at 12:23 AM, Mark Watson <watsonm@netflix.com> wrote:
>> > I'll reply more generally later, but quickly regarding the relationship
>> > of
>> > our use-case to TLS: we have designed and deployed a secure application
>> > protocol using WebCrypto which meets our particular application security
>> > needs when running over plain http. This includes confidentiality and
>> > mutual
>> > authentication. Such protocols are presumably exactly the kind of thing
>> > WebCrypto is intended to be used for and so not in any sense
>> > unreasonable.
>> > It's also not "technically unrealistic" since we have done it.
>> >
>> > Clearly, when scripts are delivered over HTTP the level of trust that
>> > one
>> > might have in the Javascript will be substantially less than one might
>> > have
>> > in the browser. Nevertheless, even when HTTPS is used it's obvious that
>> > script and browser are different things, subject to different attacks by
>> > different actors. It's obvious that they "might" be subject to different
>> > levels of trust. But whether they *are* subject to such different levels
>> > of
>> > trust, and the implications of this, are highly application dependent.
>> >
>> > Our requirement to work without TLS arises from the need to access
>> > content
>> > at HTTP URLs as explained in my earlier mail.
>>
>> While non-normative, we've included the following text in the Security
>> Considerations since the 2013-01-08 WG Working Draft (
>> http://www.w3.org/TR/2013/WD-WebCryptoAPI-20130108/#security-developers
>> )
>>
>> "While this API provides important functionality for the development
>> of secure applications, it does not try to address all of the issues
>> that may arise from the web security model. As such, application
>> developers must take care to ensure against common attacks such as
>> script injection by making use of appropriate security functionality
>> such as Content Security Policy and the use of TLS."
>>
>> I suspect that where we differ is a fundamental difference on where
>> the security boundary exists. Speaking as an implementor, our goal has
>> always been to enable "Web Applications" to take advantage of this.
>> That is, the security boundary inherently includes the JavaScript
>> being executed.
>>
>> If I can correctly distill your example, your goal is to authenticate
>> the User Agent itself (which seems consistent with your support for
>> pre-provisioned, origin-specific named keys), without necessarily
>> concerning yourself about the JavaScript being executed.
>>
>> Presumably, the UA uses some out-of-band mechanism to authenticate
>> the JS, because you simply cannot argue there is confidentiality and
>> mutual authentication *of* the JS without first establishing the
>> provenance of the JS, which you cannot use WebCrypto to do.
>>
>> Is this a correct summary of your implementation?
>
>
> Not exactly. I've explained why we don't use HTTPS to deliver our scripts.
> Obviously we would if we could.
>
> That makes MITM attacks easier, but it's not an essential component of my
> argument that content and UA code are different and may be trusted
> differently.

Right, and as stated, your goal is to authenticate the UA, regardless
of the content script. How else can you make statements about the
differing levels of trust, particularly when you have no reason to
trust the JS, short of authenticating the UA?

>
> You are proposing that we make a blanket assumption that content and UA code
> are equally trusted in all applications by all parties. Clearly with respect
> to the user this is not the case, otherwise why wouldn't content code have
> direct access to all the OS APIs that UA code does? And in our application
> it is also not the case with respect to the service.

You're comparing apples and oranges here. That the user trusts the
browser vendor, but doesn't (and shouldn't) trust the random web, is
an entirely different discussion from whether a *site operator* trusts
a particular user agent, but not the network.

The requirement I'm proposing is that site operators who wish to make
statements about *the network* must take reasonable action to protect
their traffic over that network; otherwise they can make no
assumptions. Reasonable protections include the use of TLS and CSP.
Establishing that the UA is "trusted" does nothing toward making
statements about "the network" - nor is the goal of this API to
encourage or promote itself as an alternative to TLS. If you'll
recall, this was a frequent point of discussion, especially in
addressing JavaScript crypto opponents, who feared (rightfully so)
that someone would try to use this API to replace TLS.
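As an illustration of the kind of "reasonable protection" being
referred to, a site operator might combine TLS with a
Content-Security-Policy response header along these lines (a sketch
only - real policies need per-site tuning):

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'
```

A policy like this makes the UA refuse injected inline script,
addressing the XSS half of the threat; TLS addresses the network half.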

That's not to say there isn't a use case here, just that the threat
model motivating that use case isn't one that fits within the overall
threat model of the web, so why is this API special in needing to
address it?

>
> Previously, in response to Virginie, you wrote:
>
>> We're certainly agreed that extractability is a desirable feature, and
>> has been since the beginning. We also agree that supporting key
>> wrap/unwrap are desirable features, and being able to define the
>> extractability of keys that are wrapped/unwrapped are likewise
>> entirely reasonable.
>
> This does not seem consistent with your objection to the intent of our
> proposal: if "extractability" is reasonable then there is presumably some
> good reason not to expose some given keying material to the Javascript code.
> If the JS code is just as trusted as the UA code then why can't we simply
> trust it not to use the export method on that key? There is no reason for
> the UA to police that through the extractable attribute.

You're misrepresenting the position.

The position is not that "JS is as trusted as the UA" - it's that
"what the UA provides to JS can only be as secure as the JS". Your
weakest link will always be the JS: you cannot eliminate the need for
strong transport security for the secure delivery of that JS, nor for
strong site policies to prevent arbitrary injection of JS.

>
> No, the only reason we would need the UA to police the use of the exportKey
> method is if there is some reason to have less trust in the content code
> than the UA code.
>
> TOFU assumes a model in which the JS is trusted to correctly set the
> extractability attribute on "First Use". It is protecting itself against
> compromise of the JS on future uses. You could argue that the JS could
> likewise be trusted to correctly set the extractability attribute of
> unwrapped keys when they are unwrapped, protecting itself against compromise
> of the JS on future uses.
>
> The point of TOFU, though, is that "First Use" is rare. In our application,
> at least, unwrapping of new session keys is common. We would like to
> minimize the "First Use" occasions by propagating the non-extractability of
> the unwrapping key to the unwrapped keys without triggering another "First
> Use" occasion. If this is unreasonable then the whole idea of the UA
> policing extractability is unreasonable too.

I would, am, and have been arguing exactly that - the model of TOFU
is that if you trust the JS to do *anything* cryptographic, then it's
no different from trusting it to set the *secure* attribute - thus
supporting "Trust on First Use". The trust you must apply when you
expect a key to be generated (with the non-extractable attribute set)
is the same trust you must apply when wrapping/unwrapping.

The only reason to have UA-enforced, opaque handling of
non-extractability is if you do not trust the JS, which you seem to
agree with. And if you don't trust the JS, then you don't trust the
JS, and it's already game over - crypto or not.

>
> I don't have a fixed position on how this non-extractability is propagated.
> Nor is it necessary for this to be supported for all key formats. Respecting
> attributes in the key format is one way. I outlined some others in another
> mail, ranging from comprehensive "unwrapped key properties" associated with
> wrapping keys to somewhat kludgy default behaviors. But we do need something
> that works without pre-provisioned keys.
>
> ...Mark
>
>
>
>
>>
>> >
>> > ...Mark
>> >
>> >
>> > On Tue, Jul 30, 2013 at 8:50 PM, Ryan Sleevi <sleevi@google.com> wrote:
>> >>
>> >> Virginie,
>> >>
>> >> I apologize, but I'm not sure I entirely follow your response.
>> >>
>> >> We're certainly agreed that extractability is a desirable feature, and
>> >> has been since the beginning. We also agree that supporting key
>> >> wrap/unwrap are desirable features, and being able to define the
>> >> extractability of keys that are wrapped/unwrapped are likewise
>> >> entirely reasonable. I've only seen one member of the WG suggest
>> >> otherwise, and only when building an unrealistic strawman argument.
>> >>
>> >> However, your points on the trust model are a bit confusing for me.
>> >> The core of the technical discussion here has been precisely upon what
>> >> is an acceptable trust model - whether or not the existing web trust
>> >> model that exists for all other Web/JS APIs is sufficient, or whether
>> >> it's our responsibility in the WG to attempt to define something
>> >> beyond that.
>> >>
>> >> The *key* question for this discussion is whether or not the API can
>> >> or should presume 'trust' in the validity and veracity of the script
>> >> being executed. All other Web APIs presume exactly that. WebAppSec/Web
>> >> Security provide the tools and techniques to establish that further -
>> >> eg: HTTPS, CORS, CSP, etc. Are you suggesting this is not the case -
>> >> and that we must face some additional presumptions, or that we cannot
>> >> rely on the vast body of work that came before this WG?
>> >>
>> >> I again position that there is absolutely no value in the API - either
>> >> the inputs or the outputs - if you don't have some degree of assurance
>> >> that the script is executing. I have argued, from the very beginning,
>> >> that attempts to build an alternative to TLS *on the general web*
>> >> through the use of this API are both unreasonable requirements and
>> >> technically unrealistic, given the very web security model defined and
>> >> expanded upon by WebAppSec (and, to a lesser degree, DAP/SysApps).
>> >> With respect to the SysApps model, I acknowledge it's a very different
>> >> threat model - one that provides even *more* guarantee about the
>> >> veracity of the script being executed, ergo the concerns raised by
>> >> Mark are arguably even less pressing.
>> >>
>> >> This is not a position that is somehow specific to the Netflix
>> >> proposal - I've expressed this position for other cases (eg:
>> >> http://lists.w3.org/Archives/Public/public-webcrypto/2013Apr/0088.html
>> >> ), and we've seen a similar position by Arun when editing the use
>> >> cases and considering the 'threat model' of what was colloquially
>> >> termed the "Facebook Use case"
>> >>
>> >> I'm hoping you can clarify as to what you mean when you say "This is
>> >> not the job of the web crypto WG do define this trust model" - since
>> >> this trust model so clearly affects and influences the requirements
>> >> and design.
>> >>
>> >> Cheers,
>> >> Ryan
>> >>
>> >> On Mon, Jul 29, 2013 at 1:35 AM, GALINDO Virginie
>> >> <Virginie.GALINDO@gemalto.com> wrote:
>> >> > Hello Mark, Ryan, and all,
>> >> >
>> >> >
>> >> >
>> >> > I obviously need to get into the details of your mail exchanges, but
>> >> > in order that we focus on the right tasks in the working group, I
>> >> > would like to sum up my understanding of the situation:
>> >> >
>> >> > - The extractability attribute is something the WG would like to
>> >> > have,
>> >> >
>> >> > - The trust models between javascript and browser are different -
>> >> > even if in the end both of them are breakable, the efforts involved
>> >> > to break them are different,
>> >> >
>> >> > - This is not the job of the web crypto WG to define this trust
>> >> > model, as we mentioned that the security model would be a general
>> >> > work treated in collaboration with WebAppSec and the Web Security
>> >> > IG.
>> >> >
>> >> >
>> >> >
>> >> > Note that if we fail to manage the extractability in border cases
>> >> > (such as wrap/unwrap/import/export) then we can think either about
>> >> > dropping the extractability or dropping the border-case functions.
>> >> > In both cases, our deliverable will miss the opportunity to be
>> >> > really valuable and answer a real market demand.
>> >> >
>> >> >
>> >> >
>> >> > So let's work on a technical solution to prevent this from happening.
>> >> >
>> >> >
>> >> >
>> >> > Regards,
>> >> >
>> >> > Virginie
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > From: Mark Watson [mailto:watsonm@netflix.com]
>> >> > Sent: Saturday, 27 July 2013 10:56
>> >> > To: Ryan Sleevi
>> >> > Cc: Harry Halpin; GALINDO Virginie; public-webcrypto@w3.org
>> >> > Subject: Extractability
>> >> >
>> >> >
>> >> >
>> >> > All,
>> >> >
>> >> >
>> >> >
>> >> > I changed the subject of this thread, because Ryan is raising again
>> >> > the
>> >> > question of whether the extractable attribute makes sense at all. Or,
>> >> > rather, the more general question of whether use-cases where the
>> >> > Javascript
>> >> > is less trusted by the service than the UA are in scope for our work.
>> >> >
>> >> >
>> >> >
>> >> > I believe we should decide on this more general question and then
>> >> > consider
>> >> > the implications for extractable and for wrap/unwrap.
>> >> >
>> >> >
>> >> >
>> >> > On that question, I don't believe it is our job in WebCrypto to
>> >> > perform
>> >> > detailed application security analysis. We are providing tools to
>> >> > application security engineers and the tools we provide are based on
>> >> > use-cases.
>> >> >
>> >> >
>> >> >
>> >> > For our use-case, we cannot use HTTPS to deliver our page because we
>> >> > have to
>> >> > access non-SSL resources such as CDN content. Switching CDNs to SSL
>> >> > is
>> >> > expensive in terms of cost, computing resources, and network
>> >> > overhead.
>> >> > And,
>> >> > we no longer want to use SSL because our target devices do not and
>> >> > cannot
>> >> > get accurate time. (There are a number of CE devices that wish to use
>> >> > the
>> >> > HTML5 solution -- the standard has to consider them and not just
>> >> > desktop
>> >> > browsers.) In fact, our whole reason for using WebCrypto is to build
>> >> > our
>> >> > own
>> >> > secure application protocol to use instead of SSL.
>> >> >
>> >> >
>> >> >
>> >> > As a result, MITM attacks against the Javascript are relatively easy.
>> >> >
>> >> >
>> >> >
>> >> > Further, in the Netflix case there is value in attacking the
>> >> > crypto
>> >> > to
>> >> > extract the keying material because that allows you to bypass Netflix
>> >> > service restrictions or deny service to the legitimate user. If
>> >> > keying
>> >> > material is known to be non-extractable (modulo TOFU), then we can
>> >> > still
>> >> > be
>> >> > assured that it is the same browser we are talking to (or at least a
>> >> > browser
>> >> > to which the same user has migrated the keys, if they are
>> >> > sophisticated
>> >> > enough to do that.)
>> >> >
>> >> >
>> >> >
>> >> > ...Mark
>> >> >
>> >> >
>> >> >
>> >> > On Thu, Jul 25, 2013 at 9:42 PM, Ryan Sleevi <sleevi@google.com>
>> >> > wrote:
>> >> >
>> >> > On Thu, Jul 18, 2013 at 7:30 PM, Mark Watson <watsonm@netflix.com>
>> >> > wrote:
>> >> >>
>> >> >>
>> >> >> On Thu, Jul 18, 2013 at 7:09 PM, Ryan Sleevi <sleevi@google.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> On Mon, Jul 8, 2013 at 5:12 PM, Mark Watson <watsonm@netflix.com>
>> >> >>> wrote:
>> >> >>> > Returning to the subject of the original post, and to start off
>> >> >>> > the
>> >> >>> > discussion.
>> >> >>> >
>> >> >>> > Ryan has mentioned two other possibilities for solving this
>> >> >>> > problem,
>> >> >>> > so
>> >> >>> > I'd
>> >> >>> > like to take a moment to describe my understanding of those.
>> >> >>> >
>> >> >>> > (1) Implicit unwrap semantics in pre-provisioned keys
>> >> >>> >
>> >> >>> > A pre-provisioned key with usage unwrap could be imbued with
>> >> >>> > behaviors
>> >> >>> > that
>> >> >>> > dictate the extractable and usage attributes of keys that it
>> >> >>> > unwraps
>> >> >>> > or
>> >> >>> > even
>> >> >>> > that imbue the unwrapped keys with other such properties. The
>> >> >>> > former
>> >> >>> > would
>> >> >>> > be sufficient for "single step" key wrapping, where the final key
>> >> >>> > to
>> >> >>> > be
>> >> >>> > used
>> >> >>> > for encryption, decryption, signature or signature verification
>> >> >>> > is
>> >> >>> > wrapped
>> >> >>> > directly with the pre-provisioned key. The special property of
>> >> >>> > the
>> >> >>> > pre-provisioned key ensures that the final key has extractable =
>> >> >>> > false.
>> >> >>> >
>> >> >>> > If you want to have two steps, for example the key you are
>> >> >>> > transferring
>> >> >>> > is
>> >> >>> > encrypted using a temporary Content Encryption Key (as in JWE)
>> >> >>> > and
>> >> >>> > then
>> >> >>> > this
>> >> >>> > CEK is wrapped using the pre-provisioned key, then you not only
>> >> >>> > need
>> >> >>> > the
>> >> >>> > pre-provisioned key to force extractable = false and usage =
>> >> >>> > unwrap
>> >> >>> > on
>> >> >>> > the
>> >> >>> > CEK, but it must also transfer a special property to the CEK, so
>> >> >>> > that
>> >> >>> > when
>> >> >>> > this in turn is used for unwrapping the resultant key always has
>> >> >>> > extractable
>> >> >>> > = false.
>> >> >>>
>> >> >>> Correct. The "Named Pre-provisioned keys" is already imbued with
>> >> >>> special properties by definition, so this is consistent.
>> >> >>>
>> >> >>> JWK is not unique in this 'two step' form - consider multi-party
>> >> >>> RSA-KEM - you have the RSA key, the derived per-party KEK, and the
>> >> >>> shared, protected key.
>> >> >>>
>> >> >>> >
>> >> >>> > (2) Explicit attributes on wrapping keys
>> >> >>> >
>> >> >>> > A key with usage "unwrap" also has properties which dictate the
>> >> >>> > attributes
>> >> >>> > of keys that it unwraps. Let's call these properties
>> >> >>> > "unwrap-extractable"
>> >> >>> > and "unwrap-usages". Whenever a key, W, is used to perform an
>> >> >>> > unwrap
>> >> >>> > operation, the unwrapped key, K, gets its attributes set as
>> >> >>> > follows:
>> >> >>> >
>> >> >>> > K.extractable = W.unwrap-extractable
>> >> >>> > K.usages = W.unwrap-usages
>> >> >>> >
>> >> >>> > Again, this is sufficient for single-step unwrapping. When the
>> >> >>> > wrapping
>> >> >>> > key
>> >> >>> > W is generated, the unwrap-extractable and unwrap-usages
>> >> >>> > properties
>> >> >>> > are
>> >> >>> > set
>> >> >>> > to 'false' and the intended usages of the expected wrapped key,
>> >> >>> > respectively. When it comes to unwrapping, the unwrapped key, K,
>> >> >>> > gets
>> >> >>> > the
>> >> >>> > appropriate properties.
>> >> >>>
>> >> >>> Correct.
>> >> >>>
>> >> >>> This matches PKCS#11's CKA_WRAP_TEMPLATE and CKA_UNWRAP_TEMPLATE
>> >> >>> properties, for which the smart card and secure element industry
>> >> >>> have
>> >> >>> long since embraced as sufficient for a variety of high-security
>> >> >>> needs
>> >> >>> (eg: eID cards, as a number of members have pointed out)
>> >> >>>
>> >> >>> >
>> >> >>> > However, if the intended usage of the key K is also for
>> >> >>> > unwrapping
>> >> >>> > (as
>> >> >>> > in
>> >> >>> > the two-step key wrapping described above), we need a way to set
>> >> >>> > K.unwrap-extractable and K.unwrap-usages.
>> >> >>> >
>> >> >>> > Theoretically, we could go down the path of having
>> >> >>> > unwrap-extractable
>> >> >>> > and
>> >> >>> > unwrap-usages each be an array, popping the first value on each
>> >> >>> > unwrap
>> >> >>> > operation, i.e.
>> >> >>> >
>> >> >>> > K.extractable = W.unwrap-extractable[ 0 ]
>> >> >>> > K.usages = W.unwrap-usages[ 0 ]
>> >> >>> > K.unwrap-extractable = W.unwrap-extractable[ 1 : ]
>> >> >>> > K.unwrap-usages = W.unwrap-usages[ 1 : ]
>> >> >>> >
>> >> >>> > (using python-like slice notation)
>> >> >>> >
>> >> >>> > It may not be necessary to explicitly expose these attributes on
>> >> >>> > the
>> >> >>> > Key
>> >> >>> > object: it may be sufficient to have them settable at key
>> >> >>> > creation
>> >> >>> > time.
>> >> >>> >
>> >> >>> > The other option is to have the extractable and usage attributes
>> >> >>> > carried
>> >> >>> > securely with the wrapped key, as I have proposed.
>> >> >>>
>> >> >>> Note: This solution ONLY works with JWE-protected-JWK keys - it
>> >> >>> does
>> >> >>> not and cannot work with 'raw' or 'pkcs8'/spki. The smart card /
>> >> >>> HSM /
>> >> >>> SE industry certainly seems to recognize that mixing/matching as
>> >> >>> you
>> >> >>> propose only really works in an implementation-specific manner -
>> >> >>> see
>> >> >>> the CKM_SEAL_KEY proposal in the OASIS TC to see how the very
>> >> >>> nature
>> >> >>> of 'opaque' key blobs is left up to implementations because of
>> >> >>> this.
>> >> >>>
>> >> >>> You missed the third option though - which is that the (JavaScript)
>> >> >>> caller specifies the policy.
>> >> >>
>> >> >>
>> >> >> As you explain below, that's not an option that maintains the
>> >> >> extractability
>> >> >> functionality. In this mail, I was exploring options which do that.
>> >> >>
>> >> >>>
>> >> >>>
>> >> >>> If I can sum up the discussion so far, the two objections against
>> >> >>> this
>> >> >>> last point (eg: what is currently specified) are:
>> >> >>> 1) It allows end-users to manipulate variables (eg: in the
>> >> >>> Javascript
>> >> >>> console) to circumvent this
>> >> >>> 2) In the event of an XSS, an attacker can unwrap a key and set
>> >> >>> extractable to false.
>> >> >>>   2.1) The first attack requires the attacker has previously
>> >> >>> observed
>> >> >>> a wrapped key in transit (eg: MITM) before an XSS, then later XSSes
>> >> >>> and replays the original key with 'extractable' as true.
>> >> >>>   2.2) The second attack requires the attacker have XSSed the site,
>> >> >>> the server send a wrapped key, and the XSS change 'extractable' to
>> >> >>> true.
>> >> >>>
>> >> >>> I see #1 as an explicit non-goal for a general web spec - it's a
>> >> >>> feature, not a bug.
>> >> >>
>> >> >>
>> >> >> I don't see it as consistent with the existing extractable attribute
>> >> >> though.
>> >> >> We should be consistent. Following your approach, we should remove
>> >> >> the
>> >> >> extractable attribute (not that I am proposing this).
>> >> >>
>> >> >>>
>> >> >>> #2.1 can (and should) be mitigated via HTTPS and related.
>> >> >>> #2.2 can (and should) be mitigated via CSP and related.
>> >> >>
>> >> >>
>> >> >> There are many ways in which the Javascript running on the user's
>> >> >> machine
>> >> >> may
>> >> >> not be the Javascript that either the user or the service provider
>> >> >> expects.
>> >> >
>> >> > If you think that this is relevant to the threat model, you
>> >> > absolutely
>> >> > need to provide an expansion on this.
>> >> >
>> >> > If you're suggesting the UA defend against "malware", then that's a
>> >> > non-starter. If you're talking about extensions or other such, then
>> >> > either the user was informed and consented, or it's malware. I don't
>> >> > see how you can arrive in a situation where neither party has
>> >> > authorized something AND that being a situation that we as a WG must
>> >> > deal with.
>> >> >
>> >> >
>> >> >> The extractability attribute provides some protection against such
>> >> >> scripts
>> >> >> obtaining the raw keying material once it has been installed,
>> >> >> provided
>> >> >> the
>> >> >> browser itself is not compromised. We're not in a position to do
>> >> >> security
>> >> >> engineering for every possible application here, we're providing
>> >> >> tools
>> >> >> and
>> >> >> extractability is a useful one.
>> >> >>
>> >> >> Given the above, it's completely reasonable to want to maintain this
>> >> >> property with wrapped keys.
>> >> >
>> >> > Again, if this is the malware case, it's completely unreasonable to
>> >> > want to maintain this property.
>> >> >
>> >> >
>> >> >>
>> >> >>>
>> >> >>>
>> >> >>> Finally, the Structured Clonability of Key objects permits other
>> >> >>> creative uses that have strong parallels to existing software such
>> >> >>> as
>> >> >>> middleware, for example, by having a 'trusted' origin perform the
>> >> >>> unwrapping, and then postMessaging() to the untrusted origin
>> >> >>> (which,
>> >> >>> for example, may not be able to support strict CSP policies), while
>> >> >>> still preserving attributes.
>> >> >>
>> >> >>
>> >> >> Sure, but you are making a bunch of assumptions or imposing a bunch
>> >> >> of
>> >> >> constraints on how applications are designed. What I can say is that
>> >> >> for
>> >> >> our
>> >> >> application, this wouldn't work. Our security analysis suggests that
>> >> >> we
>> >> >> should in all cases attach a different level of trust to the
>> >> >> Javascript
>> >> >> code
>> >> >> than we do to the browser code. Both can be compromised, of course,
>> >> >> but
>> >> >> the
>> >> >> ways in which the Javascript can be attacked are more numerous and
>> >> >> varied.
>> >> >>
>> >> >> ...Mark
>> >> >>
>> >> >
>> >> > Naturally, I strongly disagree with this as being a reasonable goal
>> >> > for
>> >> > the
>> >> > API.
>> >> >
>> >> > It is, in my view, unreasonable to simultaneously suggest you 'trust'
>> >> > JS to perform crypto but then don't trust the JS performing the
>> >> > crypto. As we discussed from the very beginning, the mere act of
>> >> > permitting cryptographic operations is often more than sufficient to
>> >> > leverage any number of attacks - the formal analysis of PKCS#11 we
>> >> > discussed in our first F2F was very much a demonstration of this and
>> >> > why such a goal is unreasonable for any generic API.
>> >> >
>> >> > You're right, it absolutely makes a statement that "If you're going
>> >> > to
>> >> > run code, you trust the code you're going to run" - and using *other*
>> >> > mechanisms to improve or augment that trust (eg: extensions/sysapps,
>> >> > as we've also discussed extensively).
>> >> >
>> >> > As has also been discussed at length, if you're assuming a MITM that
>> >> > can modify JS, then it's entirely reasonable to assume that if
>> >> > they're
>> >> > not attacking the crypto, they're attacking any number of other
>> >> > aspects - including stripping out the crypto entirely. It's a never
>> >> > ending game of whack-a-mole that benefits no one, compared to
>> >> > actually
>> >> > dealing with the trust problem where it belongs - with the JS itself.
>> >> >
>> >> >
>> >
>> >
>
>

Received on Tuesday, 13 August 2013 00:35:01 UTC