From: John Lyle <john.lyle@cs.ox.ac.uk>
Date: Thu, 03 Jan 2013 11:30:15 +0000
To: public-sysapps@w3.org
On 03/01/13 02:37, Jonas Sicking wrote:
>> * The abstract should probably read "This document specifies the runtime
>> and security model of _System Level_ Web Applications".
>
> Depends on what you mean by "system level". The intent is that the
> spec defines the behavior both for built-in applications like the
> dialer or contacts, as well as third-party apps that can be developed
> by anyone and installed through a store or through a web browser.

Hi Jonas,

Thanks for getting back to me so quickly. I had a few further comments.
For brevity I've not quoted all of the last email.

I only meant "system level" in the sense that this working group is about
"system applications" - something _more_ than a normal web page in a
browser. There's a shortage of terminology, perhaps.

> In short, given that mostly what we wanted to lift from widgets was a
> simple hierarchical name-value storage, it didn't seem to buy us a
> whole lot to grab that from widgets.

Thanks, that makes sense.

>> * Is there a concept of a feature or permission that is optionally
>> required? I'm thinking of situations where the device is under a
>> corporate policy, for example, and a certain feature or permission
>> can't be granted under any circumstance. Can an application express
>> that this feature is a deal breaker, or are all 'required' features
>> actually optional?

> Simply don't put the feature in "required_features" and do runtime
> testing to check if the feature is available. For example to test if
> indexedDB is implemented, do
>
> if ("indexedDB" in window) { ... }

Ok, so "required_features" really does mean "required". Thanks for
clarifying. I'm in favour of the features / permissions split; it seems
sensible.

>> * Where is the origin of the application defined by the app? It isn't
>> listed in the manifest properties, but referenced frequently. In
>> particular, section 3.1 lists the 'origin' as being the origin
>> described in the manifest, separately from the installOrigin.
> For hosted applications, it's the origin where the application
> manifest is located.
>
> For packaged apps, it's "app://" plus the unique identifier generated
> when the packaged app is installed.

Yep, the definition of an origin for an application is clear. I was
referring more to section 3.1, where the Application interface offers the
"origin" attribute which "must return the origin of the application as
described in the application manifest". However, the manifest does not
specify an origin, as far as I can tell.

>> * For packaged apps there are two copies of a manifest: one outside
>> and one inside. Which has precedence? What happens when they conflict?
>> You mentioned that this is for update / installation purposes - could
>> you elaborate? It seems to me that these shouldn't both be called
>> 'manifest' as they serve different purposes and have different
>> contents.

> I would personally call the file which is outside of the package
> something other than simply a manifest. Maybe an update-manifest. The
> contents of the manifest outside of the package is intended to be used
> *only* for the install and update of apps. So it'll only contain
> enough meta-data that it allows the runtime to display an install
> dialog and then download the actual app package. As well as download
> future updates.

Thanks, that makes sense. I think "update-manifest" is a useful term.

>> How is the application manifest inside the package proof that the
>> update is genuine? Are you assuming that packages are served from
>> trusted origins only over TLS/SSL?

> It's the responsibility of the store to make sure to use safe
> protocols to deliver the update-manifest and the app package.

But it's presumably the responsibility of the user (or some other device
stakeholder) to choose legitimate stores.
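For instance, one obvious default rule would be for the runtime to refuse
to fetch an update-manifest or package over anything other than TLS. A
minimal sketch of such a check (the function name and the rule itself are
my own invention, not anything the current draft defines):

```javascript
// Hypothetical runtime-side check: only fetch update-manifests and app
// packages from TLS-protected origins. Nothing in the spec mandates
// this; it illustrates the kind of default rule being suggested.
function isSafeUpdateUrl(url) {
  var parsed;
  try {
    parsed = new URL(url); // reject anything that isn't an absolute URL
  } catch (e) {
    return false;
  }
  // Plain http: (and every other scheme) is refused.
  return parsed.protocol === "https:";
}
```

A runtime applying this rule would simply abort installation or update
when the check fails, rather than silently falling back to an insecure
transport.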
It seems reasonable, therefore, to specify a few default rules on the
manner in which update-manifests and packages may be served, to avoid the
user unwittingly using an unreliable and insecure app store.

> We've also been using a signing mechanism to sign app packages, though
> this isn't yet a generalized mechanism but rather only works for
> stores which the app runtime has built-in certificates for. The spec
> mentions this briefly, but more detail is definitely needed. The
> intent of this signing mechanism is not, however, to be a generic
> mechanism to ensure that updates are authentic, but rather a mechanism
> to ensure that applications which have access to sensitive APIs are
> authentic.

It sounds like the signing mechanism is based on distributor signatures
rather than author signatures - or are both supported? From section
8.4.3, I take it that the UA must maintain an association between
installation origins and the certificates and keys used by this entity to
sign app packages. Is there expected to be any relationship between the
installation origin's DNS identity, as per the TLS/SSL certificate, and
the certificate for app signing keys? To put it another way - are you
doing anything similar to the way WAC verifies widget signatures:
http://specs.wacapps.net/core/index.html#digital-signatures ?

>> * You don't seem to have defined 'hosted' applications, except "not a
>> packaged application". Can a system application be a hosted
>> application? This is a bit unclear.

> Again, depends on your definition of a "system application".

To try and clarify: a hosted application is just a set of webpages with
an update-manifest? But packaged applications can still load images and
other resources, so the distinction between packaged and hosted is that a
packaged application has *some* resources within its container, whereas a
hosted application is entirely hosted online and has no container.
The scope of a hosted application is, therefore, whatever pages are
hosted by the origin and can be navigated to by the user. It was
mentioned that there are security advantages to packaged apps over hosted
apps. Is it expected that hosted apps will be given access to fewer APIs?

>> * Applications are isolated from each other. In some of our (webinos)
>> use cases, apps need to be able to communicate with each other in a
>> limited way (e.g., over a message channel). Is this forbidden in your
>> proposal? Or can one application trigger a System Message to another
>> somehow?

> We envision that separate specifications are developed for these
> intents. For example our WebActivities proposal allows for limited
> cross-application communication. Similar specifications can be
> developed to allow more advanced cross-application communication.

Ok.

>> * Permissions can be granted automatically, denied automatically,
>> granted/denied by the user at install time, or granted/denied by the
>> user at runtime. Is there a file or system for saving these decisions
>> and specifying defaults? This (or at least requirements for it) seems
>> like a good thing to include in a potential standard.

> How the runtime saves security decisions is up to the implementation.

Yes, although there are some good reasons for specifying at least some of
this. Firstly, it enables portability and interoperability. A Firefox OS
Web App package, for example, could be installed on a new device with a
different user agent and maintain the same set of security preferences as
it had originally. Secondly, if a system application is going to be
compatible with a wider security policy, and be an equal to a native app,
it needs to be controllable and modifiable by the operating system - an
operating system which may be distinct from the user agent. Specifying a
format for storing security decisions would make this much easier.
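As a strawman, such a format could be as simple as a per-origin record of
decisions that a user agent can replay instead of re-prompting. All field
names and values below are invented for illustration; nothing in the
current draft mandates this shape:

```javascript
// Hypothetical portable record of permission decisions for one app.
// "source" records who made the decision (user at runtime, device
// policy, ...), which matters if an OS policy must be able to override.
var storedDecisions = {
  origin: "app://example-app", // illustrative origin, not a real app
  decisions: [
    { permission: "geolocation", decision: "granted", source: "user-runtime" },
    { permission: "contacts",    decision: "denied",  source: "policy" }
  ]
};

// A user agent importing such a record replays past decisions; any
// permission with no stored decision falls back to prompting.
function lookupDecision(store, permission) {
  var match = store.decisions.find(function (d) {
    return d.permission === permission;
  });
  return match ? match.decision : "prompt";
}
```

The point is not this particular schema but that agreeing on *some*
interchange format would let decisions travel with the app across devices
and user agents.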
In webinos we store access control rules and decisions using a set of
policy files which are synchronised between the user's different devices.
I'm not a big fan of the way we're doing it (XACML) but I think that the
principle is sound.

> However note that the intent is that there will be *no* security
> decisions to be made at install/update time. This is quite intentional
> since we want users to be able to always automatically get updated to
> the latest version of an app without having to make security
> decisions. Otherwise you'll end up in situations where the user will
> have to choose between running an old exploitable version of an app,
> or a new version which is asking for a permission that the user
> doesn't want to grant.

This is a good principle to work with. I think that the act of installing
the application is a security decision in itself, but updates should not
be held back if at all possible.

>> * I think section 8.4.1 is oversimplified. Is there a threat model or
>> a risk analysis available from B2G that we should be referring to?

> Yes, section 8.4 in general needs more details. We don't have any
> specific threat models, but there's some background available at
>
> https://wiki.mozilla.org/Apps/Security

Thanks.

>> * Section 8.4.5 - We have a broadly similar CSP policy in webinos, but
>> we haven't trialled it extensively with developers yet. Do you have
>> any experience with this recommendation so far?

> I haven't heard any explicit feedback from 3rd party developers yet.
> It was surprisingly easy to apply this policy to the built-in apps
> that we are shipping with Firefox OS though.
>
> The feedback that I've heard from CSP in general is that forbidding
> inline script is a rather big hurdle. But it's unfortunately required
> in order to not make script injection possible.
>
> I think the policy isn't a huge deal if the app is built with CSP in
> mind from the start. But it can be a lot of work to retrofit an app to
> use CSP.
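To make the inline-script point concrete, a policy in the spirit of what
you describe - all script from the app's own origin, nothing inline -
might look like the sketch below. The exact policy string and the helper
are my own illustration, not what either spec mandates:

```javascript
// Illustrative CSP: script may only come from the app itself, so
// injected inline script (and inline handlers like onclick="...")
// never executes. This is an example policy, not the spec's.
var examplePolicy = "default-src 'self'; script-src 'self'; object-src 'none'";

// Rough check of whether a policy string permits inline script
// (ignoring the default-src fallback, for brevity): inline script
// runs only if script-src explicitly includes 'unsafe-inline'.
function allowsInlineScript(policy) {
  var scriptSrc = policy
    .split(";")
    .map(function (s) { return s.trim(); })
    .find(function (s) { return s.startsWith("script-src"); });
  return scriptSrc ? scriptSrc.indexOf("'unsafe-inline'") !== -1 : true;
}
```

Retrofitting an existing app then amounts to moving every inline handler
and script block into external files served from the app's own origin,
which is exactly the "lot of work" you mention.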
Thanks very much - that's helpful to know.

Best regards,

John
Received on Thursday, 3 January 2013 11:30:36 UTC