Re: Runtime and Security Model for Web Applications

On Thu, Jan 3, 2013 at 3:30 AM, John Lyle <john.lyle@cs.ox.ac.uk> wrote:
> On 03/01/13 02:37, Jonas Sicking wrote:
>>>
>>> * The abstract should probably read "This document specifies the runtime
>>> and
>>> security model of _System Level_ Web Applications".
>>
>>
>>
>> Depends on what you mean by "system level". The intent is that the
>> spec defines the behavior both for built-in applications like the
>> dialer or contacts, as well as third-party apps that can be developed
>> by anyone and installed through a store or through a webbrowser.
>
>
> Hi Jonas,
>
> Thanks for getting back to me so quickly.  I had a few further comments.
> For brevity I've not quoted all of the last email.
>
> I only meant "system level" in the sense that this working group is about
> "system applications" - something _more_ than a normal web page in a
> browser.  There's a shortage of terminology, perhaps.

Yeah, I don't think there's any commonly understood terminology here.
What I've been using lately is simply "web app".

>>> * Is there a concept of a feature or permission that is optionally
>>> required?
>>> I'm thinking of situations where the device is under a corporate policy,
>>> for
>>> example, and a certain feature or permission can't be granted under any
>>> circumstance.  Can an application express that this feature is a deal
>>> breaker, or are all 'required' features actually optional?
>>
>> Simply don't put the feature in "required_features" and do runtime
>> testing to check if the feature is available. For example to test if
>> indexedDB is implemented, do
>>
>> if ("indexedDB" in window) { ... }
>
>
> Ok, so "required_features" really does mean "required". Thanks for clarifying.

Yes. The intent is that a web store can use this field to hide
applications from a user if the application won't work on the user's
device anyway. I.e. there's no point in displaying a
"MyAwesomeShooter" game to a user whose UA doesn't support indexedDB,
if the game doesn't work without indexedDB.

If the game kind of works without indexedDB, maybe with worse
performance or with reduced functionality, then it can choose not to
list indexedDB in "required_features" and use fallbacks as it deems
appropriate. It's up to the developer to make the call which features
to list in "required_features" and which ones not to.
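
To make the trade-off concrete, here's a rough sketch (the manifest
line is only illustrative; "required_features" is the field name being
discussed, everything else is made up):

  // In the manifest, list only the features the app truly can't run
  // without, e.g.:
  //   "required_features": ["indexedDB"]
  //
  // If the app can limp along without a feature, leave it out of
  // "required_features" and feature-detect at runtime instead:
  var db = ("indexedDB" in window) ? window.indexedDB : null;
  if (!db) {
    // reduced functionality: e.g. keep game state in memory only
  }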

>>> * Where is the origin of the application defined by the app?  It isn't
>>> listed in the manifest properties, but referenced frequently.  In
>>> particular, section 3.1 lists the 'origin' as being the origin described
>>> in
>>> the manifest, separately from the installOrigin.
>>
>> For hosted applications, it's the origin where the application
>> manifest is located.
>>
>> For packaged apps, it's "app://" plus the unique identifier generated
>> when the packaged app is installed.
>
> Yep, the definition of an origin for an application is clear.  I was
> referring more to section 3.1, where the Application interface offers the
> "origin" attribute which "must return the origin of the application as
> described in the application manifest".  However, the manifest does not
> specify an origin, as far as I can tell.

Ah, yes, that's wrong. It is the origin *of* the manifest, which is
separate from the installOrigin (the origin of the page that called
.install(...)). But it's not information *from* the manifest.
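
To illustrate with concrete values (the URLs and the UUID below are
made up; only the origin/installOrigin distinction itself comes from
section 3.1):

  // Hosted app, manifest at https://game.example.com/manifest:
  //   app.origin        == "https://game.example.com"
  //                        (origin of the manifest URL)
  //   app.installOrigin == "https://store.example.org"
  //                        (origin of the page that called .install())
  //
  // Packaged app, identifier generated at install time:
  //   app.origin        == "app://c4f2d87e-0a6b-4f7b-9b2e-1c8e2d3f4a5b"
  //   app.installOrigin == "https://store.example.org"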

>>> How is the application manifest inside the package
>>> proof that the update is genuine?  Are you assuming that packages are
>>> served
>>> from trusted origins only over TLS/SSL?
>>
>> It's the responsibility of the store to make sure to use safe
>> protocols to deliver the update-manifest and the app package.
>
> But it's presumably the responsibility of the user (or some other device
> stakeholder) to choose legitimate stores.  It seems reasonable, therefore,
> to specify a few default rules on the manner in which update-manifests and
> packages may be served to avoid the user unwittingly using an unreliable and
> insecure app store.

I guess I'm reluctant to specify that manifests have to be loaded over
a secure connection unless we also specify that app resources have to
be loaded over secure connections, both for packaged and hosted apps,
since otherwise we're not actually improving security.

>> We've also been using a signing mechanism to sign app packages, though
>> this isn't yet a generalized mechanism but rather only works for
>> stores which the app runtime has built-in certificates for. The spec
>> mentions this briefly, but more detail is definitely needed. The
>> intent of this signing mechanism is not, however, to be a generic
>> mechanism to ensure that updates are authentic, but rather a mechanism
>> to ensure that applications which have access to sensitive APIs are
>> authentic.
>
> It sounds like the signing mechanism is based on distributor signatures
> rather than author signatures, or are both supported?

Only distributor signatures are supported in Firefox OS right now.
Note, though, that anyone can be a distributor. So if an author wants
to write a trusted (aka "privileged") app and has the ability to get
devices to trust their signature, they can still sign the app
themselves.

> From section 8.4.3, I take it that the UA must maintain an association
> between installation origins and the certificates and keys used by this
> entity to sign app packages.  Is there expected to be any relationship
> between the installation origin's DNS identity, as per the TLS/SSL
> certificate, and the certificate for app signing keys?
>
> To put it another way - are you doing anything similar to the way WAC
> verifies widget signatures:
> http://specs.wacapps.net/core/index.html#digital-signatures ?

I'm not a crypto expert, but as I understand it the answer is "No".

The UA is expected to ship with a built-in set of known, trusted
certificates. For each of those certificates it's expected to have a
list of origins from which that certificate is allowed to sign
packaged apps.

When a packaged app is signed with a certificate that isn't rooted in
one of the built-in certificates, or is installed from an origin that
that certificate is not allowed to sign apps from, the app package is
rejected and treated as a download error. The same applies, of course,
if the signature is simply invalid.
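
In rough JavaScript terms, the install-time check is something like
this (every name below is made up for illustration; none of it is from
the spec):

  // Illustrative sketch of the signature/origin check described above.
  function validatePackage(pkg, installOrigin, builtInRoots) {
    if (!pkg.hasValidSignature()) {
      return "download-error";      // signature missing or invalid
    }
    for (var i = 0; i < builtInRoots.length; i++) {
      var root = builtInRoots[i];
      if (root.isRootOf(pkg.signingCertificate)) {
        // Cert chains to a built-in root; check that this root may
        // sign apps installed from this origin.
        return root.allowedOrigins.indexOf(installOrigin) !== -1
          ? "ok"
          : "download-error";       // origin not allowed for this root
      }
    }
    return "download-error";        // not rooted in any built-in cert
  }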

Any UA which supports the runtime spec is expected to come with a set
of certificates that the developers of the runtime trust. Since the
user trusts the runtime enough to run it and to hand it any data that
is passed to the applications running in it, we felt that the user by
extension also trusts it to make decisions about which applications to
trust.

However, runtimes are also encouraged to let the user add or remove
certificates from this built-in set.

>>> * You don't seem to have defined 'hosted' applications, except "not a
>>> packaged application".  Can a system application be a hosted application?
>>> This is a bit unclear.
>>
>> Again, depends on your definition of a "system application".
>
> To try and clarify: a hosted application is just a set of webpages with an
> update-manifest?

update-manifests only exist for packaged apps.

But if a "system application" is simply an application which the user
can install, then yes, hosted applications can be system applications.

> But packaged applications can still load images and other resources, so the
> distinction between packaged and hosted is that a packaged application has
> *some* resources within its container, whereas a hosted application is
> entirely hosted online and has no container.

That is correct.

> The scope of a hosted application is, therefore, whatever pages are hosted
> by the origin and can be navigated to by the user.

Correct.

> It was mentioned that there are security advantages to packaged apps over
> hosted apps.  Is it expected that hosted apps will be given access to fewer
> APIs?

I would not say that there are security advantages to packaged apps
over hosted apps.

But there are security advantages of "trusted" (aka "privileged")
apps over other apps, since trusted apps are expected to have been
reviewed in some form by someone the user directly or indirectly
trusts. They also run under a CSP policy which ensures that only the
reviewed code executes in contexts that have access to privileged
APIs.
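
For example, the kind of policy I have in mind is along these lines
(illustrative only, not the exact string a spec would mandate):

  Content-Security-Policy:
    default-src *; script-src 'self'; object-src 'none';
    style-src 'self' 'unsafe-inline'

With script-src 'self', only script shipped (and therefore reviewed)
as part of the app can execute; inline and remotely loaded script is
blocked.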

>>> * Permissions can be granted automatically, denied automatically,
>>> granted/denied by the user at install time, or granted/denied by the user
>>> at
>>> runtime.  Is there a file or system for saving these decisions and
>>> specifying defaults?  This (or at least requirements for it) seem like a
>>> good thing to include in a potential standard.
>>
>> How the runtime saves security decisions is up to the implementation.
>
> Yes, although there are some good reasons for specifying at least some of
> this.
>
> Firstly, it enables portability and interoperability.  A Firefox OS Web App
> package, for example, could be installed on a new device with a different
> user agent and maintain the same set of security preferences as it had
> originally.

I don't really see how synchronizing security decisions between
devices affects portability and interoperability.

Consider, for example, the geolocation API which exists in web
browsers today. All popular browsers that implement this API display a
security dialog and allow the user to say "yes" or "no" to a specific
web page using it. maps.google.com uses this API and thus triggers the
UI which requires the user to make a security decision.
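
The page side of that interaction is just a plain API call; the prompt
and wherever the decision gets stored are entirely the UA's business:

  // Calling the API triggers the UA's permission UI; the page only
  // sees the eventual position or the error/denial.
  navigator.geolocation.getCurrentPosition(
    function (pos) {
      console.log(pos.coords.latitude, pos.coords.longitude);
    },
    function (err) {
      console.log("denied or unavailable: " + err.message);
    }
  );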

maps.google.com works fine across multiple UAs currently. And it
doesn't have to do anything special with regard to the geolocation API
to make that the case.

Yet different UAs use completely different methods for saving the
security decisions for the geolocation API. In Firefox we save the
decisions in memory if the user temporarily grants permission to use
the API, and in a SQLite database if the user does so permanently.
In Chrome I think they might use a LevelDB backend. I would expect yet
other browsers to use other mechanisms.

But interoperability doesn't seem to suffer because of this.

> Secondly, if a system application is going to be compatible with a wider
> security policy, and be an equal to a native app, it needs to be
> controllable and modifiable by the operating system, which may be distinct
> from the user agent. Specifying a format for storing
> security decisions would make this much easier.
>
> In webinos we store access control rules and decisions using a set of policy
> files which are synchronised between the user's different devices. I'm not a
> big fan of the way we're doing it (XACML) but I think that the principle is
> sound.

We can certainly specify mechanisms for synchronizing security
decisions, both between devices and between runtimes on the same
device. However, I think we should do that as a separate specification
so as not to increase the scope. The runtime and security model spec
seems quite useful without it, and I don't see that leaving it out
harms interoperability for applications between runtimes.

/ Jonas

Received on Friday, 4 January 2013 04:31:21 UTC