W3C home > Mailing lists > Public > whatwg@whatwg.org > July 2009

[whatwg] Security risks of persistent background content (Re: Installed Apps)

From: Drew Wilson <atwilson@google.com>
Date: Wed, 29 Jul 2009 10:39:49 -0700
Message-ID: <f965ae410907291039y2ec4bad3u7a5d5ca793ba2ad1@mail.gmail.com>
Maciej, thanks for sending this out. These are great points - I have a few
responses below. The main thrust of your argument seems to be that allowing
web applications to run persistently opens us up to some of the same
vulnerabilities that native (desktop and mobile) apps have, and I agree with
that. The question (as with native apps) is whether we can mitigate those
vulnerabilities, and whether the functionality that persistence provides is
worth the larger attack surface.
On Tue, Jul 28, 2009 at 10:58 PM, Maciej Stachowiak <mjs at apple.com> wrote:

>
> On Jul 28, 2009, at 10:01 AM, Drew Wilson wrote:
>
>  I've been kicking around some ideas in this area. One thing you could do
>> with persistent workers is restrict network access to the domain of that
>> worker if you were concerned about botnets. That doesn't address the "I
>> installed something in my browser and now it's constantly sucking up my CPU"
>> issue, but that makes us no different than Flash :-P
>>
>
> Here are some security risks I've thought about, for persistent workers and
> persistent background pages:
>
> 1) If they have general-purpose network access, they are a tool to build a
> DDOS botnet, or a botnet to launch attacks against vulnerable servers.


Indeed. There are mitigations against this (basically, leveraging some of
the same infrastructure we have in place to warn users of malware), although
not all browsers have this protection currently. But, yes, this
(intentionally) makes the browser more similar to the desktop environment,
and so more vulnerable to desktop-style attacks.


>
> 2) If they do not have general-purpose network access, this can be worked
> around with DNS rebinding. Note that ordinarily, DNS rebinding is only
> considered a risk for content protected by network position. But in the case
> of a DDOS or attempt to hunt for server vulnerabilities, this doesn't matter
> - the attack doesn't depend on the DDOS node sending credentials.


That's an interesting point. Basically, once I've gotten a farm of people to
install persistent workers, I can just rebind my domain to an arbitrary IP
address, and whatever host is at that address now gets a flood of HTTP
connections.
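To make the rebinding point concrete, here's a minimal sketch (hypothetical domain names and a toy in-memory DNS table, not real browser internals) of why a hostname-based "same-domain" check doesn't help - the browser compares the *name*, but the attacker controls what the name resolves to:

```javascript
// Toy DNS table standing in for the attacker-controlled authoritative server.
const dnsTable = new Map([['attacker.example', '203.0.113.5']]);

function resolve(host) {
  return dnsTable.get(host);
}

// A persistent worker restricted to its own origin still sends requests
// to whatever IP its hostname currently resolves to.
function workerRequestTarget(origin) {
  return resolve(new URL(origin).hostname);
}

const before = workerRequestTarget('https://attacker.example/');

// Later, the attacker rebinds the record to the victim's address; the
// same-origin check still passes, but the traffic now hits the victim.
dnsTable.set('attacker.example', '198.51.100.9');
const after = workerRequestTarget('https://attacker.example/');
```

The same-origin restriction is satisfied both times; only the destination IP has changed.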


>
> 3) If they have notification capabilities, they can be used for advertising
> spam.


Yes, although the point of notifications is that 1) they are opt-in and 2)
they are easy to opt out of (there's a "block" button on the notification
itself). So I don't know that this is a real security issue - it's really easy
to undo your decision to grant access. I'd say it's less a security issue than
a UX issue: making sure users have a way to get rid of annoying notifications
easily and permanently.
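The opt-in/opt-out model above can be sketched as a tiny per-origin grant table (hypothetical function names, not any real browser API):

```javascript
// Origins the user has explicitly granted notification access to.
const grants = new Set();

// Explicit opt-in: access exists only if the user accepted the prompt.
function requestNotificationPermission(origin, userAccepted) {
  if (userAccepted) grants.add(origin);
  return grants.has(origin);
}

// The "block" button on the notification: an easy, permanent opt-out.
function blockOrigin(origin) {
  grants.delete(origin);
}

function canNotify(origin) {
  return grants.has(origin);
}
```

The key property is that revocation is a single action taken at the point of annoyance, not buried in a settings page.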


>
> 4) If they have general network access only while a page from the same
> domain is displayed, then they can use a misleading notification to trick
> the user into going to a page on that domain, to gain general network
> network access at the moment it's needed.


Good point, although I don't think this would be an acceptable restriction
anyway. One of the main points behind persistent workers is that they can
keep a local data cache up to date (e.g. "list of upcoming calendar events")
regardless of whether a page is open.
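The calendar use case amounts to a periodic sync loop inside the worker. A simplified synchronous sketch (hypothetical names; a real worker would fetch asynchronously over the network and run this on a timer such as setInterval):

```javascript
// Build a sync step from an injected data source and a local cache,
// so the loop works whether or not any page from the origin is open.
function makeSyncLoop(fetchEvents, cache) {
  return function syncOnce() {
    const events = fetchEvents();          // same-origin network read
    cache.set('upcomingEvents', events);   // keep the local cache fresh
    return events.length;
  };
}

// Usage with a stubbed data source standing in for the network.
const cache = new Map();
const syncOnce = makeSyncLoop(
  () => [{ title: 'standup' }, { title: 'design review' }],
  cache
);
const count = syncOnce();
```

The restriction in point 4 would break exactly this: the worker needs network access while no page is displayed.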


>
> 5) Even if they only have same-domain network access, they can be used to
> create a botnet for computation - for example for purposes like distributed
> password cracking.
>

Agreed. Once you have your software running on many machines, there are many
things you could do with those cycles. Attackers probably won't be folding
proteins :)


>
> 6) They can be used to greatly extend the window of vulnerability from
> visiting a malicious site once. Consider the model where a browser patches a
> security vulnerability, and users apply the patch over some period after
> it's released. Assuming the vulnerability wasn't already known to attackers,
> users are at risk if they visit a malicious site in the period between
> release of the patch and install of the patch. But with persistent workers
> (or background pages) in the picture, users can be vulnerable if they have
> *ever* visited a malicious site - because it could have installed a
> persistent worker that periodically "phones home" for exploit code to try.
> This can greatly increase the number of people who can be affected by a
> malicious web page, and therefore greatly increases the incentive to try
> such a thing. This works even with just same-domain network access. I think
> this risk is really serious because it makes every future browser
> vulnerability much more dangerous.


Agreed that this is a big deal, and is a problem I hadn't considered
previously. I would assume that browser malware detection would blacklist
these sites, but I hate to lean on some magical malware detection
infrastructure too heavily. This seems like an issue that Apple and
Microsoft have dealt with for years in their OS offerings - how do they
handle this?
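The malware-detection mitigation amounts to checking a worker's origin against a blocklist before letting it (re)start, so a site blacklisted after the fact can't keep phoning home. A sketch (hypothetical names; no browser exposes this directly, though Safe Browsing-style services do this for page loads):

```javascript
// Origins flagged by a malware-detection service after the fact.
const malwareList = new Set(['https://evil.example']);

// Gate persistent-worker startup on the current blocklist, so a
// compromised origin loses its foothold on the next check.
function mayStartPersistentWorker(origin) {
  return !malwareList.has(origin);
}
```

The weakness, as noted above, is that this leans on the detection infrastructure actually catching the site.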


>
> This list isn't necessarily exhaustive - I'm sure there are more risks I
> haven't thought of, but note that most of these problems are not resolved by
> limiting networking to same-domain.


And as you say, since the worker author presumably controls the domain's DNS,
a "same-domain" restriction is pretty meaningless. Clearly not a
well-thought-out suggestion on my part.


>
> I don't think a permissions dialog could possibly adequately explain these
> risks, and in any case many users blindly click through alert dialogs. The
> risks are subtle but nonetheless outside user expectations for a web
> application.
>

Yeah, I'm not sure whether permissions dialogs are the right solution here.
It's a UX challenge to accurately portray what's happening, and it may run
counter to the expectations of some users. I do find it interesting that we
allow users to install and run native applications with at most one or two
warning dialogs (which can often be disabled) but feel that these same users
can't be trusted to make this decision when the content of that application
is JavaScript instead of an x86 binary.


>
> I do think offering a feature like this in the context of an application or
> extension style install experience might be acceptable - specifically an
> experience that is explicitly initiated by the user with multiple
> affirmative steps. But web features are not usually designed around such an
> expectation, usually this is the hallmark of a proprietary platform, at
> times also including central vetting and revocation capabilities.
>

That's another option - using extensions to enable this - although it's
somewhat heavyweight for someone who just wants Google Calendar event
notifications, and it doesn't carry across browsers. I'm starting to think
more about a user-initiated install process, which would help mitigate some
of the social-engineering attacks enabled by an application-initiated "click
here to install me"-type flow.

Agreed that my reference to malware detection is essentially your "central
vetting and revocation capabilities".


>
> Regards,
> Maciej
>
>