
SRI, cache validation and ServiceWorkers

From: Yoav Weiss <yoav@yoav.ws>
Date: Mon, 19 May 2014 09:12:28 +0200
Message-ID: <CACj=BEiYJz+cjthp_tzj7knzeucaeGK3ugCK4AHeRscewtThYw@mail.gmail.com>
To: "public-webappsec@w3.org" <public-webappsec@w3.org>
Summary:
--------------
* A hash-based validator header (e.g. If-Hash-Mismatch) can be used to
improve security for cacheable resources fetched while on a temporarily
MITMed network.
* It can also be used to securely deploy Service Workers without TLS.

The full story:
--------------------
With Sub-resource integrity being used for resource caching, I've been
thinking about some benefits browsers could get from using it (and hash
values in general) as a cache validator as well.

My initial thoughts revolved around the fact that browsers could use an
integrity-hash-based validator for stale cached resources in cases where
the server hasn't sent out Last-Modified or ETag response headers (which
current cache validation requires).
That would enable intermediate caches to serve a 304 response without the
resource body in these cases, and provide major savings.
Such a validator header is not necessarily tied to SRI. Browsers can
derive it from the cached resource body, regardless of the integrity
attribute.
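To make this concrete, here's a Python sketch of the exchange. The header
name If-Hash-Mismatch and the "sha256-&lt;hex&gt;" validator format are
placeholders I made up for illustration, not a proposal for exact syntax:

```python
import hashlib

# A minimal sketch of the proposed hash-based validator exchange. The
# "sha256-<hex>" format is an illustrative assumption, not anything
# standardized.

def validator_for(body: bytes) -> str:
    """The validator value a browser would send for its cached copy."""
    return "sha256-" + hashlib.sha256(body).hexdigest()

def revalidate(sent_validator: str, current_body: bytes):
    """Server (or intermediate cache) side: answer 304 with no body when
    the client's cached copy still matches, else 200 with the resource."""
    if sent_validator == validator_for(current_body):
        return 304, b""            # cached copy confirmed; no body needed
    return 200, current_body       # resource changed; send the new body

cached = b"console.log('hello');"
status, payload = revalidate(validator_for(cached), cached)
```

A matching validator gets back a bodiless 304; a mismatch gets the full
200 response, just like today's conditional requests.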

Thinking about it some more, I saw value in such a validator beyond cache
optimizations.

Browsers can use a hash based validator in order to verify that certain
(cache fresh) resources are in fact the resources that the origin server
intended to send, rather than MITMed resources.
That is in order to mitigate the HTTP cache vulnerabilities that were
discussed in relation to Service
Workers<https://github.com/slightlyoff/ServiceWorker/issues/199#issuecomment-38273273>.
The browser would be able to treat resources cached on a "suspicious"
network as "suspicious", and revalidate them once it's on a
"not-so-suspicious" network, without re-downloading them (assuming they
were not MITMed).
Browsers can use various heuristics to decide which networks are
considered "suspicious" and which aren't. (I'm assuming browsers can
access data such as the current SSID in order to run such heuristics;
I'm not sure that's a safe assumption to make.)
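As a toy sketch of what such a heuristic might look like (the SSID list
and the rule itself are invented for illustration; a real browser would
need something far more careful):

```python
from typing import Optional

# Toy "suspicious network" heuristic. The SSID list and the rule are
# purely illustrative assumptions.
SUSPICIOUS_SSIDS = {"Free_Airport_WiFi", "CoffeeShop_Guest"}

def is_suspicious(ssid: Optional[str]) -> bool:
    # Unknown network info is treated as trusted in this sketch.
    return ssid in SUSPICIOUS_SSIDS

def needs_revalidation(cached_on: Optional[str], now_on: Optional[str]) -> bool:
    """Revalidate a cached resource once the browser moves from a
    suspicious network to a not-so-suspicious one."""
    return is_suspicious(cached_on) and not is_suspicious(now_on)
```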

Which brought me to TLS-free Service Workers.

The same revalidation scheme can be used for the Service Worker script
once the user switches networks, "disabling" the script if the response
is neither a 304 nor a different, valid script. By disabling I mean that
the script will not be evicted from the cache, but will not get used
either. If the script is disabled by something that the browser suspects
is a captive portal, the browser can retry the validation once
connectivity to the outside world is reestablished. The SW script will
get re-enabled only if a 304 response is sent in reply to a validation
attempt.
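That disable/enable logic could be sketched like this (the state names
and the function signature are mine, for illustration only):

```python
# Sketch of the disable/enable rule for a TLS-free SW script. The state
# names are illustrative assumptions.

def sw_state_after_revalidation(status: int, is_valid_script: bool,
                                suspected_captive_portal: bool) -> str:
    """Decide the fate of a cached SW script after a revalidation attempt."""
    if status == 304:
        return "enabled"            # origin confirmed the cached script
    if status == 200 and is_valid_script:
        return "updated"            # a different, valid script takes over
    if suspected_captive_portal:
        return "disabled-retry"     # retry once real connectivity returns
    return "disabled"               # stays in cache, but never runs
```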

Now, all that's left to define is what a "valid script" is. If we can
assume that captive portals always respond with a 302, then every 200
response can be considered a valid script. Otherwise, we can do it by
adding an HTTP response header that basically says "I am a service
worker".
That would mean that if you wanted to deploy TLS-free SW, you'd have to
fiddle with your server config and add "I'm a service worker" headers.
That's not ideal, but it's easier than TLS, so it's likely to increase
adoption.
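Sketching that check (the exact header name here is just a placeholder
for whatever "I am a service worker" opt-in we'd pick):

```python
# Sketch of the "valid script" check. The header name
# "I-Am-A-Service-Worker" is made up for illustration.

def is_valid_sw_response(status: int, headers: dict) -> bool:
    if status == 304:
        return True   # cached script revalidated
    if status == 200:
        return headers.get("I-Am-A-Service-Worker", "").lower() == "true"
    return False      # 302 (captive portal), 404, etc. never qualify
```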

So, going over the possible scenarios:
* A site with a service worker is MITMed - in this scenario, the MITM can
replace the SW for the site, as long as the user is on the hostile network.
Once the user gets on a safe network, revalidation would result in the
original SW taking over.
* A site with no SW is MITMed - the MITM adds a SW. Once the user gets on a
safe network, the SW gets revalidated. Since the response to the
revalidation is a 404, the hostile SW stays in cache, but it is disabled.
Unless the user gets back to the MITMed network, the SW will remain
disabled forever.
* A site with a service worker is in a captive portal - The site's SW gets
revalidated, and the response is a 404/302 (assuming captive portals don't
return 200 responses). The response is not a valid SW script, and
therefore, the SW is disabled. A few minutes later, the user has entered
their credentials, and can reach the Internets. The browser then
revalidates the SW script again, and gets a 304 response. The SW is then
enabled. That means that the offline experience may get interrupted for a
few moments under captive portals (possibly only if the SW registration was
done on a network that the browser deems "suspicious"), but it will get
resumed once the browser can get through to validate the SW.

Obviously, full TLS provides better user protection (against any kind of
MITM), but I think the above scheme can be used to mitigate SW-specific
MITM threats, and enable SW without TLS.

Thoughts?
Yoav
Received on Monday, 19 May 2014 07:13:09 UTC
