
Documenting Timing Attacks in Rendering Engines

From: Charles Pritchard <chuck@jumis.com>
Date: Thu, 08 Dec 2011 23:36:43 -0800
Message-ID: <4EE1BA8B.5030206@jumis.com>
To: "public-fx@w3.org" <public-fx@w3.org>

It's a good time to discuss timing attacks in rendering engines.

TL;DR: When an implementation executes more quickly or more slowly 
depending on underlying content, it is vulnerable to timing attacks. 
That's not alarming or out of the ordinary, but it is important to 
understand what the boundaries are for those windows of opportunity.

...


It seems a good time, in this quiet part of the year, to discuss 
information exposure through timing attacks on rendering engines.

In brief: when a function takes more or less time depending on its 
input, the difference shows up as a delay in the event loop. If that 
delay can be timed, it exposes hints about the underlying content. If I 
have five minutes to collect hints about whether portions of an image 
are light or dark, I can, as a malicious attacker, improve my resolution 
to the point where I might have something useful.
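The measurement itself is trivial to write. Here is a minimal, hypothetical sketch in plain JavaScript; the two work functions are stand-ins for content-dependent rendering costs, not any real browser internals:

```javascript
// Hypothetical sketch: timing a variable-cost operation from script.
// If the operation's cost depends on secret content (e.g. pixel values),
// the measured duration leaks a hint about that content.
function timeOperation(work) {
  const start = performance.now();
  work(); // variable-cost operation under test; in a browser this could
          // be a style change that forces a filter or shader pass
  return performance.now() - start;
}

// Stand-ins for a "cheap" vs a "costly" rendering path:
let sink = 0;
const cheap  = timeOperation(() => { for (let i = 0; i < 1e3; i++) sink += i; });
const costly = timeOperation(() => { for (let i = 0; i < 1e7; i++) sink += i; });
```

Repeated over many frames (those five minutes of collecting hints), the attacker averages away noise and recovers content bit by bit.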

While we work toward making things better for the developer, we can also 
work toward defining, scientifically and in engineering units, the 
bounds of the timing attacks that exist on the web today. My hope is to 
establish non-controversial baselines for the current state of browser 
security. It's common in the field of cryptography to describe attack 
strength in mathematical terms such as 2^32 or 2^128. I'm not asking for 
that precision in the first round, but I want to note that it's common.

Here is an example of a web API report that uses simple math in the 
context of a timing attack: WebSockets and the BEAST vulnerability in 
SSL, with masking adding a 2^32 level of security: 
http://www.educatedguesswork.org/2011/09/security_impact_of_the_rizzodu.html

These kinds of cryptographic figures are great. There are also more 
human metrics. Consider the following report on the likelihood that a 
user will remain on a site that is running a brute-force attack: 
http://www.adambarth.com/papers/2007/jackson-barth-bortz-shao-boneh.pdf

The first attack documents a means by which an author could push cookie 
data across a particular frame length to decrypt data more easily. A 
counter-measure involves bit-masking, which adds a 2^32 level of 
security regardless of other counter-measures. The second documents a 
creative DNS-based attack, in which the browser may cache resources from 
one server and then, within the span of minutes, fetch resources from 
another. Because DNS resolution and HTTP are often decoupled, this 
change of destination may not be exposed between the layers.

They are great examples, and we can blame Adam Barth for facilitating them.

There's a very simple cache-based timing attack circulating in the news 
sections of popular programming blogs and meta-blogs, in which an 
attacker simply measures how quickly the browser fetches resources from 
servers. If you've cached content from a particular server, you've 
likely visited that site recently.
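In a browser the attacker times real fetches (an image load, say). As a hypothetical illustration of the inference alone, here is a self-contained simulation in which a Set stands in for the HTTP cache and a busy loop stands in for the network round trip:

```javascript
// Hypothetical simulation of the cache-timing heuristic. A real attack
// times actual resource fetches; here a Set stands in for the HTTP
// cache and a busy loop stands in for the network round trip.
const cache = new Set();
let sink = 0;

function timedLoad(url) {
  const start = performance.now();
  if (!cache.has(url)) {
    for (let i = 0; i < 5e6; i++) sink += i; // simulated network fetch
    cache.add(url);
  }
  return performance.now() - start;
}

const firstLoad  = timedLoad('https://example.com/logo.png'); // miss: slow
const secondLoad = timedLoad('https://example.com/logo.png'); // hit: fast

// The attacker never sees the cache directly -- only the timing:
const probablyVisited = secondLoad < firstLoad;
```

The point is that the cache state is private, but its timing shadow is not.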

These are not alarming, new, or particularly interesting issues. But 
they are issues. Timing attacks were an issue with the introduction of 
WebGL, through which an author might access content across domains that 
they otherwise shouldn't.


Recently, some WebKit developers brought up a proposal for CSS-based 
shaders. These would significantly cut the cost (in lines of JavaScript) 
of developing simple pixel shaders for image [and other] content. Other 
developers quickly reminded hopefuls of the hard lessons learned in 
WebGL's introduction to the web. WebGL initially allowed cross-domain 
images to be used without much restriction; now, WebGL relies on CORS. 
Essentially the same will be applied to CSS shaders in their nascent 
form.

Relevant to FX is the class of attacks being discussed: timing attacks 
on rendering engines. What are the current bounds of timing attacks on 
various procedures? SVG filters may be optimized depending on context, 
and those optimizations may introduce slight differences in timing. 
There's nothing wrong with doing so. But if that timing can be exploited 
such that an operation runs several orders of magnitude slower under 
certain conditions, then we are approaching a practical attack.

And so I've asked: "Is there reasonable support for investigating timing 
issues inside of the browser? We have baseline issues..."

This is a simple engineering problem and can be handled by a class of 
test cases based on setTimeout and/or requestAnimationFrame.
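One such test case might be sketched as follows; the busy loop is a placeholder for the rendering operation being benchmarked, and in a browser, requestAnimationFrame timestamps could replace the zero-delay timeout:

```javascript
// Hypothetical test-case skeleton: schedule a zero-delay timeout, occupy
// the main thread with the operation under test, and measure how late
// the timeout actually fires. That extra delay is the operation's cost
// as an attacker would observe it.
function measureBlocking(work, report) {
  const scheduled = performance.now();
  setTimeout(() => {
    report(performance.now() - scheduled); // how long the task was held up
  }, 0);
  work(); // runs to completion before the timeout callback can fire
}

let sink = 0;
measureBlocking(
  () => { for (let i = 0; i < 1e6; i++) sink += i; },
  (delay) => {
    // A security benchmark would record this distribution across inputs
    // and flag operations whose timing varies with cross-origin content.
    console.log(`event loop blocked for ~${delay.toFixed(2)} ms`);
  }
);
```

Run across many inputs, the interesting output is not any single number but whether the distribution shifts with content the page shouldn't be able to read.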


Some context:

http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html
https://lists.webkit.org/pipermail/webkit-dev/2011-December/018763.html
https://lists.webkit.org/pipermail/webkit-dev/2011-December/018861.html
https://lists.webkit.org/pipermail/webkit-dev/2011-December/018864.html

To summarize the thread: Gosh it'd be great to know what the numbers 
are... does anyone know the numbers?



This is an interesting class of problem, in some sense requiring that 
test-designers make benchmarks for the sake of security, in addition to 
their usual purpose of testing for regression and improvement. I like it.


-Charles
Received on Friday, 9 December 2011 07:37:09 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 9 December 2011 07:37:14 GMT