Re: Documenting Timing Attacks in Rendering Engines

Hi Charles,

Thanks for your thoughts on this. Indeed, we (at least Dean and I) believe the generic issue is exactly as you describe. Generally speaking, as soon as the renderer processes content in a time pattern that depends on the nature of that content, a timing attack is possible. Of course, things like filters or shaders, which can influence the rendering time, make things worse.

I also agree that measuring the bandwidth of the timing channel and demonstrating the attack would be good and interesting. But even if the weakness turned out to be small, I think it would still be a concern.

My take on the issue is that we should match the shader's origin against the origin of the shaded content and apply restrictions there. Then, we could allow CORS to lift those restrictions.

For the record, here are the points we presented to the FX group during the last face-to-face:

- Timing attacks rely on inferring rendered content from the time it takes to render it
- Timing attacks have been demonstrated in WebGL
- There are differences between CSS shaders and WebGL (different timing mechanisms)
- Possible solutions:
     - CORS
     - Mandate that UAs not give out timing information about rendered content (e.g. obfuscate the requestAnimationFrame timestamps)
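To make the second option concrete, here is a rough sketch of what "obfuscating" the timestamps could look like: quantizing the value handed to requestAnimationFrame callbacks so it only advances in coarse steps, reducing the resolution available to an attacker. The 5ms grain is an invented example, not a proposed spec value.

```javascript
// Hypothetical sketch: coarsen a high-resolution timestamp before
// exposing it to script. The grain size (5ms) is purely illustrative.
function quantize(timestampMs, grainMs = 5) {
  // Snap the timestamp down to the nearest multiple of grainMs.
  return Math.floor(timestampMs / grainMs) * grainMs;
}

console.log(quantize(16.7)); // 15
console.log(quantize(33.4)); // 30
```

The trade-off, of course, is that legitimate animation code loses timing precision too, which is part of why we decided to explore CORS first.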

We decided to explore CORS at this time.

From: Charles Pritchard <<>>
Date: Thu, 8 Dec 2011 23:36:43 -0800
To: "<>" <<>>
Subject: Documenting Timing Attacks in Rendering Engines

It's a good time to discuss timing attacks in rendering engines.

TL;DR: When an implementation executes more quickly or more slowly
depending on underlying content, it is vulnerable to timing attacks.
That's not alarming or out of the ordinary, but it is important to
understand what the boundaries are for those windows of opportunity.


This quiet time of the year seems a good moment to discuss
information exposure through timing attacks on rendering engines.

In brief: when a function takes more or less time depending on its
input, the difference shows up as a delay in the event loop. If that
delay can be timed, hints about the underlying content are exposed.
Given five minutes of collecting hints about whether portions of an
image are light or dark, a malicious attacker can improve their
resolution to the point of having something useful.
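As a rough sketch of the attacker's side of this, suppose per-frame render times have been collected (e.g. as deltas between requestAnimationFrame timestamps) and the goal is to guess a one-bit property of the content, such as whether a filtered region was light or dark. All the names, thresholds, and timings below are invented for illustration; this is not a real exploit.

```javascript
// Hypothetical sketch: classify content by the mean of noisy
// frame-time samples. The numbers are illustrative only.
function meanMs(samples) {
  return samples.reduce((a, b) => a + b, 0) / samples.length;
}

// With enough samples, the means of the two timing distributions
// separate, even though individual frames are noisy.
function guessBit(samples, thresholdMs) {
  return meanMs(samples) > thresholdMs ? 'slow-path' : 'fast-path';
}

// e.g. normal ~16.7ms frames vs frames inflated by a slower code path:
const fastFrames = [16.5, 16.8, 16.6, 16.7, 16.9];
const slowFrames = [21.0, 20.6, 21.3, 20.9, 21.1];
console.log(guessBit(fastFrames, 18)); // 'fast-path'
console.log(guessBit(slowFrames, 18)); // 'slow-path'
```

Repeating this per pixel region is what turns five minutes of hints into an image.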

While we work toward making things better for the developer, we can also
work toward defining the bounds, scientifically, in engineering units,
of what timing attacks exist in the web today. My hope is to determine
non-controversial baselines for the current state of browser security.
It's common in the field of cryptography to describe attacks in
mathematical terms, such as 2^32 or 2^128. I'm not asking for that
precision in the first round, but I want to note that it is common.

Here is an example of a web API report using simple math in the
context of a timing attack: WebSockets and the BEAST vulnerability in
SSL, with an added 2^32 level of security for masking.

These kinds of cryptographic figures are great. There are also more
human metrics: consider the following report on the likelihood that a
user will remain on a site that is running a brute-force attack.

The first attack documents a means by which an author could push cookie
data across a particular frame length to more easily decrypt data. A
counter-measure involves bit-masking, adding a 2^32 level of security
regardless of other counter-measures. The second attack documents a
creative DNS-based attack, in which the browser may cache resources from
one server and then, within the span of minutes, fetch resources from
another. As DNS resolution and HTTP are often decoupled, this change in
destination may not be exposed between the layers.
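For readers unfamiliar with the masking counter-measure mentioned above, here is a sketch of how WebSocket client-to-server masking works (per RFC 6455): each frame's payload is XORed with a fresh random 32-bit key, so an attacker can no longer choose the exact bytes that appear on the wire. The key and payload below are fixed for illustration; a real client draws a new random key per frame.

```javascript
// Sketch of RFC 6455 payload masking: XOR each payload byte with the
// masking key, cycling through the key's 4 bytes. XOR is its own
// inverse, so the same function unmasks.
function maskPayload(payload, key) {
  return payload.map((b, i) => b ^ key[i % 4]);
}

const key = [0x12, 0x34, 0x56, 0x78]; // in practice: crypto-random per frame
const plain = [0x48, 0x65, 0x6c, 0x6c, 0x6f]; // "Hello"
const masked = maskPayload(plain, key);
const unmasked = maskPayload(masked, key);
console.log(unmasked); // round-trips back to the original bytes
```

The 2^32 figure is simply the number of possible masking keys the attacker would have to account for per frame.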

They are great examples, and we can blame Adam Barth for facilitating them.

There's a very simple cache-based timing attack circulating in the news
reports of popular programming blogs and meta-blogs, in which an
attacker simply measures how quickly the browser fetches resources from
servers. If a resource loads quickly, it was probably already cached,
which means you've likely visited that site recently.
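The heuristic behind that attack can be sketched in a few lines: time how long a resource takes to arrive and compare it against a threshold separating "served from cache" from "fetched over the network". The threshold and timings here are invented for illustration.

```javascript
// Hypothetical sketch: classify a load time as cache hit vs network
// fetch. 20ms is an invented threshold, not a measured constant.
function looksCached(loadTimeMs, networkFloorMs = 20) {
  return loadTimeMs < networkFloorMs;
}

// In a browser, the load time might come from timing an Image onload:
//   const t0 = performance.now();
//   img.onload = () => decide(performance.now() - t0);
//   img.src = 'https://example.com/logo.png';
console.log(looksCached(3));   // true  -> likely visited recently
console.log(looksCached(140)); // false -> probably a cold fetch
```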

These are not alarming, new, or particularly interesting issues. But
they are issues. Timing attacks were an issue with the introduction of
WebGL, where an author could access content across domains that they
otherwise shouldn't.

Recently, some WebKit developers brought up a proposal for CSS-based
shaders. These would significantly cut down the cost (in lines of
JavaScript) of developing simple pixel shaders for image [and other]
content. Other developers quickly reminded hopefuls of the hard lessons
learned during WebGL's introduction to the web: WebGL initially allowed
cross-domain images to be used without much restriction, and now relies
on CORS. Essentially the same will be applied to CSS shaders in their
nascent form.

Relevant to FX is the class of attacks being discussed: timing attacks
on rendering engines. What are the current bounds of timing attacks on
various procedures? SVG filters may be optimized depending on context,
and may introduce slight differences in timing. There's nothing wrong
with doing so. But if that timing can be exploited so that it is several
orders of magnitude slower under some conditions, then we are reaching
into a practical attack.

And so I've asked: "Is there reasonable support for investigating timing
issues inside the browser? We have baseline issues..."

This is a simple engineering problem and can be handled by a class of
test cases based on setTimeout and/or requestAnimationFrame.
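A minimal sketch of one such test case: schedule repeated callbacks, record how late each one fires relative to its requested delay, and report the spread. In a real browser harness the scheduling would use requestAnimationFrame; the pure measurement logic below keeps this sketch runnable anywhere, and the numbers are illustrative.

```javascript
// Hypothetical sketch: given a requested timer interval and the deltas
// actually observed between callbacks, compute how late each fired.
function lateness(requestedMs, actualDeltasMs) {
  // Positive = fired late; a content-dependent spike shows up here.
  return actualDeltasMs.map((d) => d - requestedMs);
}

function maxLateness(requestedMs, actualDeltasMs) {
  return Math.max(...lateness(requestedMs, actualDeltasMs));
}

// e.g. a 10ms timer that actually fired after 10.1, 12.4, 10.2, 25.0ms:
// the 15ms outlier is the kind of content-dependent delay a security
// benchmark would flag.
console.log(maxLateness(10, [10.1, 12.4, 10.2, 25.0])); // 15
```

In a browser, the deltas would be collected inside a setTimeout or requestAnimationFrame loop while rendering the content under test, once per condition being compared.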

Some context:

To summarize the thread: Gosh it'd be great to know what the numbers
are... does anyone know the numbers?

This is an interesting class of problem, in some sense requiring that
test-designers make benchmarks for the sake of security, in addition to
their usual purpose of testing for regression and improvement. I like it.


Received on Friday, 9 December 2011 23:45:35 UTC