Re: Proposal: "private" execution mode

Hi Eric,

Replies inline.

On 28/11/2013 2:22 PM, Eric Rescorla wrote:
> Oops. Somehow pushed send too early.
>
> Correct version:
> I don't believe that it is practical to build a system which provides the
> desired security properties because it is generally so hard to prevent
> information leakage. (I note that you don't describe how the browser
> would do so.)

I thought I did...? Any method that could be used to leak data (e.g. 
XMLHttpRequest.send()) would throw a permission denied error when 
invoked in "private" mode.

> As an example, let's say that there are a number of common resolution
> sets and the attacker wishes to know whether the browser supports
> each one. And further suppose there's only one device.
>
> Now consider the following filter function:
>
> R = [[1024, 768], [640, 480], ...];
>
> function filter(candidateDevice) {
>    var mask = 0;
>
>    var resolutions = candidateDevice.getResolutions();
>
>    for (var i = 0; i < R.length; i++) {
>      // Compare by value: indexOf() compares array references, so it
>      // would never match against the literals in R.
>      var supported = resolutions.some(function (r) {
>        return r[0] === R[i][0] && r[1] === R[i][1];
>      });
>      if (supported) {
>        mask |= 1 << i;
>      }
>    }
>
>    // Busy-work proportional to mask: this is the timing channel.
>    for (var j = 0; j < mask; j++) {
>      sha1("long string.....");
>    }
>
>    return false;
> }
>
> Now, I haven't leaked any data directly, but I've just created a timing
> channel that reveals the resolution profile. Obviously, this would be
> harder if there were more than one device, but there are ways to deal
> with that as well.
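
To restate the attack for the record: the caller never reads the data 
directly, but from "normal" mode it only has to time the call, along 
these lines (a sketch reusing the getUserMedia() shape from my example; 
reportToServer() is a hypothetical helper):

var start = Date.now();
getUserMedia(filter, function onSuccess(device) {
    // The elapsed time is proportional to mask, so it encodes the
    // resolution profile even though no data was read directly.
    var elapsed = Date.now() - start;
    reportToServer(elapsed); // runs in "normal" mode, so it is permitted
});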

 1. Are there well-understood ways of tackling timing channels? Google
    certainly brings up many whitepapers on the topic. I am no expert,
    but couldn't the browser reduce the effectiveness of this technique
    by invoking onSuccess via setTimeout(onSuccess, random) instead of
    invoking it immediately? (A rough sketch follows this list.)
 2. We could look at further restricting the calls that would be legal
    within "private" mode, though obviously I'd prefer to avoid this if
    possible.
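
For (1), a rough sketch of what I mean, purely illustrative (the 
function name and the 100 ms bound are made up):

// Illustrative only: the browser delivers the result after a random
// delay, blurring how long the filter itself took to run.
function invokeWithJitter(onSuccess, device) {
    var jitterMs = Math.random() * 100; // arbitrary upper bound
    setTimeout(function () {
        onSuccess(device);
    }, jitterMs);
}

I realize random delay only adds noise: an attacker who can repeat the 
measurement can average it away, which is why we may need (2) as well.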

> (Note that I don't think it's practical to have the filter just applied to
> each device individually because then I can't sort which I want to do)

Agreed, which is why I noted that the getUserMedia() definition in my 
example is not part of the proposal. I'm sure we can do better.

Gili

>
> -Ekr
>
>
> On Thu, Nov 28, 2013 at 11:16 AM, Eric Rescorla <ekr@rtfm.com> wrote:
>> On Thu, Nov 28, 2013 at 9:27 AM, cowwoc <cowwoc@bbs.darktech.org> wrote:
>>> Hi,
>>>
>>> I'd like to propose a high-level mechanism for dealing with fingerprinting
>>> risks. Borrowing from HTTP caching terminology, I propose declaring a new
>>> Javascript execution mode called "private". Code executed in "private" mode
>>> would be required to adhere to the following restrictions:
>>>
>>> 1. Any written data would be marked as "private". Data marked as
>>>    "private" may only be accessed under "private" mode. In other
>>>    words, privacy is contagious.
>>> 2. Sensitive methods that may be used to leak data outside the UA
>>>    (e.g. outgoing network requests) MUST throw a permission denied
>>>    error.
>>>
>>> Here is a concrete example of how this may be used:
>>>
>>> A user invokes getUserMedia(filter, onSuccess), where filter and
>>> onSuccess are supplied by the caller. The caller invokes the function
>>> in "normal" mode, but filter gets invoked in "private" mode. Here is
>>> a sample filter:
>>>
>>> function filter(candidateDevice)
>>> {
>>>    var resolutions = candidateDevice.getResolutions();
>>>    var idealResolution = [1280, 720];
>>>    // Compare by value; indexOf() would compare array references.
>>>    return resolutions.some(function (r) {
>>>      return r[0] === idealResolution[0] &&
>>>             r[1] === idealResolution[1];
>>>    });
>>> }
>>>
>>> In the above function, candidateDevice is marked as "private" by the browser
>>> before passing it into the function. WebRTC would invoke onSuccess in
>>> "normal" mode, passing it the first device accepted by the filter. NOTE: the
>>> above definition of getUserMedia() is just an example and is not part of
>>> this proposal.
>>>
>>> There are many ways a browser could implement this proposal. It could mark
>>> data using a sticky "private" bit (as mentioned above). It could "validate"
>>> user functions before invoking them, and throw a permission denied error if
>>> they leak data. Any implementation that prevents "private" data from leaking
>>> is deemed to be compliant.
>>>
>>> While this discussion used getUserMedia() as an example, I believe that
>>> this mechanism could be used to tackle fingerprinting risks across the
>>> entire WebRTC surface. Unlike other proposals, I believe it does so without
>>> compromising the usability of the WebRTC API and user interface.
>>>
>>> Let me know what you think.
>> I don't believe that it is practical to build a system which provides the
>> desired security properties because it is generally so hard to prevent
>> information leakage. (I note that you don't describe how the browser
>> would do so.)
>>
>> As an example, let's say that there are a number of common resolution
>> sets and the attacker wishes to know whether the browser supports
>> each one. Now consider the following filter function:
>>
>> R = [[1024, 768], [640, 480], ...];
>>
>> function filter(candidateDevice) {
>>    mask = 0;
>>
>>    var resolutions = candidateDevice.getResolutions();
>>
>> }

Received on Thursday, 28 November 2013 19:45:42 UTC