Re: Questions about draft-abarth-mime-sniff-00

From: Adam Barth <w3c@adambarth.com>
Date: Mon, 6 Apr 2009 17:07:16 -0700
Message-ID: <7789133a0904061707o5df2a47p8a0c48c82145c387@mail.gmail.com>
To: Adrien de Croy <adrien@qbik.com>
Cc: Michaeljohn Clement <mj@mjclement.com>, Daniel Stenberg <daniel@haxx.se>, HTTP Working Group <ietf-http-wg@w3.org>
On Mon, Apr 6, 2009 at 4:41 PM, Adrien de Croy <adrien@qbik.com> wrote:
> sure that's the goal.  But what if you get the algorithm wrong?  It's still
> humans designing this right?  If there is an exploit to the algorithm, then
> potentially any browser that uses it is vulnerable.

What if we get the gzip algorithm wrong?  Maybe we should have
implementations use different compression algorithms, just in case
there is an "exploit to the algorithm"?

> It's difficult to foresee the future.  It's also difficult to guarantee that
> the algorithm will be bullet-proof forever and withstand any attack.
>
> The potential down-side if all browsers are found to have a vulnerability is
> difficult to estimate.  It could be enormous.

The security cost of having different user agents use different
algorithms is MUCH greater.  Even mismatches of a single byte can lead
to cracks that an attacker can exploit.  I have previously given an
example of just such a vulnerability in a mismatch between one of the
world's most popular servers and one of the world's most popular
clients.
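To make the single-byte point concrete, here is a toy sketch (the sniffer names and rules are invented for illustration; this is not the actual server/client pair referenced above). Two agents classify "is this HTML?" almost identically, differing only in whether they skip leading whitespace:

```python
def sniffer_a(body: bytes) -> bool:
    """Hypothetical agent A: skips leading whitespace before checking."""
    return body.lstrip().lower().startswith(b"<html")

def sniffer_b(body: bytes) -> bool:
    """Hypothetical agent B: checks the very first bytes only."""
    return body.lower().startswith(b"<html")

# A payload that begins with a single space: one byte of algorithmic
# mismatch is enough for the two agents to disagree about its type.
payload = b" <html><script>alert(1)</script></html>"

assert sniffer_a(payload) is True    # agent A renders it as HTML
assert sniffer_b(payload) is False   # agent B treats it as inert text
```

If a server-side filter uses B's rule to decide what is "safe" while the browser uses A's, the attacker's payload passes the filter yet executes in the browser, which is the class of cross-implementation gap being described.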

I feel like I'm just repeating the argument I made previously.  Do you
have anything more specific to contribute to this discussion beyond a
vague analogy to biological systems?

Adam
Received on Tuesday, 7 April 2009 00:08:07 GMT