
Re: Re[4]: Some reasons why mandating use of SSL for HTTP is a really bad idea

From: Phillip Hallam-Baker <hallam@gmail.com>
Date: Wed, 18 Jul 2012 08:08:51 -0400
Message-ID: <CAMm+LwhjQK2nBovFraadk6L=j9MDbYuKS9Ou=NZ6GsQSUjsBTQ@mail.gmail.com>
To: Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc: Mike Belshe <mike@belshe.com>, "Adrien W. de Croy" <adrien@qbik.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
On Wed, Jul 18, 2012 at 4:02 AM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:
> In message <CABaLYCuyjE8TRHHk2O0sY4ocB=LmK7S6LrDYc1=Jz3NFU1df+w@mail.gmail.com>
> , Mike Belshe writes:
>>I'm taking the position of the user.  I want the user to be able to know if
>>someone is spying on them in all cases.
> This has been a long thread already, so I will try to make this short
> and to the point.
> 1. I think Mike has a very valid point here, but it's not our job,
>    and we could not solve the problem if it were.
>    I would express it slightly differently than Mike: "If it looks
>    like privacy, the user should be able to know definitively if
>    it is end-to-end privacy or not."
>    I'm not a card-carrying cryptographer, but it's my clear perception
>    that the currently perverted and corrupt CA system deliberately
>    makes that ideal very hard if not impossible.

You state that you have no expertise in the area and then opine that a
group of people are corrupt.

That is me you are talking about; I have been in Web security longer
than anyone else.

The IETF has proposed many security solutions over the years. HTTPS is
the only one that has achieved ubiquity. Every one of the alternatives
to CAs has been tried multiple times, with nothing like the same success.

The problem of SSL interception proxies is a lot more complex than
people here seem to imagine. There are industries that are required to
keep a log of ALL communications, brokerage firms for example.

Until recently nobody in the CA or browser world had much sympathy for
the problem and it was largely ignored. The companies involved found
their own solution to the problem: adding an extra root into the trust
store.

When Syria recently attempted a MITM interception on FB, Google and
others, they just used a self-signed root and hoped people would add it
to the trust store. It probably worked to the extent they had hoped,
even though it was discovered within a couple of days.
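In rough terms, the trust decision at issue here reduces to whether the presented certificate chain terminates in a root the client already trusts. A toy sketch of that logic (all names are hypothetical; real clients validate full X.509 chains, signatures and all):

```python
# Toy model of a client trust store; real validation checks signatures,
# expiry, name constraints, etc., not just the root's presence.
trust_store = {"GlobalRoot-A", "GlobalRoot-B"}  # roots shipped with the client

def chain_trusted(chain, store):
    """Accept a chain iff its final (root) element is in the local trust store."""
    return chain[-1] in store

# A self-signed interception root is rejected out of the box...
assert not chain_trusted(["site-cert", "MITM-Root"], trust_store)

# ...which is why interception deployments add their own root to the store:
trust_store.add("CorpProxy-Root")
assert chain_trusted(["site-cert", "CorpProxy-Root"], trust_store)
```

This is why adding a root is such a consequential act: once present, it makes the interceptor's chains indistinguishable (to this check) from any other trusted chain.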

There is a long discussion on the Mozilla list about whether we should
address this problem with prohibition or legalize the brothel so that
we can ensure there are appropriate safeguards. That is primarily a
policy argument, and neither choice is perfect.

> 2. There are legitimate (or legitimized) reasons for intercept.
>    In the cases where the intercept is not covert, it is in the
>    interest also of the interceptor, that no unnecessary weaknesses
>    are introduced.
>    This indicates an operating mode where the user is told "You
>    have privacy only as far as SNOOP_PROXY, what happens after that
>    depends on what SNOOP_PROXY does."
>    Or the probably more likely case: "You have no privacy from here
>    to CORP_PROXY, but CORP_PROXY claims that you have privacy from
>    there to the destination."
>    If I'm an employee, or inmate, that's just the rules of the game,
>    but the important thing is that as a user I'm precisely and
>    correctly communicated the reality of the game.
>    I believe this is more or less just a matter of allowing
>         GET https://blablabla HTTP/1.1
>    to proxies, but I defer fully to Adrian on this.
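The request form PHK refers to is the absolute-form request-target, where the proxy sees the full URL, as opposed to a CONNECT tunnel where it sees only host and port. A sketch of the two on-the-wire messages (hypothetical host):

```python
# Absolute-form: the proxy sees the full URL and can mediate the request.
absolute_form = (
    "GET https://example.com/index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
)

# CONNECT: the proxy learns only host:port, then blindly tunnels TLS bytes.
connect_form = (
    "CONNECT example.com:443 HTTP/1.1\r\n"
    "Host: example.com:443\r\n"
    "\r\n"
)
```

Allowing the first form for https URLs is what would let a non-covert proxy participate in the exchange rather than merely relaying an opaque tunnel.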

I would prefer to keep TLS and HTTP the same and move the PKIX
processing out of the client platform completely. This is how it is
done in government systems, the client uses SCVP.

> 3. There are objective indications that TLS is not always an advantage.
>    Some have been mentioned already, I'll just add another:  High
>    reliability safety-of-life critical systems, such as power distribution,
>    air traffic control and emergency services.
>    Putting a 2 meter airgap around such a system is currently
>    considered the best practice way to implement both high reliability
>    and high security.
>    The added failure modes of certificates, which are opaque and
>    hard/impossible to debug & correct on the kind of timescales
>    relevant (< 3 minutes) are not welcome.  SSH is tolerated, but
>    with very strict password registration and disclosure policies,
>    and often with NULL block-ciphers, to make network flight recorders
>    usable.
>    Mandating TLS in HTTP/2.0 will effectively force these installations
>    to stick with HTTP/1.1, in order to be able to do accident
>    investigations.


> 4. It is not our job, and it is counter productive.
>    Cryptography is a layer 8-10 problem more than anything on the
>    internet.
>    If we tangle HTTP/2.0 into layers 8-10, it will become subject
>    for discussion at diplomatic levels, will raise issues about
>    ITAR rules and will meet a lot of push-back from all sorts of
>    shady agendas, not to mention a damn good headline in a clueless
>    press.
>    Trying to mandate TLS with HTTP/2.0 would therefore be a major
>    drag on HTTP/2.0's adoption and deployment and  counterproductively
>    delay the benefits and improvements HTTP/2.0 (will) bring.
> Conclusion:
>    Make it easy to do, but don't mandate it.
>    You always get better results by delivering good tools, than
>    rigid policies.


Website: http://hallambaker.com/
Received on Wednesday, 18 July 2012 12:09:23 UTC
