Re: Semantics of HTTPS

On Thu, Sep 13, 2012 at 8:03 AM, Carl Wallace <> wrote:

> On 9/13/12 7:50 AM, "Willy Tarreau" <> wrote:
> >On Thu, Sep 13, 2012 at 08:59:06PM +1000, Mark Nottingham wrote:
> >> We're getting off track here -- this issue is about the semantics of the
> >> HTTPS scheme, in the context of HTTPbis, not potential future work.
> >
> >OK but it was a proposal to address some people's concern that "https"
> >means "end-to-end" to people while currently at more and more places
> >this is not true anymore.
> >
> >So the idea was to address this specific concern (which is a UI concern
> >in my opinion) by proposing a different scheme in the browser.
> >
> >It looks like it's not a good idea in the end considering some of the
> >points that were made.
> >
> >Going back to https, PHK is right that ends should be clearly defined,
> >at least to the user. In my opinion, https could be end-to-end where
> >one end is the local proxy. All we're dealing with is a matter of trust,
> >which is not a technical thing to debate on but a user choice.
> This gets more complicated where mutual auth is employed and the
> destination server does not want to authenticate the proxy, i.e. e2e
> authentication.  It'd be nice to have a means of allowing a client to
> issue a (short lived) proxy certificate to the proxy to use when
> authenticating to the destination, enabling the destination server to
> authenticate the client by checking the last non-proxy certificate in the
> path.


That would not be 'nice' at all.

I think we need to take a step back and talk about requirements here. In
particular, the 'client side proxy' is driven by an intercept requirement.
If people want a man in the middle modifying, storing, filtering,
censoring or otherwise involving itself in the protocol, that is an
intercept requirement.

Let's not beat about the bush talking about 'proxies'. That only leads to
confusion, since we have server-side proxies as well as client-side ones,
and the two have very different semantics and trust issues.

A very common server-side implementation is to have an SSL accelerator
box as a standalone component in front of a server. That has implications
for the protocol, because it means the HTTP layer cannot expect to see any
information from the SSL layer unless we define an explicit mechanism for
carrying that information from one layer to the other. But it does not
change the trust model: hitting an SSL accelerator box owned by Google
that is talking to a Web server owned by Google is the same thing as far
as trusting Google goes.
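To make the layering point concrete, here is a minimal sketch of how an
SSL-terminating front end might carry TLS-layer facts to the back-end HTTP
server as explicit request headers. The header names used here
(X-Forwarded-Proto, X-TLS-Protocol, X-Client-Cert-CN) are ad-hoc
conventions for illustration, not something any HTTP specification defines:

```python
# Sketch: an SSL accelerator terminating TLS and forwarding the request.
# The HTTP back end sees nothing from the SSL layer unless the accelerator
# copies it into headers explicitly, as below. Header names are ad hoc.

def add_tls_headers(headers, tls_info):
    """Annotate a forwarded request with details from the terminated TLS session.

    headers  -- dict of request headers to pass to the back-end server
    tls_info -- dict describing the TLS session, e.g.
                {"protocol": "TLSv1.2", "client_cert_cn": "alice"}
    """
    forwarded = dict(headers)
    # Tell the back end the original request arrived over TLS.
    forwarded["X-Forwarded-Proto"] = "https"
    forwarded["X-TLS-Protocol"] = tls_info.get("protocol", "unknown")
    # Only present when the client authenticated with a certificate.
    cn = tls_info.get("client_cert_cn")
    if cn is not None:
        forwarded["X-Client-Cert-CN"] = cn
    return forwarded
```

The point is that the back end must simply trust these headers, which only
works because accelerator and server are operated by the same party; the
trust model is unchanged even though the protocol layering is split.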

Client proxies are a completely different beast. In the typical case the
user of the browser and the operator of the proxy are different entities,
so you have issues of notice, knowledge and informed consent (all of which
are topics people hold whole conferences on).

I see the following approaches as possible:

1) Do nothing

Ignore the issue completely, let parties work around the infrastructure as
deployed without making allowance for the requirement.

2) Attempt to disrupt

Attempt to design the protocol so that the intercept requirement cannot be
met in any circumstance. This is actually the traditional IETF approach.
The risk here is that people create their own workarounds and loopholes,
then demand that the infrastructure support them, and the systems that
result do not provide for informed consent.

3) Provide a comprehensive mechanism that is conditioned on informed
consent

If we decide that preventing intercept is not possible, the next best
outcome is to enable intercept but only with informed consent.

If we want to support that particular use case, we have to have the client
delegate its whole process of trust evaluation to the accepted interceptor,
and at minimum constantly inform the user that this has occurred. We may
also decide that we want to inform the service the user is connecting to
that an intercept has occurred.


Received on Thursday, 13 September 2012 13:47:37 UTC