Re: HTTPS, proxy environment variables and non-CONNECT access

On 16 July 2013 20:35, Nicolas Mailhot <nicolas.mailhot@laposte.net> wrote:
>
> Le Mar 16 juillet 2013 08:08, Robert Collins a écrit :
>> So [fairly recently] squid and other proxies can retrieve resources
>> over HTTPS. However user agents generally don't take advantage of
>> this, instead using CONNECT, to do end to end encryption.
>
> Is there a spec somewhere on how it's supposed to work ?

Just HTTP/1.1 - http://www.apps.ietf.org/rfc/rfc2616.html#sec-5.1.2
-  The absoluteURI form is REQUIRED when the request is being made to
a proxy. The proxy is requested to forward the request or service it
from a valid cache, and return the response. Note that the proxy MAY
forward the request on to another proxy or directly to the server
specified by the absoluteURI.

Ask for an HTTPS URI in absolute form and the proxy should fetch it
for you. Early HTTP proxies couldn't make requests to HTTPS sites, nor
could they listen on HTTPS themselves; both limitations were fixed
some time ago.
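
As a rough sketch of what that looks like on the wire, using Python's
http.client (proxy.example:3128 is a made-up proxy that is able to
fetch https:// URLs itself):

    import http.client

    # Plain TCP to the proxy; the request line carries the absolute URI,
    # so the proxy terminates TLS to the origin on our behalf.
    conn = http.client.HTTPConnection("proxy.example", 3128)
    conn.request("GET", "https://pypi.python.org/simple/",
                 headers={"Host": "pypi.python.org"})
    resp = conn.getresponse()
    print(resp.status, resp.getheader("Via"))

The request line sent to the proxy is
"GET https://pypi.python.org/simple/ HTTP/1.1" - i.e. the absoluteURI
form quoted above, just with an https scheme.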

>> I'm sure that implementing this will start to raise issues like 'how
>> do we signal client certificates indirectly' and so on, which *will*
>> be HTTP protocol issues, but one step at a time.
>
> Some more questions:
> 1. How do you protect the client <-> proxy link?

It's up to the client, I think. Some obvious options:
 - use an HTTPS connection to the proxy (sketched below)
 - use mandatory IPsec within your network
 - have the proxy be on your local machine
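
A rough sketch of the first option, again with hypothetical names (the
proxy would have to be listening for TLS on proxy.example:3129):

    import http.client, ssl

    # TLS to the *proxy*; the absolute-form request and its response
    # travel inside that TLS session, so the client <-> proxy hop is
    # protected even though the proxy dereferences the https URL for us.
    ctx = ssl.create_default_context()
    conn = http.client.HTTPSConnection("proxy.example", 3129, context=ctx)
    conn.request("GET", "https://pypi.python.org/simple/",
                 headers={"Host": "pypi.python.org"})
    print(conn.getresponse().status)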

> 2. how do you send auth from the client to the proxy in a secure way
> without it leaking them outside?

I think you mean 'if the origin is an HTTPS origin which uses
replayable (e.g. Basic) auth, how do you prevent that leaking' [as
opposed to 'how do you authenticate to the proxy itself']. For that
you'd want the client to switch to CONNECT if the client -> proxy link
is subject to observation and any of the following are true:
 - there are HTTPS-only cookies to send
 - it wants to try a replayable auth mechanism

But the response might deliver an HTTPS-only cookie, which suggests
that we should mandate only doing this when either a) the client
doesn't care about having some traffic observed, or b) the client ->
proxy link is secured (at the transport level, via IPsec, etc).
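
In pseudocode the client-side policy comes out roughly like this (the
function and parameter names are mine, not from any spec):

    def may_use_absolute_form(origin_is_https, proxy_link_secure,
                              observation_is_acceptable):
        # Plain http:// requests already go through the proxy in
        # absolute form today.
        if not origin_is_https:
            return True
        # For https:// origins, only skip CONNECT when the hop to the
        # proxy is itself secured (TLS, IPsec, loopback), or the client
        # has decided it doesn't mind that traffic being observed.
        return proxy_link_secure or observation_is_acceptable

Anything that fails that test falls back to CONNECT, exactly as today.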

> (some http_proxy users just add proxy
> auth headers everywhere even when the proxy didn't ask for them, in basic
> auth, so they are leaking secrets to the outside like sieves)

Basic auth is terrible in so many ways :).

> 3. more generally how are the client and proxy supposed to distinguish
> between client <-> proxy signaling and client <-> web site signaling ?

That's covered by HTTP already: Proxy-Authenticate, Proxy-Authorization
and the other hop-by-hop headers are consumed by the proxy, while
end-to-end headers are forwarded on towards the origin server.
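
For example (proxy.example and the credentials are invented), the two
kinds of signaling sit side by side in a single request:

    import base64, http.client

    creds = base64.b64encode(b"alice:secret").decode()
    conn = http.client.HTTPConnection("proxy.example", 3128)
    conn.request("GET", "https://example.com/", headers={
        "Host": "example.com",
        # hop-by-hop: consumed by the proxy, not forwarded upstream
        "Proxy-Authorization": "Basic " + creds,
        # end-to-end: forwarded to the origin server
        "User-Agent": "example-client/0.1",
    })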

> 4. Is proxy chaining possible? (I've seen proxies used both to authorize
> connections to the outside, and as gateways for connections inside. So how
> can a poor user that needs access to a resource protected by an
> Internet-to-inside proxy traverse his own inside-to-Internet gateway to
> reach it?)

There's no facility for tunnelling *auth* from a client through
multiple proxies in HTTP today. It might be interesting to contemplate
but I think it's orthogonal.

Remember that I'm proposing an optional thing folk can opt into, when
it makes sense for their environment.

For instance - I have a proxy on my laptop; traffic to it is entirely
unobservable by other people, and it would make an excellent shared
cache for the growing number of sites (e.g. pypi) that are https-only.

-Rob

Received on Tuesday, 16 July 2013 09:53:04 UTC