
Re: What will incentivize deployment of explicit proxies?

From: Adrien de Croy <adrien@qbik.com>
Date: Thu, 12 Dec 2013 10:00:54 +0000
To: "Yoav Nir" <synp71@live.com>
Cc: "HTTP Working Group" <ietf-http-wg@w3.org>
Message-Id: <em606be600-cd7d-45b6-b8f8-e572ffbcb94a@bodybag>


------ Original Message ------
From: "Yoav Nir" <synp71@live.com>
>
> > And also for the record.
> >
> > Most of my customers would have a big problem with the proposal that
> > connections to the ("trusted") proxy should be over TLS.
>
>It's a must for a decrypting proxy that will do "GET https://";  less 
>so for a proxy that does "CONNECT".
But when a client connects to the proxy for a GET http:// request, would 
that also have to be over a TLS connection?

This is a large proportion of surfing currently.  Moving it to TLS (even 
if only TLS to the proxy) would be a considerable increase in load.
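The two request shapes being contrasted here can be sketched as follows (a minimal illustration; the function name is mine, not from any RFC or the thread):

```python
# Minimal illustration of the two request shapes a client sends to an
# explicit proxy (function name is illustrative only).
def proxy_request_line(url: str) -> str:
    if url.startswith("https://"):
        # https:// via a proxy: open an opaque tunnel with CONNECT,
        # then run TLS inside it (unless the proxy decrypts).
        host = url[len("https://"):].split("/")[0]
        return f"CONNECT {host}:443 HTTP/1.1"
    # http:// via a proxy: absolute-form request line; the proxy
    # sees the full URL and all content.
    return f"GET {url} HTTP/1.1"

print(proxy_request_line("http://example.com/index.html"))
print(proxy_request_line("https://example.com/"))
```

The question above is whether even the first (plain http://) form would now have to travel inside a TLS connection to the proxy.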

I agree the difference between TLS-to-proxy plus GET https:// and MitM 
is not much (the cert is cached after generation anyway).  But currently 
https traffic is only a fraction of total traffic (albeit an increasing 
one, thanks to FB and Google).

>
> > For many of them the proxy is already working the hardware quite hard
> > (either old hardware or high-end). To reduce capacity by 75% or more
> > just by making everything TLS would mean they would all need to go 
>get
> > new or extra hardware for their proxy. I foresee a lot of resistance 
>to
> > this.
>
>Capacity reduction depends on what the proxy is doing. Malware scanning 
>is so onerous that the TLS part will be lost in the noise, because 
>handshakes will be rare. Caching or simple URL filtering is lighter, 
>so the capacity may be reduced by as much as you say.
Actually you'd be surprised at the load from malware scanning as well.  
With a typical content-type based whitelisting policy, only a fraction 
of content is actually scanned.
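A content-type whitelist of the kind described can be sketched like this (the type list and names are illustrative, not any product's actual policy):

```python
# Illustrative content-type whitelist: types on this list are trusted
# and skip the malware scanner, so only a fraction of traffic is
# actually scanned.
SKIP_SCAN = {"image/jpeg", "image/png", "video/mp4", "text/css"}

def needs_scan(content_type: str) -> bool:
    # Strip any media-type parameters (e.g. "; charset=utf-8")
    # before matching against the whitelist.
    media_type = content_type.split(";")[0].strip().lower()
    return media_type not in SKIP_SCAN

print(needs_scan("image/png"))                      # whitelisted type
print(needs_scan("application/octet-stream; x=y"))  # gets scanned
```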

Basic testing when we put in an https reverse proxy showed that an 
encrypted connection took considerably more CPU than a plaintext one; 
probably 10x as much.  So my 75% was optimistic.
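As back-of-envelope arithmetic, the figures in this thread work out as follows (illustrative only; the multipliers are the thread's estimates, not measurements):

```python
# Capacity lost on the same hardware when each connection costs
# `cpu_multiplier` times the CPU of a plaintext one. The 4x and 10x
# inputs are this thread's estimates, not benchmark results.
def capacity_reduction(cpu_multiplier: float) -> float:
    return 1.0 - 1.0 / cpu_multiplier

print(f"{capacity_reduction(4):.0%}")   # 4x CPU  -> 75% capacity loss
print(f"{capacity_reduction(10):.0%}")  # 10x CPU -> 90% capacity loss
```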

>
>MitM proxies already do that much work, except that they also sign fake 
>certificates. Their load might even decrease.
Agreed for current https, but not for current http.

>
> > I don't see why the client needs to auth the proxy inside a private
> > network.
>
>Because people bring all kinds of stuff to the private network. That's 
>a direct result of making computers smaller than this:
>http://www.tcf.ua.edu/Classes/Jbutler/T389/RailroadComputer1967.jpg
>
>If someone (or some bot) fools your computer into using it as a proxy, 
>it gets access to all your HTTP content, and to all your HTTPS 
>metadata, even without being a decrypting proxy. That is why 
>authentication is needed.
We have that problem now for http, so why is no one doing anything 
about it?

I think such problems deserve a solution outside http.

Adrien

>
>Yoav
>
Received on Thursday, 12 December 2013 10:01:03 UTC
