
Re: What will incentivize deployment of explicit proxies?

From: Yoav Nir <synp71@live.com>
Date: Thu, 12 Dec 2013 13:45:51 +0200
Message-ID: <BLU0-SMTP441A036D0A27379ECFD54A7B1DC0@phx.gbl>
To: Adrien de Croy <adrien@qbik.com>
CC: HTTP Working Group <ietf-http-wg@w3.org>
On 12/12/13 12:00 PM, Adrien de Croy wrote:
> ------ Original Message ------
> From: "Yoav Nir" <synp71@live.com <mailto:synp71@live.com>>
>>
>> > And also for the record.
>> >
>> > Most of my customers would have a big problem with the proposal that
>> > connections to the ("trusted") proxy should be over TLS.
>>
>> It's a must for a decrypting proxy that will do "GET https://";  less 
>> so for a proxy that does "CONNECT".
> but when a client makes a connection to the proxy for a GET http:// 
> would that also be over a TLS connection?

I think (and this is not an opinion about existing things - it's about 
future extensions) the client connects to the proxy either with or 
without TLS, and then:

  * Without TLS:
      o GET http://  works
      o CONNECT  works if HTTPS inspection is disabled or voluntary,
        denied if HTTPS inspection is mandatory
      o GET https://  is denied
  * With TLS:
      o GET http://  works, although policy may cause it to be denied to
        save resources. This only gives you privacy from your
        co-workers, not the Internet, which is kind of weird.
      o CONNECT  works if HTTPS inspection is disabled or voluntary,
        denied if HTTPS inspection is mandatory
      o GET https://  works

So you might end up with two connections, one for stuff that needs 
inspection and the other for stuff that doesn't.
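The matrix above can be expressed as a small decision function. This is a hypothetical sketch; the function and parameter names are mine, not from any proposal or implementation:

```python
def decide(method, scheme, client_tls, https_inspection):
    """Return 'allow' or 'deny' for a request arriving at the proxy.

    client_tls: whether the client-to-proxy hop is over TLS.
    https_inspection: one of 'disabled', 'voluntary', 'mandatory'.
    """
    if method == "CONNECT":
        # A tunnel bypasses inspection, so it is refused when
        # HTTPS inspection is mandatory (with or without TLS to proxy).
        return "deny" if https_inspection == "mandatory" else "allow"
    if scheme == "http":
        # Plain GET http:// works either way (policy may still deny
        # it over TLS to save resources).
        return "allow"
    if scheme == "https":
        # GET https:// means the proxy sees plaintext, so it requires
        # TLS on the client-to-proxy hop.
        return "allow" if client_tls else "deny"
    return "deny"
```

A client following this policy may well hold two proxy connections at once, one TLS and one plaintext, routing each request to whichever hop the policy requires.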

> This is a large proportion of surfing currently.  To move this to TLS 
> (even if only TLS to proxy) will be a considerable increase in load.
> I agree difference between TLS to proxy + GET https:// vs MitM is not 
> much (cert is cached after generation anyway).  But currently https 
> traffic is only a fraction of total traffic (albeit increasing thanks 
> to FB and Google).

I think FB and Google will lead the way. Before Google moved webmail to 
HTTPS, the others made you do your mail in the clear. Now they're all 
HTTPS. When Google moves YouTube to HTTPS (which some Google people said 
they were planning), could the other content providers be far behind?
>>
>> > For many of them the proxy is already working the hardware quite hard
>> > (either old hardware or high-end). To reduce capacity by 75% or more
>> > just by making everything TLS would mean they would all need to go get
>> > new or extra hardware for their proxy. I foresee a lot of 
>> resistance to
>> > this.
>>
>> Capacity reduction depends on what the proxy is doing. Malware 
>> scanning is so onerous that the TLS part will be lost in the noise, 
>> because handshakes will be rare. Caching or simple URL filtering is 
>> lighter, so the capacity may be reduced by as much as you say.
> Actually you'd be surprised about load from malware scanning as well.  
> With normal content-type based whitelisting policy, only a fraction of 
> content is actually scanned.
> Basic testing when we put in an https reverse proxy showed an 
> encrypted connection took considerably more CPU than a plaintext one.  
> Probably 10x as much.  So my 75% was optimistic.

My opinion may be skewed by the setup in my company. Since we develop 
it, all manner of scanning is enabled on our gateway (far more than a 
customer would do), and the CPU resources spent on scanning for malware 
and identifying bots dwarf those spent on encryption and decryption.

>> MitM proxies already do that much work, except that they also sign 
>> fake certificates. Their load might even decrease.
> Agree for current https, but not for current http.
>>
>> > I don't see why the client needs to auth the proxy inside a private
>> > network.
>>
>> Because people bring all kinds of stuff to the private network. 
>> That's a direct result of making computers smaller than this:
>> http://www.tcf.ua.edu/Classes/Jbutler/T389/RailroadComputer1967.jpg
>>
>> If someone (or some bot) fools your computer to use it as a proxy, it 
>> gets access to all your HTTP content, and all your HTTPS meta-data, 
>> even without being a decrypting proxy. That is why authentication is 
>> needed.
> We have that problem now then for http, why is no one doing anything 
> about it?
> I think such problems deserve a solution outside http.
>
Regardless of how you configure it, you'll need to authenticate the 
proxy; otherwise you're vulnerable to all kinds of attacks. It's one of 
those threats that you can decide is not worth the effort to fix, 
because it's rare, the cost of authentication is onerous, or the 
meta-data about HTTPS is not all that interesting, but the problem is 
still there.
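As a rough illustration of what "authenticating the proxy" means on the wire, here is a sketch of a client opening a verified TLS connection to an explicit proxy before sending any requests. The host name and port are placeholders; this uses Python's standard ssl module, and is only a sketch, not a reference for any proposal:

```python
import socket
import ssl

# A default context verifies the certificate chain against the system
# trust store and checks the host name, which is exactly the
# "authenticate the proxy" step discussed above.
ctx = ssl.create_default_context()
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

def connect_to_proxy(host="proxy.example.net", port=3128):
    """Open a TLS connection to the proxy, raising on auth failure."""
    raw = socket.create_connection((host, port))
    # wrap_socket performs the handshake; it raises ssl.SSLError (or
    # ssl.CertificateError) if the proxy cannot authenticate itself,
    # so an impostor proxy never sees a single request.
    return ctx.wrap_socket(raw, server_hostname=host)
```

Without the verification step, anything that tricks the client into using it as a proxy sees all HTTP content and HTTPS meta-data, which is the attack described above.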

Yoav



Received on Thursday, 12 December 2013 11:46:19 UTC
