
Re: new version trusted-proxy20 draft

From: Paul Ferguson <fergdawgster@mykolab.com>
Date: Mon, 24 Feb 2014 15:06:18 -0800
Message-ID: <530BD06A.9070408@mykolab.com>
To: Salvatore Loreto <salvatore.loreto@ericsson.com>
CC: HTTP Working Group <ietf-http-wg@w3.org>

On 2/24/2014 11:57 AM, Salvatore Loreto wrote:

> On Feb 20, 2014, at 2:40 AM, William Chan (陈智昌) <willchan@chromium.org> wrote:
>> On Wed, Feb 19, 2014 at 1:17 PM, Salvatore Loreto
>> <salvatore.loreto@ericsson.com> wrote:
>>> On Feb 19, 2014, at 7:09 PM, William Chan (陈智昌) <willchan@chromium.org> wrote:
>>>> Yeah, I'd like to see the "secure proxy" proposal separated out from
>>>> the "trusted proxy" proposal. Let's move forward on the "secure proxy"
>>>> one. I think the "trusted proxy" proposal is more complicated.
>>> I agree,
>>> and the draft really is proposing a "secure proxy" solution
>>> in line with your definition of "secure proxy";
>>> indeed, we are only proposing that the proxy be able to ask the user's
>>> consent to opt in for http:// resource traffic
>> Let's be clear, these are two different things. There's "secure proxy"
>> which is securing the connection between the proxy and the client. I'm
>> supportive of standardizing this. Then there's this opting into
>> allowing http:// resources to be sniffed by signaling it via ALPN.
>> What's the value proposition here? Why not issue the request to the
>> proxy if you want to let it see it, just like we do for configured
>> HTTP proxies?
> The value proposition here is that the user-agent does not have to be configured to use the proxy, but is still able to take advantage of the benefits it can provide.
> Think about the situation where you're on vacation in a remote area with limited network resources. You wish to download an application.  The network you're on has a caching proxy with the application cached.  You are able to download the application faster, without tying up the resources in the remote location.  That's the value proposition.
> I have also tried to explain those benefits here [1].
> And yes, the response is "well, why not just configure the user-agent to use the caching proxy?" Technically, this suggestion is completely correct, but from a practical standpoint it makes no sense at all: users will not manually configure the addresses of HTTP proxies.
> What we're proposing here is providing better security for the user-agent (all content encrypted), preserving existing functionality (caching proxies, virus scanning proxies, etc), and providing a mechanism where the user is completely aware of any entity in the middle of their normal "http://" traffic.  
> HTTPS traffic is still there, providing private end-to-end encryption.

I do not believe that to be true (end-to-end encryption) if a proxy is
stripping the TLS layer and then reapplying it. Or do I misunderstand?

If that is the case, then isn't the data in clear text at the proxy,
between decryption and re-encryption?
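For context, the opt-in signal being debated above rides on ALPN during the TLS handshake. A minimal client-side sketch follows; the "trusted-proxy" token is a hypothetical placeholder of mine, not an identifier defined in the draft:

```python
# Illustrative sketch only: a client advertising protocol tokens via
# ALPN in the TLS handshake. "trusted-proxy" is a hypothetical
# placeholder token, not the identifier from the trusted-proxy20 draft.
import ssl

ctx = ssl.create_default_context()
# Offer the hypothetical opt-in token first and ordinary HTTP/1.1
# second; a proxy the user has consented to could select the former.
ctx.set_alpn_protocols(["trusted-proxy", "http/1.1"])

# The peer's choice is only visible after a real handshake, via
# SSLSocket.selected_alpn_protocol(); no network I/O happens here.
```

Note that ALPN only negotiates a protocol label; by itself it says nothing about whether the traffic is in clear text at the proxy.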

- ferg

> br
> Sal
> [1] http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/0602.html

Paul Ferguson
VP Threat Intelligence, IID
PGP Public Key ID: 0x54DC85B2
Received on Monday, 24 February 2014 23:06:52 UTC
