Re: protocol support for intercepting proxies

From: Adrien de Croy <adrien@qbik.com>
Date: Mon, 18 Jun 2007 18:24:53 +1200
Message-ID: <46762535.5020001@qbik.com>
To: HTTP Working Group <ietf-http-wg@w3.org>

Travis Snoozy wrote:
> On Mon, 18 Jun 2007 15:15:09 +1200, Adrien de Croy <adrien@qbik.com>
> wrote:
>> Travis Snoozy wrote:
>> <snip>
>> None of this addresses the fact that
>> a) intercepting proxies exist, and exist currently unsupported by the
>> spec
> "Intercepting" is merely TCP/IP sleight-of-hand to misdirect/redirect
> traffic at the network layer. Because the sleight-of-hand is invisible
> to the UA (and any other clients downstream of the invisible proxy), it
> has no way of knowing if there's _really_ an invisible proxy between it
> and the origin server, or if the origin server or some other
> intermediary just wants to hijack the user's session.
> Scenario 1 ("what we want"):
> UA:    Hey, I want <origin>!
> Proxy (pretending to be <origin>):
>        I'm not really <origin>; you need to add <proxy> as a
>        normal HTTP proxy to get there.
> UA:    Uh... is that okay, user?
> User:  ... suuure?
> Scenario 2 ("what we don't want"):
> UA:    Hey, I want <origin>!
> <origin>:
>        Um... I'm not really <origin>; you need to add <attacker>
>        (totally not a compromised server!) as a proxy to get
>        there.
> UA:    Uh... is that okay, user?
> User:  ... suuure?
I think this has been dragged a bit off topic - I'm sure it's my fault
for making a suggestion as to "how" instead of sticking to the "why".

The thing to resolve first, of course, is the why: is this enough of a
problem to bother doing anything about?  The what and the how come after.

My opinion, based on fielding support queries on this for the last 12
years, is that it is a significant problem for users of HTTP.

And what are we here for?  Why do anything to HTTP at all if it isn't
ultimately going to benefit the users of HTTP?

> So, money question: how do we tell these two scenarios apart? That is
> the problem that needs to be solved in a secure, airtight fashion. It's
> not intractable (the immediate thing that comes to mind is allowing
> the redirect to affect the next request on the same address only --
> which smells exactly like 305), but it's certainly unpleasant. It may
> be more work than just instructing the client to configure their
> browser to use the appropriate proxy via a 5xx message.
or 4xx message as I originally also suggested.
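To make the quoted idea concrete, here is a minimal sketch (Python, my own
illustration, not anything proposed verbatim in the thread) of how a UA
might honour a 305-style "Use Proxy" response while confining its effect
to the origin that sent it, which is the scoping rule suggested above.
The use of the Location header to carry the proxy address is an assumption
on my part:

```python
from urllib.parse import urlsplit

def record_proxy_hint(request_url, status, headers, proxy_table):
    """Handle a hypothetical 305-style response: remember the advertised
    proxy, but only for the origin that sent it, so an arbitrary server
    cannot redirect traffic destined for other hosts."""
    if status != 305:
        return None
    proxy = headers.get("Location")
    if not proxy:
        return None
    origin = urlsplit(request_url).netloc
    proxy_table[origin] = proxy  # scoped: consulted only for this origin
    return proxy
```

Whether 305, a new 4xx code, or something else entirely carries the hint
is exactly the open question in this thread.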

Anyway, let's not at this stage assume that this would be an impossible
problem to solve.

>> b) customers want to be able to *enforce* HTTP policy on their
>> corporate networks,
> And ponies ;). But seriously, if you're talking about content filtering
> (which I assume you are), "enforced HTTP policy" is about as achievable
> as "unbreakable DRM." 
Enforced HTTP policy is a lot more achievable than unbreakable DRM.
When you have control over every packet going through the firewall, you
can do quite a lot.

> The Chinese government has sunk untold amounts of
> money into their Great Firewall, and it's still routinely circumvented.

maybe they need WinGate :)

> Most filtering can be accomplished with a combination of blacklisted
> IPs, blacklisted DNS entries that fall under certain domains (with
> reverse DNS lookup on IPs), blocked outgoing traffic on certain ports,
> and blocked incoming traffic on all ports (with an optional DMZ) -- you
> just need a firewall for all of this. Anything more (esp. trying to
> filter at the application level, like HTTP) is pretty much a waste of
> effort*.

I've a lot of customers and distributors and competitors who would
strongly disagree with you on that one.  Several companies I know of are
making their entire livelihood out of the fact that their customers deem
HTTP filtering to be worth-while.

>> c) customers don't like to have to pay sys admins to do things they
>> can get around with technology.
> Technology has to be set up by someone. "Turnkey" is hard, hard stuff
> in the _home_ network arena, let alone corporate. Yeah, I'm sure that
> folks would like to be able to drop a big old SAN box on their network,
> and have everything on the network that can see the SAN just start doing
> backups magically. I'm sure that it'd be wonderful to just drop a
> filtration box on the network, and have no more porn or games being hit
> up on work time, and have everything be cached and fast. But, as nice
> as that would be, it's totally insecure, and unfit for the enterprise**.

That's a pretty bold assertion.  I think you'd have trouble convincing
customers about that.

>> d) the more links you put in a chain, the more chance one of them
>> will break.
> I argue that this service is likely to be more complex implemented in
> HTTP than it is to be implemented in a different protocol (like DHCP).
> It's likely to be more of a security risk, and cause more trouble in
> total than if it's implemented at a non-routeable fashion. While
> implementing in HTTP involves one fewer protocol from your perspective,
> and may make your job easier, it does not remove a link from the
> overall chain (i.e. DHCP is still present and used), or actually reduce
> the complexity at all (browsers have to implement a new feature; the
> HTTP spec gets even bigger; still have to implement the "bad"
> backwards-compatible way).
We're getting caught up too soon in implementation details, and the
discussion doesn't deserve to be there yet.

Every parameter you assign with DHCP increases complexity.  After IP,
Default Gateway, mask and DNS you start to get into areas (e.g. option
252) which aren't universally implemented by DHCP servers.  Someone with
a network set up with manually assigned IP addresses won't benefit from
DHCP here either, so there are cases where DHCP is a real problem.  The
same goes for DNS.  There's no sense to it sometimes, but it doesn't
help much to tell customers they are being senseless... you tend to lose
their business.

>> I'm simply proposing that it might be appropriate that *something* be 
>> done about this.
> Oh, indeed, something should be done about it; as you've said, it's
> common enough in practice. The question is, where is the most
> appropriate place for that something to be? 
Good question.

> The secondary question is,
> even if we want to do that something in HTTP, is it reasonably possible
> to accomplish it in a sane and secure manner, without disrupting a
> substantial portion of the spec? The ternary question is,  might we
> just write another RFC that specifies the way in which such proxies
> (and optionally, user agents) should behave to achieve maximum
> interoperability, and simply admit that it's not _really_ HTTP, but
> "mostly HTTP compatible"? The final question is, regardless of the
> chosen solution, who's going to drive all the work to make sure it gets
> done? :)
These are all good questions, and I think this is the area where the
discussion should be.

I personally believe that something should be done to recognise the
existence of intercepting proxies, and that the main problems that
exist, such as the authentication issues, deserve to be looked at.
Whether the best solution is something done in HTTP or something else is
another matter, but we shouldn't prejudge it.

The issues around a client using server vs proxy request semantics
(i.e. sending a full URI with Proxy-Connection and Proxy-Authorization
headers, vs a partial URI with Connection and Authorization headers) can
only be resolved if the UA knows it's going through a proxy.
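For illustration only, a sketch (mine, not from the thread) of the two
request shapes a UA must choose between; the Basic credential shown is a
placeholder:

```python
from urllib.parse import urlsplit

def build_request(url, via_proxy):
    """Proxy semantics: absolute URI in the request line, credentials in
    Proxy-Authorization.  Origin semantics: path only, credentials in
    Authorization.  A UA must know which mode it is in to pick correctly."""
    parts = urlsplit(url)
    target = url if via_proxy else (parts.path or "/")
    auth = "Proxy-Authorization" if via_proxy else "Authorization"
    return (f"GET {target} HTTP/1.1\r\n"
            f"Host: {parts.netloc}\r\n"
            f"{auth}: Basic dXNlcjpwYXNz\r\n\r\n")
```

An intercepting proxy receives the origin form either way, which is why
proxy authentication through one is so troublesome.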

At the moment, the spec only contemplates explicit proxies (proxies that
are known about by their clients).  So it stands to reason that there is
a place for an intercepting proxy to upgrade its legitimacy by letting
the UA know it is there.  It doesn't matter that some people won't want
this feature.  Some people will.  Passing moral judgements on the
motives of those who may wish their transparent proxies to remain
invisible isn't particularly useful.

As to what happens once a UA learns of the existence of an intercepting
proxy that is happy to be "found out", that's yet another stage.
Resolving trust issues allows a proxy-auto-config type benefit, but
there are likely other benefits as well.  If you are concerned about the
trust issues, try re-working your sample above with more meaningful IP
addresses: in scenario 1 use a private IP, and in scenario 2 a public
one.  Sure, private vs public IPs are a crude form of trust, but given
the un-routable nature of private IPs on the internet, it achieves
security by the same mechanism you claim for DHCP.  More trust can be
obtained by other methods, for instance SSL with a certificate.  I think
it's too soon to conclude that trust cannot be achieved.
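The private-vs-public distinction is at least easy to test mechanically.
A sketch (mine, not from the thread) using Python's ipaddress module:

```python
import ipaddress

def plausibly_on_path(addr):
    """Crude trust signal: RFC 1918 / loopback addresses are not
    routable across the public internet, so an announcement apparently
    from one at least originated on the local network.  This is a weak
    heuristic, not authentication."""
    ip = ipaddress.ip_address(addr)
    return ip.is_private or ip.is_loopback
```

A UA might treat a proxy announcement from a public address with far
more suspicion than one from inside its own network.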

How a UA learns of the existence of an intercepting proxy (i.e.
standards-based notification vs heuristic forensics) has a big bearing
on what can be done with that knowledge.

Also, as others have said, intercepting proxies currently leave some
artefacts in communications.  There's no UA that I know of that lets
users know they are going through an intercepting proxy.  Another area
where the spec could acknowledge the existence of intercepting proxies
would be to provide guidelines to UA developers on:

a) how to detect the presence of an intercepting proxy
b) whether, and how, the detected presence of an intercepting proxy
should be highlighted to the user.
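As a hypothetical example of (a): one artefact an intermediary may leave
is a Via header the origin never sent, so a UA could flag hops it did
not knowingly configure.  This heuristic is my own sketch, not a
proposal from the thread, and the hop names are invented:

```python
def unexpected_hops(response_headers, expected_hops=()):
    """Return the Via entries not in the UA's expected set; a non-empty
    result suggests an intermediary the UA did not knowingly configure.
    Proxies are not obliged to add Via, so this can only ever detect
    cooperative or careless intermediaries."""
    via = response_headers.get("Via", "")
    hops = [h.strip() for h in via.split(",") if h.strip()]
    return [h for h in hops if h not in expected_hops]
```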

So, those are two areas where I think contemplation of intercepting
proxies could provide tangible benefits.  There are probably others.

However, if everyone feels this is not something worth pursuing, then
fine: I'll carry on doing it my own way, as will every other vendor of
intercepting proxies, and maybe someone who writes UAs will do
something, and hopefully we will all drift in the same direction, but
probably not :)




Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Monday, 18 June 2007 06:24:42 UTC
