
Re: protocol support for intercepting proxies

From: Travis Snoozy <ai2097@users.sourceforge.net>
Date: Sun, 17 Jun 2007 22:26:42 -0700
To: Adrien de Croy <adrien@qbik.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20070617222642.19cd9d0e@localhost>

On Mon, 18 Jun 2007 15:15:09 +1200, Adrien de Croy <adrien@qbik.com> wrote:

> Travis Snoozy wrote:
> <snip>
> None of this addresses the fact that
> a) intercepting proxies exist, and exist currently unsupported by the
> spec

"Intercepting" is merely TCP/IP sleight-of-hand to misdirect/redirect
traffic at the network layer. Because the sleight-of-hand is invisible
to the UA (and any other clients downstream of the invisible proxy), it
has no way of knowing if there's _really_ an invisible proxy between it
and the origin server, or if the origin server or some other
intermediary just wants to hijack the user's session.

Scenario 1 ("what we want"):

UA:    Hey, I want example.com!
Proxy pretending to be example.com:
       I'm not really example.com; you need to add me as a
       normal HTTP proxy to get there.
UA:    Uh... is that okay, user?
User:  ... suuure?

Scenario 2 ("what we don't want"):

UA:    Hey, I want example.com!
Attacker pretending to be example.com:
       Um... I'm not really example.com; you need to add
       evil.example.net (totally not a compromised server!) as a
       proxy to get there.
UA:    Uh... is that okay, user?
User:  ... suuure?

So, money question: how do we tell these two scenarios apart? That is
the problem that needs to be solved in a secure, airtight fashion. It's
not intractable (the immediate thing that comes to mind is allowing
the redirect to affect the next request on the same address only --
which smells exactly like 305), but it's certainly unpleasant. It may
be more work than just instructing the client to configure their
browser to use the appropriate proxy via a 5xx message.
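The "same address only" restriction could be sketched roughly as follows. This is a hypothetical client-side check, not anything from the spec; the function name and URLs are purely illustrative:

```python
from urllib.parse import urlsplit

def should_honor_use_proxy(request_url: str, response_status: int,
                           proxy_hint: str) -> bool:
    """Decide whether a UA may act on a 305-style 'Use Proxy' reply.

    Sketch of the 'same address only' idea: the advertised proxy is
    honored only if it points back at the very host the request was
    sent to, and only for the retry of this one request -- never
    remembered as a general proxy setting.
    """
    if response_status != 305:
        return False
    requested_host = urlsplit(request_url).hostname
    proxy_host = urlsplit(proxy_hint).hostname
    # An intercepting proxy answers *as* the origin, so a legitimate
    # hint can only name the address we already contacted.
    return proxy_host is not None and proxy_host == requested_host

# The honest interceptor (Scenario 1) passes; the hijacker
# (Scenario 2) pointing somewhere else fails.
print(should_honor_use_proxy("http://example.com/x", 305,
                             "http://example.com:3128"))       # True
print(should_honor_use_proxy("http://example.com/x", 305,
                             "http://evil.example.net:3128"))  # False
```

Even this sketch only narrows the attack surface; it doesn't prove the responder is a benign interceptor rather than something sitting on the path.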

> b) customers want to be able to *enforce* HTTP policy on their
> corporate networks,

And ponies ;). But seriously, if you're talking about content filtering
(which I assume you are), "enforced HTTP policy" is about as achievable
as "unbreakable DRM." The Chinese government has sunk untold amounts of
money into their Great Firewall, and it's still routinely circumvented.

Most filtering can be accomplished with a combination of blacklisted
IPs, blacklisted DNS entries that fall under certain domains (with
reverse DNS lookup on IPs), blocked outgoing traffic on certain ports,
and blocked incoming traffic on all ports (with an optional DMZ) -- you
just need a firewall for all of this. Anything more (esp. trying to
filter at the application level, like HTTP) is pretty much a waste of
effort*.
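The firewall-level checks described above amount to a few simple predicates. A minimal sketch, with made-up blacklists and an assumed outgoing-port policy:

```python
import ipaddress

# Illustrative policy data -- these entries are invented for the example.
BLOCKED_NETS = [ipaddress.ip_network("203.0.113.0/24")]       # blacklisted IPs
BLOCKED_DOMAINS = {"example-ads.test", "example-games.test"}  # blacklisted DNS
ALLOWED_OUT_PORTS = {53, 80, 443}                             # outgoing ports

def permits(dst_ip: str, dst_host: str, dst_port: int) -> bool:
    """Return True if this outgoing flow would be let through."""
    addr = ipaddress.ip_address(dst_ip)
    # Blacklisted IP ranges.
    if any(addr in net for net in BLOCKED_NETS):
        return False
    # Block a blacklisted domain and anything under it (the same
    # comparison a reverse-DNS lookup on the IP would feed into).
    labels = dst_host.lower().split(".")
    if any(".".join(labels[i:]) in BLOCKED_DOMAINS
           for i in range(len(labels))):
        return False
    # Everything else is allowed only on the sanctioned ports.
    return dst_port in ALLOWED_OUT_PORTS

print(permits("198.51.100.7", "www.example.org", 443))        # True
print(permits("198.51.100.7", "cdn.example-ads.test", 80))    # False
```

None of this needs to understand HTTP at all, which is the point: the firewall gets you most of the way there without touching the application layer.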

> c) customers don't like to have to pay sys admins to do things they
> can get around with technology.

Technology has to be set up by someone. "Turnkey" is hard, hard stuff
in the _home_ network arena, let alone corporate. Yeah, I'm sure that
folks would like to be able to drop a big old SAN box on their network,
and have everything on the network that can see the SAN just start doing
backups magically. I'm sure that it'd be wonderful to just drop a
filtration box on the network, and have no more porn or games being hit
up on work time, and have everything be cached and fast. But, as nice
as that would be, it's totally insecure, and unfit for the enterprise**.

> d) the more links you put in a chain, the more chance one of them
> will break.

I argue that this service is likely to be more complex implemented in
HTTP than it would be implemented in a different protocol (like DHCP).
It's likely to be more of a security risk, and to cause more trouble in
total, than if it's implemented in a non-routable fashion. While
implementing in HTTP involves one fewer protocol from your perspective,
and may make your job easier, it does not remove a link from the
overall chain (i.e., DHCP is still present and used), or actually reduce
the complexity at all (browsers have to implement a new feature; the
HTTP spec gets even bigger; proxies still have to implement the "bad"
backwards-compatible way).

> I'm simply proposing that it might be appropriate that *something* be 
> done about this.

Oh, indeed, something should be done about it; as you've said, it's
common enough in practice. The question is, where is the most
appropriate place for that something to be? The secondary question is,
even if we want to do that something in HTTP, is it reasonably possible
to accomplish it in a sane and secure manner, without disrupting a
substantial portion of the spec***? The tertiary question is, might we
just write another RFC that specifies the way in which such proxies
(and optionally, user agents) should behave to achieve maximum
interoperability, and simply admit that it's not _really_ HTTP, but
"mostly HTTP compatible"? The final question is, regardless of the
chosen solution, who's going to drive all the work to make sure it gets
done? :)


* Not, mind you, that your customers care... as mentioned, it doesn't
stop China from continuing to spend gobs of money. Just make sure that
you have such customers, before putting a lot of effort in.

** Until we can swipe a smart card through it, have the HW's built-in
cert get signed, and through that let all the equipment know that the
HW's been authorized to perform the task it advertises. Mmm... pardon
me whilst I revel in that pipe dream of massive standardization,
interop and ease.

*** Though I'll be the first to tell you I'd like to see the whole
thing rewritten, disruptive changes will be hard, if not impossible, to
get through. O, how I pine for thee, HTTP/2.0!
Received on Monday, 18 June 2007 05:26:50 UTC
