Re: protocol support for intercepting proxies

On Mon, 18 Jun 2007 10:23:35 +1200, Adrien de Croy <adrien@qbik.com>
wrote:

<snip>
> It is very appealing for system administrators to install an 
> intercepting proxy, as it "solves" the issue of client browser 
> configuration.  Sure, there are many other methods of auto-proxy 
> configuration, but these all rely on ancillary systems and extra 
> sys-admin (and sometimes customer) knowledge (i.e. DHCP option 252,
> and/or DNS WPAD lookup).
<snip>
> Another option would be a warning code to indicate the connection had
> been intercepted.  I believe system administrators would wish to be
> able to configure how to deal with the case from a number of options
> including
>
> 1. Allow the clients to operate through the intercepting proxy
>     - with notification
>     - silently
> 2. Force the clients to re-connect to the proxy and issue requests
> with proxy semantics.
<snip>

What I'm getting from the first part is that admins don't want to have
to configure the browser.  The second part, though -- forcing clients
to re-connect and issue requests with proxy semantics -- amounts to
the browser knowing about, and being configured for, the proxy after
all.  Isn't this a little conflicted?
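(For reference, both of the auto-config mechanisms mentioned above
boil down to handing the browser the URL of a small PAC script -- DHCP
does it via option 252, DNS WPAD by resolving wpad.<domain> and
fetching /wpad.dat from it.  The script itself is just one function; a
minimal sketch, with a made-up proxy host:

  // wpad.dat -- minimal PAC sketch; proxy.example.net is hypothetical
  function FindProxyForURL(url, host) {
    // send everything through the proxy, falling back to a direct
    // connection if the proxy is unreachable
    return "PROXY proxy.example.net:3128; DIRECT";
  }

So the sysadmin effort being avoided is roughly one file plus one DHCP
or DNS entry, which is worth keeping in mind below.)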

> Vendors who implement intercepting proxies do so without much (if
> any) support from the spec, and so there are issues encountered
> which end up being solved by trial and error or best guess or in some
> cases are not even solvable (not reliably or properly anyway).
> 
> For instance the issue when a proxy intercepts connections then
> wishes to force the UA to authenticate to the proxy.
>
> This is a really common scenario.

And it's been solved for some time now, practically speaking[1]. Also,
one should speak about authenticating to the _network_, not to the
proxy -- the proxy simply provides the authentication service. It's not an
"HTTP proxy" per se, so much as a TCP/IP filtering application that
happens to speak HTTP + HTML, because most users have something that
can speak HTTP + HTML. Actual HTTP-level filtering and/or caching is
another story altogether, but I'm sticking with simple authentication
for now.
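
To be concrete about what "solved" looks like in practice: the typical
captive portal simply answers any HTTP request from an unauthenticated
client with a redirect to its own login page.  Roughly (the host names
here are made up):

  Client:
    GET / HTTP/1.1
    Host: www.example.org

  Portal, intercepting:
    HTTP/1.1 302 Found
    Location: http://login.gateway.example.net/
    Content-Length: 0

Once the user authenticates on that page, the box stops interfering
and traffic flows as normal -- no browser configuration, no proxy
semantics, and it works with anything that can render a login form.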

<snip>
> Given that the problem is not going to go away because people are not 
> going to want to stop using intercepting proxies, wouldn't it be
> better if there was some proper protocol support for the concept?

Yes, but what about backwards compatibility?  The proxy still needs a
way to let browsers that *don't* implement such extensions
authenticate and work properly.  We wind up with a chicken-and-egg
problem,
and we still have to solve the original issue with the existing
infrastructure, regardless.

> UAs at the moment don't generally know if their connections are being 
> intercepted.  If they knew, then they could;
> * let the user know connections were being intercepted
>     - ameliorates issues relating to privacy

So long as the proxy operator wants them to know, is a decent human
being, and the software supports it.

>     - helps users decipher errors better (i.e. upstream connection
> failure)

A good error message from the proxy should be adequate for this ("500
failed to connect to server at example.org").  Alternatively, one
could pass the TCP/IP issues straight through (e.g., if the upstream
connection timed out, let it time out on the client; if it was reset,
send a reset, etc.).  How would identifying the proxy to the client
help here?
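
For what it's worth, the existing 5xx range (502/504 in particular)
already lets the proxy be as descriptive as it likes about upstream
failures.  Something along these lines, with the body text purely
illustrative:

  HTTP/1.1 502 Bad Gateway
  Content-Type: text/plain
  Connection: close

  The proxy could not reach www.example.org: connection refused.

No new signaling is needed for the user to work out what went wrong.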

>     - leads towards possible user-control over whether their traffic
> may be intercepted or not

See the prior comment about the proxy operator being a decent human
being.  Also, the only options in this scenario are to go through the
forced proxy or to go without Internet access, so user-level control
over interception wouldn't buy much.  Users should *always* assume
their traffic is monitored (esp. on business & school networks, where
this type of proxy is likely to occur), and adjust their browsing
habits based on that assumption.  Users interested in getting around
eavesdroppers should already be using technologies like Tor, VPNs,
anonymizing SOCKS proxies, etc.

> * cooperate better with the proxy.
>     - move to a proxy-oriented protocol operation (can fix many
> issues, such as auth)

Yet another proxy-discovery technology -- but why? How are legacy
browsers going to cope?

>     - deal with errors differently

Examples?

> I believe that this could be achieved with either a status code, or a 
> header, where an intercepting proxy could signal to a client that it 
> intercepted the connection.  The proxy could even intercept
> connections and enforce the UAs to use a proxy method, provide a
> proxy URI in the response that the UA must use.

So, instead of the admins putting in the infrastructure for
auto-config, we'll force the end-users to sort it out themselves?
That seems kind of backwards.

> This is another case where 305 would/could have been useful.  Another
> option would be a warning code to indicate the connection had been
> intercepted.  I believe system administrators would wish to be able
> to configure how to deal with the case from a number of options
> including
> 
> 1. Allow the clients to operate through the intercepting proxy
>     - with notification
>     - silently
> 2. Force the clients to re-connect to the proxy and issue requests
> with proxy semantics.

Why bother? What would either of these points gain us?
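
For the record, I take it the two signaling ideas would look roughly
like this on the wire.  Both are hypothetical -- 305 isn't specified
for use by an intercepting proxy, and no interception warn-code
exists, so I'm borrowing the generic 199:

  HTTP/1.1 305 Use Proxy
  Location: http://proxy.example.net:8080/

or, on an otherwise normal response:

  HTTP/1.1 200 OK
  Warning: 199 proxy.example.net "connection was transparently proxied"
  ...

Either way the client has to be taught what to do with it, which
brings us right back to the legacy-browser question.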

Some solid examples/scenarios would be really handy for illustrating
the issues you're coming up against. I can vaguely see where you're
going (drop a browser on the network, have it auto-configure by virtue
of simply getting routed through the proxy, with no extra setup), but I
don't see any really big win, especially when other technologies
(zeroconf, UPnP, etc.) have been explicitly written to solve the problem
generically.


-- 
Travis

[1] http://en.wikipedia.org/wiki/Captive_portal
