
Re: protocol support for intercepting proxies

From: Adrien de Croy <adrien@qbik.com>
Date: Mon, 18 Jun 2007 12:03:19 +1200
Message-ID: <4675CBC7.2020408@qbik.com>
To: Travis Snoozy <ai2097@users.sourceforge.net>
CC: HTTP Working Group <ietf-http-wg@w3.org>

Travis Snoozy wrote:
> On Mon, 18 Jun 2007 10:23:35 +1200, Adrien de Croy <adrien@qbik.com>
> wrote:
> <snip>
>> It is very appealing for system administrators to install an 
>> intercepting proxy, as it "solves" the issue of client browser 
>> configuration.  Sure, there are many other methods of auto-proxy 
>> configuration, but these all rely on ancillary systems and extra 
>> sys-admin (and sometimes customer) knowledge (e.g. DHCP option 252, 
>> and/or DNS WPAD lookup).
> <snip>
>> Another option would be a warning code to indicate the connection had
>> been intercepted.  I believe system administrators would wish to be
>> able to configure how to deal with the case from a number of options
>> including
>> 1. Allow the clients to operate through the intercepting proxy
>>     - with notification
>>     - silently
>> 2. Force the clients to re-connect to the proxy and issue requests
>> with proxy semantics.
> <snip>
> What I'm getting from the first part is: admins don't want to configure
> the browser. The second part, though, seems to say admins want to
> configure the browser. Isn't this a little conflicted?
Admins don't want to individually configure thousands of individual 
browsers.

They wish to configure browsers centrally, with a single configuration 
setting.

>> Vendors who implement intercepting proxies do so without much (if
>> any) support from the spec, and so there are issues encountered
>> which end up being solved by trial and error or best guess or in some
>> cases are not even solvable (not reliably or properly anyway).
>> For instance the issue when a proxy intercepts connections then
>> wishes to force the UA to authenticate to the proxy.
>> This is a really common scenario.
> And it's been solved for some time now, practically speaking[1]. 

largely, but there are some significant warts on it.

WinGate for example does the hideous juggling act of intercepting 
connections, running NTLM auth over them, then allowing the same request 
to go through to an origin server that then also requires NTLM auth.

But it's not pretty, or particularly robust.  There are browser variations.

> Also,
> one should speak about authenticating to the _network_, not the proxy --
> the proxy simply provides the service for authentication. It's not an
> "HTTP proxy" per se, so much as a TCP/IP filtering application that
> happens to speak HTTP + HTML, because most users have something that
> can speak HTTP + HTML. Actual HTTP-level filtering and/or caching is
> another story altogether, but I'm sticking with simple authentication
> for now.
lost me there.

> <snip>
>> Given that the problem is not going to go away because people are not 
>> going to want to stop using intercepting proxies, wouldn't it be
>> better if there was some proper protocol support for the concept?
> Yes, but what about backwards compatibility? The proxy still needs a
> way to let browsers that *don't* implement such extensions to
> authenticate and work properly. 
if a 400-series code came back from an intercepting proxy, with a page 
saying "you need to configure your browser to use a proxy" plus a 
header field carrying the URI of the proxy, then a compliant client 
that trusted the source of this message could automatically retry the 
request via the proxy, and even ask the user whether they wish to set 
their browser to use this proxy for future requests. A non-compliant 
browser would simply show the message.
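To make the idea concrete, here is a minimal sketch of how a compliant UA might handle such a response. Everything specific here is an assumption: the status code 430 is a placeholder (no such code is defined), and the "Proxy-Location" header name is invented for illustration.

```python
# Sketch only: status 430 and the "Proxy-Location" header are
# hypothetical placeholders, not part of HTTP as specified.
HYPOTHETICAL_INTERCEPT_STATUS = 430

def handle_response(status, headers, trusted_sources, source):
    """Decide how a compliant UA might react to an interception notice."""
    proxy_uri = headers.get("Proxy-Location")
    if status == HYPOTHETICAL_INTERCEPT_STATUS and proxy_uri:
        if source in trusted_sources:
            # Compliant client: retry the request with proxy semantics,
            # optionally offering to persist the setting for the user.
            return ("retry-via-proxy", proxy_uri)
        # Untrusted source: fall back to rendering the message page,
        # which is also all a non-compliant (legacy) browser would do.
        return ("show-message", None)
    return ("normal", None)  # any other response: no change in behaviour
```

A legacy browser ignores the unknown header entirely and just shows the human-readable page, which is what makes the scheme backwards compatible.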

> We wind up with a chicken/egg problem,
> and we still have to solve the original issue with the existing
> infrastructure, regardless.
>> UAs at the moment don't generally know if their connections are being 
>> intercepted.  If they knew, then they could;
>> * let the user know connections were being intercepted
>>     - ameliorates issues relating to privacy
> So long as the proxy-operator wants them to know, and is a decent human
> being, and the software supports it .

or is forced by privacy legislation to do so.  Several countries I know 
of have quite advanced privacy regulations concerning internet traffic, 
e.g. Italy.

>>     - helps users decipher errors better (i.e. upstream connection
>> failure)
> A good error message from the proxy should be adequate for this ("500
> failed to connect to server at example.org"). Alternately, one could
> pass the TCP/IP issues directly through (e.g., if the connection timed
> out, let it time out on the client; 

You can't time out at the connection phase if you've already moved past 
that phase by accepting the connection.

The whole advantage of a caching intercepting proxy is avoiding the 
connection to the upstream origin server where possible.  So the proxy 
has to accept the connection before the client will send the request 
(which the proxy needs in order to check its cache), by which stage 
it's too late if the upstream connection fails.  All that can be done 
at that stage is to present an error page to the client, at which point 
they know immediately that their connection is being intercepted (if 
they are aware of the significance of such things).
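The ordering constraint above can be sketched as follows. Names are illustrative, not from any real proxy; the point is only that the cache check forces the accept to happen before any upstream dial.

```python
# Sketch: a caching intercepting proxy must accept the TCP connection
# and read the request before it can consult its cache, so an upstream
# connect failure can only surface as an HTTP error page, never as a
# connect-phase timeout on the client side.

def serve_intercepted(request_url, cache, connect_upstream):
    # By the time we have request_url, the client's connect() has
    # already succeeded -- we accepted it in order to read the request.
    if request_url in cache:
        return (200, cache[request_url])      # cache hit: no upstream dial
    try:
        body = connect_upstream(request_url)  # only now do we dial out
    except ConnectionError:
        # Too late to pass a connect failure through to the client;
        # all we can do is return an error page, which reveals the
        # interception to an observant user.
        return (502, "Upstream connection failed (intercepting proxy)")
    cache[request_url] = body
    return (200, body)
```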

> if it was reset, reset it, etc.).
> What about identifying the proxy with the client would help?

You'd definitely want some mechanism to assist the human to make the 
decision about whether or not to use the proxy.

>>     - leads towards possible user-control over whether their traffic
>> may be intercepted or not
> See prior comment about proxy op being a decent human being. Also, the
> options in this scenario are go through the forced proxy, or don't get
> Internet access -- a security policy wouldn't be very helpful. Users
> should *always* assume their traffic is monitored (esp. on business &
> school networks, where this type of proxy is likely to occur), and
> vary their browsing habits based on that assumption. Users interested
> in getting around eavesdroppers should already be using technologies
> like Tor, VPNs, anonymizing SOCKS proxies, etc.
Actually I can think of scenarios where this would be useful.

For instance our ISP intercepts all connections, but allows customers to 
opt-out of this (which we had to do in order to perform useful testing 
of WinGate).

The ISP's current opt-out process consumes their support resources.  A 
protocol mechanism allowing a customer to opt out themselves would save 
the ISP those resources.

Obviously, whether to allow users to opt out would itself be a 
configuration option (off by default for corporate proxies, possibly on 
for ISPs).

Most corporate gateways won't allow such simple bypassing of HTTP 
policy, and will block VPN, SOCKS etc.

>> * cooperate better with the proxy.
>>     - move to a proxy-oriented protocol operation (can fix many
>> issues, such as auth)
> Yet another proxy-discovery technology -- but why? How are legacy
> browsers going to cope?

why: because current ones rely on out-of-band systems (i.e. DHCP / 
DNS).  It's possible to solve the problem completely within HTTP, in 
one place.

Legacy browsers will cope as per mechanisms above.  I'm not proposing 
deprecating these other methods, although I think they would fall from 
grace with a decent HTTP implementation.

>>     - deal with errors differently
> Examples?

classic one being the upstream connection failure.

>> I believe that this could be achieved with either a status code, or a 
>> header, where an intercepting proxy could signal to a client that it 
>> intercepted the connection.  The proxy could even intercept
>> connections and enforce the UAs to use a proxy method, provide a
>> proxy URI in the response that the UA must use.
> So, instead of the admins putting the infrastructure in to allow
> auto-config, we'll force the end-users to do it themselves? That seems
> kind of backwards.

not end-users, UAs.

Don't forget that it's a configuration option on a browser whether or 
not to use proxy auto detection as well.

>> This is another case where 305 would/could have been useful.  Another
>> option would be a warning code to indicate the connection had been
>> intercepted.  I believe system administrators would wish to be able
>> to configure how to deal with the case from a number of options
>> including
>> 1. Allow the clients to operate through the intercepting proxy
>>     - with notification
>>     - silently
>> 2. Force the clients to re-connect to the proxy and issue requests
>> with proxy semantics.
> Why bother? What would either of these points gain us?

reduced burden on admins and users.

Consider the links in the chain that can break for WPAD:

1. Browser not configured to use proxy auto detect
2. Client DNS issues
3. DNS server not providing decent records for WPAD lookups
4. WPAD URL not working (web server serving WPAD files not configured 
correctly)
5. Some clients use DHCP option 252 for WPAD, not DNS.
6. DHCP implementation and configuration issues

There are quite a few links that can break, several of which are 
client-side. On a large network, supporting all this can be a huge 
burden on admins.
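The chain can be sketched as a single lookup function with the steps injected as callables, so each of the links above can be seen to break independently. The function and parameter names are illustrative only.

```python
# Sketch of the WPAD discovery chain; any step returning None
# (or the UA setting being off) breaks the whole chain.

def discover_proxy(auto_detect_enabled, dhcp_option_252,
                   dns_wpad_lookup, fetch_pac):
    if not auto_detect_enabled:        # link 1: UA auto-detect turned off
        return None
    url = dhcp_option_252()            # links 5/6: DHCP option 252 path
    if url is None:
        url = dns_wpad_lookup()        # links 2/3: DNS wpad.* lookup path
    if url is None:
        return None                    # neither discovery path worked
    return fetch_pac(url)              # link 4: serving the PAC file itself
```

An HTTP-native mechanism would collapse all of these server- and client-side dependencies into the one channel that is already known to work: the intercepted HTTP connection itself.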

So, solving the issue within HTTP would seem sensible.

> Some solid examples/scenarios would be really handy for illustrating
> the issues you're coming up against. I can vaguely see where you're
> going (drop a browser on the network, have it auto-configure by virtue
> of simply getting routed through the proxy, with no extra setup), but I
> don't see any really big win, especially when other technologies
> (zeroconf, UPnP, etc.) have been explicitly written to solve the problem
> generically.
UPnP is often seen as a security nightmare and turned off.  I don't know 
much about zeroconf.

The main win comes in sysadmin workload; see above.



Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Monday, 18 June 2007 00:03:05 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 1 March 2016 11:10:42 UTC