Re: The future of forward proxy servers in an http/2 over TLS world

On 02/16/2017 11:25 AM, Tom Bergan wrote:

> You started by stating, without proof, that proxies are needed to block
> requests.

Adrien did not state that at all! He actually stated that

  * proxies are used to block requests;
  * blocking requests is a critical proxy purpose;
  * blocking by proxy becomes increasingly difficult or even impossible
    due to ongoing protocol changes

All of these are well-known facts that require no proof, I hope.

[ If you are implying that requests should never be blocked or should
only be blocked by user agents, then I hope that other folks on the
mailing list can prove you wrong without appearing to be as biased as a
proxy developer would. ]

> What are you actually trying to do

We are blocking requests that violate some policy (the actual policies
vary from use case to use case, of course). What we are trying to do is
to convince browser folks that they should find a good way to show proxy
errors to their users.

> and why does that require a proxy?

Blocking does not require a proxy, but proxies have been the best tool
for the job for many years, and in many cases they still are.

> Do your users opt-in to the feature, 

Yes, of course. Any use of a proxy without client consent is outside the
scope of this discussion.

> How much control do you have over the user's machine?

The level of control varies from "it is our machine, not the user's" to
"we kindly suggest that users follow documented configuration steps for
their machine". Often, we deal with a complex mixture of control levels.

> Is your proxy installed on the user's machine (like
> an anti-virus), on end-user routers, in ISPs, or in a third party? 

All of the above and more, depending on the use case.

> Are you trying to block hosts or specific URLs? 

That depends on the policy, and those are not the only two options. For
example, blocking based on an outdated SSL version used by the client,
or on a recently revoked certificate sent by the server, is not uncommon.
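To make the SSL-version case concrete, here is a minimal sketch (in Python, with a hypothetical function name) of how a proxy might inspect the first TLS record of a tunneled connection and block an outdated client. Note the caveat in the comments: this checks only the legacy client_version field, and a real implementation would also have to parse the supported_versions extension, since TLS 1.3 clients advertise 0x0303 there.

```python
def should_block_client_hello(record: bytes) -> bool:
    """Return True if the first TLS record announces an outdated version.

    Sketch only: this checks the legacy client_version field of a
    ClientHello. A real proxy must also parse the supported_versions
    extension, because TLS 1.3 clients send legacy_version 0x0303 and
    put the true versions in that extension.
    """
    if len(record) < 11:
        return True                      # too short to be a ClientHello
    if record[0] != 0x16 or record[5] != 0x01:
        return True                      # not a handshake / not a ClientHello
    major, minor = record[9], record[10]
    # 0x0303 = TLS 1.2; block SSL 3.0 (0x0300) through TLS 1.1 (0x0302)
    return (major, minor) < (0x03, 0x03)
```

The point is not the exact policy but that the version bytes are visible to the proxy in cleartext, so this decision needs no MitM.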

> Do you need to inspect the
> body of the request or the response, or the headers, or just the URL?

For the purposes of this discussion, we are blocking based on all
available unencrypted (from the proxy's point of view) information, such
as the HTTP CONNECT request header. The more information the proxy gets,
the more accurate its blocking decisions can become.
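For an https:// URL, that CONNECT request is essentially all the HTTP-layer information a forward proxy sees: an authority (host:port), but no path or query. A minimal sketch of extracting and matching it (the function names and the subdomain-matching policy are illustrative, not any particular product's behavior):

```python
def connect_target(request: bytes):
    """Extract the authority (host:port) from a CONNECT request line,
    e.g. b"CONNECT example.com:443 HTTP/1.1\r\n..." -> "example.com:443".
    Returns None if this is not a CONNECT request."""
    line = request.split(b"\r\n", 1)[0].decode("ascii", "replace")
    parts = line.split()
    if len(parts) == 3 and parts[0] == "CONNECT":
        return parts[1]
    return None

def is_blocked(authority: str, blocked_hosts: set) -> bool:
    """Illustrative policy: block listed hosts and their subdomains.
    Real policies are, of course, far richer than a host set."""
    host = authority.rsplit(":", 1)[0].lower()
    return any(host == b or host.endswith("." + b) for b in blocked_hosts)
```

This also illustrates the precision point above: with only the authority visible, a policy that really targets one URL on a host has no choice but to block (or allow) the whole host.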

Do some policies benefit from knowing specific URLs? Sure! Does blocking
become unnecessary when those URLs become hidden from the blocker? No.
It just becomes less precise, more arbitrary, etc.

There are other sources of information, of course, but they are outside
the scope of this discussion.

> How do we know you're trying to do something that's even feasible in the
> first place?

We know it is feasible because it has been done. Unfortunately, the
working solutions become uglier and uglier for everybody involved, from
users to admins to developers. Adrien is

* asking whether blocking requests by proxy is still a valid HTTP use
case from this WG's point of view

* begging for a specific browser enhancement that would solve many
blocking problems and provide a pathway to more solutions down the road.
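To illustrate the enhancement being requested: a proxy can already deny a CONNECT with a status code and a human-readable body, but most browsers today discard that body and show a generic connection error. A sketch of such a denial (the function name is hypothetical; the request is that browsers render something like this reason to the user):

```python
def deny_connect(reason: str) -> bytes:
    """Build a CONNECT denial with a human-readable explanation.
    Proxies can send this today; the missing piece is browsers
    displaying the body instead of a generic connectivity error."""
    body = ("<html><body><h1>Request blocked by proxy</h1>"
            f"<p>{reason}</p></body></html>").encode("utf-8")
    head = (
        "HTTP/1.1 403 Forbidden\r\n"
        "Content-Type: text/html; charset=utf-8\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")
    return head + body
```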



> On Tue, Feb 14, 2017 at 2:37 PM, Adrien de Croy wrote:
>     At the moment, it feels like the functions provided by proxy servers
>     are being squeezed out by changes in the protocol.
>     I can understand the desire for privacy, and we've had the argument
>     about whether it should be available to all or not too many times
>     already.
>     However, there are other functions that a proxy is commonly used for
>     that are becoming impossible with the direction TLS, HTTPS, HSTS,
>     cert pinning, etc. are going.
>     Whilst I can understand a desire and need for privacy, an ability to
>     be able to go to a website without betraying which site you're going
>     to (e.g. see
>     <>) there's
>     probably 1 remaining IMO critical bona fide purpose for a proxy
>     which is becoming very problematic for users.
>     Blocking requests.
>     So, do we feel there is still a place for blocking requests?  Our
>     customers still certainly want this.
>     Currently the user experience is either appalling (generic
>     connectivity failure report which wastes a lot of user time), or
>     requires deployment of a MitM, which is being squeezed out as well. 
>     We should be able to do better, but it doesn't appear to be being
>     addressed at all, and the gulf is widening.
>     I believe we need to put some time into working out how we can allow
>     a proxy to block requests without an awful user experience that
>     costs users and tech support countless hours to deal with.
>     This means we have a need to be able to respond to CONNECT with a
>     denial, and some kind of message that can be displayed to the user.
>     It may be that the only way this can be achieved is by the concept
>     of a trusted proxy. 
>     Otherwise if the group consensus is that requests should not be
>     blocked, we need to deal with the consequences of that.
>     Adrien
>     P.s. another key feature is caching, but that is becoming less
>     useful anyway.  Customers can often live without caching, they do
>     not tolerate being unable to block however.

Received on Thursday, 16 February 2017 19:47:44 UTC