
Some proxy needs

From: Nicolas Mailhot <nicolas.mailhot@laposte.net>
Date: Sun, 8 Apr 2012 14:14:15 +0200
Message-ID: <3dfc2c17927267e41710084836183f71.squirrel@arekh.dyndns.org>
To: ietf-http-wg@w3.org
I'll attempt a quick summary of what's been written on the list these past weeks.

A proxy needs:

1. discoverability, to handle network guests (right now taken care of by
WPAD+PAC, though a lot of clients do not handle those, and they are not really
satisfactory I guess). Automated discoverability is not the same thing as
applying those settings blindly in the web client behind the user's back. The
web client is free to try a direct connection anyway, but if that fails it
should notify the user of the proxy's presence and let the user accept it, or
not. Right now the lack of discoverability means settings are copied blindly
by users from one web client to another, and I fail to see how that improves
security.
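For reference, the WPAD+PAC path mentioned above boils down to the client
fetching and blindly applying a small script like the sketch below (all host
and proxy names are illustrative) -- nothing here is surfaced to the user for
review or consent, which is the problem:

```javascript
// Sketch of a PAC file as fetched via WPAD, e.g. from
// http://wpad.example.net/wpad.dat -- names are illustrative.
function FindProxyForURL(url, host) {
  // Plain (dot-less) intranet names and the local domain bypass the proxy
  if (host.indexOf(".") === -1 || /\.example\.net$/.test(host)) {
    return "DIRECT";
  }
  // Everything else goes through the gateway, falling back to direct
  return "PROXY proxy.example.net:3128; DIRECT";
}
```

(The real PAC dialect also provides helpers such as dnsDomainIs() and
isInNet(); the point stands that the client applies whatever this script says
without telling the user.)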

2. a way to signal the web client that a specific URL access (any access at
any time) needs proxy authorization, and where this authorization happens (on
complex network topologies there is no single auth portal that works for every
access). Probably also a way to communicate the terms of service associated
with this gateway, in a non-spoofable way. Avoiding spoofing requires specific
web client chrome.
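As a baseline, HTTP/1.1's 407 status already signals per-request proxy
authorization at the message level; what it lacks is exactly the non-spoofable
chrome and the pointer to the right auth portal described above. A sketch of
today's exchange (host names illustrative):

```
GET http://www.w3.org/ HTTP/1.1
Host: www.w3.org

HTTP/1.1 407 Proxy Authentication Required
Proxy-Authenticate: Basic realm="guest-network gateway"
```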

3. a way to signal the web client that a request is being processed (there is
no way a multi-GB ISO is going to pass through the anti-malware system
instantaneously, and users will press retry if the download bar does not move
after a few seconds)
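WebDAV's interim 102 (Processing) status (RFC 2518) is one existing, if
rarely implemented, precedent for such a keep-waiting signal; a sketch (host
name illustrative):

```
GET http://downloads.example.net/distro.iso HTTP/1.1
Host: downloads.example.net

HTTP/1.1 102 Processing
  ...repeated while the gateway scans the file...
HTTP/1.1 200 OK
```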

4. A way to inspect most of the client communication for malware. I say most
because:
 a. a lot of operators would probably accept not looking at the user's dialog
with bank sites or webmail services operated by entities with known good
security (such as Gmail)
 b. though not ideal, it would be an acceptable risk not to look at
authentication dialogs as long as they were too small to be an efficient
malware or botnet channel (I suppose some rate-limiting would need to occur)

5. a way for distribution sites to signal that a resource is duplicated on a
CDN, and what the root resource is (systems like SourceForge are killing
caching, since every new request is redirected to a different distribution
server)
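Part of 5. exists already: Metalink/HTTP (RFC 6249) lets the origin describe
mirror copies with Link rel=duplicate headers, so a cache could treat all of
them as the same resource (mirror hosts illustrative):

```
HTTP/1.1 200 OK
Link: <http://mirror.example.com/distro.iso>; rel=duplicate
Link: <http://mirror.example.org/distro.iso>; rel=duplicate
```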

1. needs signing for any user whitelisting to be secure.

2-3 need out-of-band communication, probably multiplexed with the main stream,
so there is no doubt in the web client which messages originate from the web
site and which originate from the proxy. The proxy messages need at least
signing (to avoid untrusted interception), and the proxy auth needs full
encryption.

4. means there would be at least two levels of communication security:
 a. communication in the clear
 b. encrypted communication, with a middleman authorized to look at it (what
Willy proposes)

and ideally
 c. encrypted end-to-end communication (but it needs clear separation from
b., will be subject to specific proxy whitelisting, and will probably require
proxy auth)

Except maybe for 1., everything else seems in HTTP/2 scope to me.

Nicolas Mailhot
Received on Sunday, 8 April 2012 12:14:51 UTC
