
Re: multiplexing -- don't do it

From: Nicolas Mailhot <nicolas.mailhot@laposte.net>
Date: Fri, 6 Apr 2012 17:12:27 +0200
Message-ID: <50b278cb647638c66ee1db0fe1bf8488.squirrel@arekh.dyndns.org>
To: "William Chan (陈智昌)" <willchan@chromium.org>
Cc: "Nicolas Mailhot" <nicolas.mailhot@laposte.net>, ietf-http-wg@w3.org

On Fri, 6 April 2012 at 16:30, William Chan (陈智昌) wrote:
> On Fri, Apr 6, 2012 at 4:00 PM, Nicolas Mailhot <nicolas.mailhot@laposte.net>

>> And yet none of those vendors thought twice before disabling https
>> redirects, even though it was known they were widely used by proxies and
>> captive portals, and no replacement was proposed, and it subjected, and
>> still subjects, a non-trivial number of proxy users to hangs, data
>> corruption, and other errors.
> I don't think this is relevant to the http/2.0 discussion. I'm happy to
> have this discussion, but perhaps you should start another thread.

It is very relevant to this discussion. Because of this browser decision,
https is currently broken in authenticating gateway environments. That is one
of the pain points http/2 should fix.

> I don't know what you mean by disabling https redirects...

Once upon a time, when a user navigated to an https web site, an intermediary
hop could send a redirect to its own authentication gateway to authorize the
user. Then browsers decided such redirects were evil (they could also be used
for MITM) and started to ignore them without providing any other mechanism to
handle gateway auth.

> I think you mean
> clients do what they are supposed to do with https URLs - verify the
> server's certificate, which generally prevents these captive portals from
> MITM'ing the connection. I understand this causes problems for captive
> portal vendors, but I don't think it's valid to complain that clients are
> correctly implementing https.

That's very well written, except browsers deliberately broke a widely-deployed
use case, and didn't propose anything to replace it (and no one can replace it
without browser cooperation).

You won't get any sympathy from all the proxy admins who have been yelled at
these past years because some user got blocked while browsing an https web
site and could not understand that the proxy had no way to get the f* browser
to display the normal proxy auth page (the biggest prize goes to Firefox,
which, in addition to blocking the proxy redirects, complains to the user that
the proxy is blocking the connection and that it does not know why).

> I think captive portal vendors should come up
> with a real proposal instead of relying on hacks.

The proposal has been made many times in browser bug trackers. It's always:
1. web client requests a web page
2. gateway responds that the web client is not authorized (or no longer
authorized) to access this url, and specifies the address of its
authentication page
3. web client displays this address (if it's a dumb client like curl) or
renders it (if it's a browser)
4. user authenticates
5. web client retries its first request and now it works

Happiness ensues as the user gets their page, the admin is not yelled at, and
corporate filtering is enforced.

The only things that change between proposals are the error codes or field
names; the basic design is always the same.
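The five steps above can be sketched in a few lines. This is purely
illustrative: it assumes the gateway signals the block with HTTP status 511
(Network Authentication Required, later standardized in RFC 6585) and
advertises its login page in a Location-style header. Neither that header
choice nor any of the names below come from the thread; the network exchange
is simulated by a stand-in `fetch` function.

```python
from dataclasses import dataclass, field

@dataclass
class Response:
    status: int
    headers: dict = field(default_factory=dict)
    body: str = ""

def fetch(url, authenticated):
    """Stand-in for a real HTTP fetch: the gateway intercepts
    unauthenticated clients and answers 511 instead of the origin
    (assumption for this sketch; not a standardized mechanism beyond
    the 511 status code itself)."""
    if not authenticated:
        # Step 2: gateway refuses and names its authentication page.
        return Response(511, {"Location": "https://gateway.example/login"})
    return Response(200, body="the page the user wanted")

def browse(url):
    authenticated = False
    resp = fetch(url, authenticated)          # step 1: request the page
    if resp.status == 511:
        login = resp.headers["Location"]
        # Step 3: a dumb client prints the address; a browser renders it.
        print(f"Authenticate at {login}, then the request is retried")
        authenticated = True                  # step 4: pretend the user logged in
        resp = fetch(url, authenticated)      # step 5: retry, now it works
    return resp

result = browse("http://example.org/page")
```

The point of the sketch is that no MITM of https traffic is needed: the
gateway only has to return a well-defined error that clients know how to
surface, instead of a forged redirect that clients rightly reject.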

And no, various "I'm on the internet" test pages do not work, since they map
neither to network topology nor to the filtering rules enforced by the
gateway. In fact, such a page, when implemented, has a high chance of being
summarily blacklisted, since browsers like to hammer it constantly for no good
reason.

Nicolas Mailhot
Received on Friday, 6 April 2012 15:13:01 UTC
