
Re: multiplexing -- don't do it

From: Jamie Lokier <jamie@shareable.org>
Date: Sat, 7 Apr 2012 20:29:33 +0100
To: Nicolas Mailhot <nicolas.mailhot@laposte.net>
Cc: "William Chan (陈智昌)" <willchan@chromium.org>, ietf-http-wg@w3.org
Message-ID: <20120407192933.GA3240@jl-vm1.vm.bytemark.co.uk>
Nicolas Mailhot wrote:
> Le Ven 6 avril 2012 16:30, William Chan (陈智昌) a écrit :
> > On Fri, Apr 6, 2012 at 4:00 PM, Nicolas Mailhot <nicolas.mailhot@laposte.net
> >> And yet none of those vendors thought twice before disabling https
> >> redirects,
> >> even though it was known they were widely used by proxies and captive
> >> portals,
> >> and no replacement was proposed, and it subjected and still subjects a
> >> non-trivial number of proxy users to hangs, data corruption, and
> >> other errors.
> >>
> >
> > I don't think this is relevant to the http/2.0 discussion. I'm happy to
> > have this discussion, but perhaps you should start another thread.
> It is very relevant to this discussion. Because of this browser decision,
> https is currently broken in authenticating gateway environments. That is one
> of the pain points http/2 should fix.
> > I don't know what you mean by disabling https redirects...
> Once upon a time, when a user navigated to an https web site, an intermediary
> hop could send a redirection to its own authentication gateway to authorise
> the user. And then browsers decided such redirects were evil (they could also
> be used for MITM) and started to ignore them without providing any other
> mechanism to handle gateway auth.
> > I think you mean
> > clients do what they are supposed to do with https URLs - verify the
> > server's certificate, which generally prevents these captive portals from
> > MITM'ing the connection. I understand this causes problems for captive
> > portal vendors, but I don't think it's valid to complain that clients are
> > correctly implementing https.
> That's very well written, except browsers deliberately broke a widely-deployed
> use case, and didn't propose anything to replace it (and no one can replace it
> without browser cooperation).
> You won't get any sympathy from all the proxy admins who have been yelled at
> these past years because some user got blocked while browsing an https web
> site and could not understand that the proxy had no way to get the f* browser
> to display the normal proxy auth page (the biggest prize goes to Firefox,
> which in addition to blocking the proxy redirects tells the user the proxy is
> blocking the connection and it does not know why)
> > I think captive portal vendors should come up
> > with a real proposal instead of relying on hacks.
> The proposal has been made many times in browser bug trackers. It's always
> basically:
> 1. web client requests a web page
> 2. gateway responds that the web client is not authorized (or no longer
> authorized) to access this url, and specifies the address of its
> authentication page
> 3. web client displays this address (if it's a dumb client like curl) or
> renders it (if it's a browser)
> 4. user authenticates
> 5. web client retries its first request and now it works
> Happiness ensues as the user gets their page, the admin is not yelled at, and
> corporate filtering is enforced.

That's quite broken if the request is an AJAX update or something
similar from an existing page in their browser, such as a page
they've kept open from before, or resumed from a saved session, or,
as you say, one that is not authorized any more (but presumably was
earlier).

Transparent interception makes the page go wrong; the user is not
rendered happy in these cases.

-- Jamie
Received on Saturday, 7 April 2012 19:30:00 UTC
