
Re: multiplexing -- don't do it

From: Jamie Lokier <jamie@shareable.org>
Date: Mon, 9 Apr 2012 16:12:10 +0100
To: Nicolas Mailhot <nicolas.mailhot@laposte.net>
Cc: "William Chan (陈智昌)" <willchan@chromium.org>, ietf-http-wg@w3.org
Message-ID: <20120409151210.GC3240@jl-vm1.vm.bytemark.co.uk>
Nicolas Mailhot wrote:
> 
> On Sat, 7 April 2012 at 21:29, Jamie Lokier wrote:
> > Nicolas Mailhot wrote:
> 
> >> The proposal has been made many times in browser bug trackers. It's always
> >> basically:
> >> 1. web client requests a web page
> >> 2. gateway responds that the web client is not authorized (or not
> >> authorized anymore) to access this URL, and specifies the address of
> >> its authentication page
> >> 3. web client displays this address (if it's a dumb client like curl) or
> >> renders it (if it's a browser)
> >> 4. user authenticates
> >> 5. web client retries its first request and now it works
> >>
> >> Happiness ensues as the user gets their page, the admin is not yelled
> >> at, and corporate filtering is enforced.
> >
> > That's quite broken if the request is an AJAX update or something
> > like that from an existing page on their browser, such as a page
> > they've kept open from before, or resumed from a saved session, or,
> > as you say, not authorized any more (presumably it was earlier).
> 
> No, that's not quite broken; that's the only way it can work.
> 
> Please admit that on restricted networks, access to some external sites
> requires authorization. That this authorization won't be eternal, for
> basic security reasons. That due to hibernation/resume/client
> mobility/plain equipment maintenance, this authorization will need to
> be acquired or reacquired at any point in the web client's browsing.

I'm not arguing against the authorization requirement.

I'm only saying that your "happiness ensues" conclusion is false: you
said yourself the proposal is always basically the same, and in my
personal experience as an end user, that same scheme is already
horrible.

> That means yes, you do need to handle ajax updates, mid-TLS
> interruptions, and all the difficult use cases. The user is not going
> to oblige you by restricting himself to the simple use cases when auth
> needs reacquiring. Because if web clients don't handle those, the
> gateway will always have the option to block the access. And make no
> mistake, it will and does exercise it.

Right.  But the web client _can't_ handle those cases, because the
gateway is injecting a fake redirect, the gateway doesn't know what
it's interrupting, and the result is just like a normal page, not an
error page or special signal to the browser asking for authorization.

> The refusal to handle those cases so far has resulted in :
> 1. broken hotel/conference captive portals
> 2. widespread availability of TLS interception in proxy manufacturer catalogs
> 3. corporations getting stuck on old insecure browser versions because
> the newer ones' 'security' hardening broke their proxies
> 4. corporations hand-patching newer browser releases to restore the old
> 'redirection on https works' behaviour
> 
> And in all those cases, who were the first to suffer? The users. If
> you'd poll them, the vast majority would care *nothing* about the https
> cleanliness model, privacy, etc. Not as long as that means they have a
> broken browsing experience all day, every day.

Here's what happens in the old style: I connect to a
corporate/hotel/cafe network.  Then I dread starting Firefox, because
my last 50 open tabs will start up and _all_ redirect to the portal's
Wifi login page.  I get 50 stupid login pages, and lose the original
state.

If I'm paying attention, I start Firefox _before_ connecting to the
network, wait for it to start in offline mode and load the 50 tabs
correctly, and then connect to the network.

But still, the pages that use AJAX polling start doing random things,
as their Javascript receives the portal's login page instead of the
response it was expecting.
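Under the hood it's something like this (again a sketch, with made-up
names):

    GET /inbox/poll?cursor=1234 HTTP/1.1
    Host: mail.example.com
    Accept: application/json

    HTTP/1.1 302 Found
    Location: http://portal.example.net/login

The XHR dutifully follows the redirect, receives the portal's HTML
login form, and the page's script falls over trying to treat that
HTML as the JSON it asked for.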

These days I resort to running w3m (a Lynx-like text-only browser) to
go to the proxy's login page first.  But that's increasingly broken
too, as some Wifi login pages have stopped being normal forms, and
only work in a fully fledged graphical browser with Javascript
enabled, to "simulate" form fields.  Don't ask me why.  All I know is
the model you are pushing is broken enough already with plain HTTP.

So my objection to the classical approach to authorization by
redirecting everything has nothing to do with security, or even HTTPS,
and everything to do with the user experience.

What would work much better is if the browser got a response meaning
"you will need to authorize before the original request can proceed -
open [URL] to present an authorization page", and did not consider
the original request to have completed yet.
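Concretely, the signal could look something like this (the status
code and header name here are invented purely for illustration, not a
concrete proposal):

    GET /inbox/poll?cursor=1234 HTTP/1.1
    Host: mail.example.com

    HTTP/1.1 430 Network Authorization Required
    Portal-Login: https://portal.example.net/login
    Cache-Control: no-store

The browser could then show one login page, let the user
authenticate, and retry the original request afterwards; an AJAX
request would see its response delayed rather than answered with
someone else's HTML.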

Intercepting proxies could do that with HTTP or HTTPS or HTTP/2.0 if
there's a standard signal for it, *without* having to break the
security model or mislead any users.  It would be a nicer experience
for everyone.

-- Jamie
Received on Monday, 9 April 2012 15:12:36 GMT
