Re: multiplexing -- don't do it

Once again - the people arguing against encryption are the people who want
to exploit the user's data transmission stream for their own personal gain.

If we want the Internet to protect users - encrypt it.

If we want the Internet to be an enabler for vendors who want to
change/alter/slow down/trick users into seeing their content, buying their
products, etc., then don't.

It's a simple choice: users vs. interceptors.

Mike


On Fri, Apr 6, 2012 at 3:12 PM, Nicolas Mailhot
<nicolas.mailhot@laposte.net> wrote:

>
> On Fri 6 April 2012 16:30, William Chan (陈智昌) wrote:
> > On Fri, Apr 6, 2012 at 4:00 PM, Nicolas Mailhot
> > <nicolas.mailhot@laposte.net> wrote:
> >> And yet none of those vendors thought twice before disabling https
> >> redirects, even though it was known they were widely used by proxies
> >> and captive portals, and no replacement was proposed, and it subjected
> >> and still subjects a non-trivial number of proxy users to hangs, data
> >> corruption, and other errors.
> >>
> >
> > I don't think this is relevant to the http/2.0 discussion. I'm happy to
> > have this discussion, but perhaps you should start another thread.
>
> It is very relevant to this discussion. Because of this browser decision,
> https is currently broken in authenticating gateway environments. That is
> one of the pain points http/2 should fix.
>
> > I don't know what you mean by disabling https redirects...
>
> Once upon a time, when a user navigated to an https web site, an
> intermediary hop could send a redirect to its own authentication gateway
> to authorise the user. Then browsers decided such redirects were evil
> (they could also be used for MITM) and started to ignore them, without
> providing any other mechanism to handle gateway auth.
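>
> One common shape of this, in the explicit-proxy case (hostnames purely
> illustrative): the browser asks the proxy to open a tunnel to the https
> site, and the proxy, instead of opening the tunnel, answers the CONNECT
> itself with a redirect to its own login page:
>
>   C: CONNECT secure.example.com:443 HTTP/1.1
>      Host: secure.example.com:443
>
>   P: HTTP/1.1 302 Found
>      Location: http://gateway.corp.example/login
>
> It is that 302 that browsers now refuse to follow.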
>
> > I think you mean
> > clients do what they are supposed to do with https URLs - verify the
> > server's certificate, which generally prevents these captive portals from
> > MITM'ing the connection. I understand this causes problems for captive
> > portal vendors, but I don't think it's valid to complain that clients are
> > correctly implementing https.
>
> That's very well written, except browsers deliberately broke a
> widely-deployed use case and didn't propose anything to replace it (and
> no one can replace it without browser cooperation).
>
> You won't get any sympathy from all the proxy admins who have been
> yelled at these past years because some user got blocked while browsing
> an https web site and could not understand that the proxy had no way to
> get the f* browser to display the normal proxy auth page (the biggest
> prize goes to Firefox, which, in addition to blocking the proxy
> redirects, complains to the user that the proxy is blocking the
> connection and it does not know why).
>
> > I think captive portal vendors should come up
> > with a real proposal instead of relying on hacks.
>
> The proposal has been made many times in browser bug trackers. It's
> always basically:
> 1. web client requests a web page
> 2. gateway responds that the web client is not authorized (or no longer
> authorized) to access this url, and specifies the address of its
> authentication page
> 3. web client displays this address (if it's a dumb client like curl) or
> renders it (if it's a browser)
> 4. user authenticates
> 5. web client retries its first request and now it works
>
> Happiness ensues as the user gets their page, the admin is not yelled
> at, and corporate filtering is enforced. On the wire it could look like
> the sketch below.
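>
> A minimal wire sketch of steps 1-5 (status code and urls illustrative;
> RFC 6585's new "511 Network Authentication Required" code is one
> possible carrier for step 2):
>
>   C: GET http://www.example.com/page HTTP/1.1         (step 1)
>
>   G: HTTP/1.1 511 Network Authentication Required     (step 2)
>      Content-Type: text/html
>
>      <html><body>
>      Please <a href="https://gateway.corp.example/login">log in</a>.
>      </body></html>
>
>   [steps 3-4: the client shows or renders the login page and the user
>    authenticates against the gateway]
>
>   C: GET http://www.example.com/page HTTP/1.1         (step 5, retried)
>   G: HTTP/1.1 200 OK ...                              (now forwarded)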
>
> The only things that change between proposals are the error codes or
> field names; the basic design is always the same.
>
> And no, various "I'm on the internet" test pages do not work, since they
> map neither to the network topology nor to the filtering rules enforced
> by the gateway. In fact, such a page, once implemented, has a high
> chance of being summarily blacklisted, since browsers like to hammer it
> constantly for no good reason.
>
> --
> Nicolas Mailhot

Received on Friday, 6 April 2012 15:35:15 UTC