Re: multiplexing -- don't do it

On Sat, 7 April 2012 21:29, Jamie Lokier wrote:
> Nicolas Mailhot wrote:

>> The proposal has been made many times in browser bug trackers. It's always
>> basically:
>> 1. web client requests a web page
>> 2. gateway responds that the web client is not (or is no longer) authorized to
>> access this url, and specifies the address of its authentication page
>> 3. web client displays this address (if it's a dumb client like curl) or
>> renders it (if it's a browser)
>> 4. user authenticates
>> 5. web client retries its first request and now it works
>>
>> Happiness ensues as the user gets their page, the admin is not yelled at, and
>> corporate filtering is enforced.
>
> That's quite broken if the request is an AJAX update or something
> like that from an existing page on their browser, such as a page
> they've kept open from before, or resumed from a saved session, or as
> you say not authorized any more (presumably were earlier).

No, that's not broken at all; that's the only way it can work.

Please admit that on restricted networks, access to some external sites
requires authorization; that this authorization won't be eternal, for basic
security reasons; and that, due to hibernation/resume/client mobility/plain
equipment maintenance, this authorization will need to be acquired or
reacquired at any point in the web client's browsing. That means yes, you do
need to handle AJAX updates, mid-TLS interruptions, and all the difficult use
cases. The user is not going to oblige you by restricting himself to the
simple use cases when auth needs reacquiring.
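
To make the flow concrete, here is a minimal client-side sketch in Python. It
assumes, purely for illustration, that the gateway answers with a 511 Network
Authentication Required status and advertises its authentication page in a
Location header; neither of those choices is something this list has agreed
on, the sketch only walks through steps 1-5 quoted above.

import urllib.error
import urllib.request

def fetch(url, max_retries=1):
    """Request url; if the gateway demands authentication, show its auth
    page to the user, wait, then retry the original request."""
    for _ in range(max_retries + 1):
        try:
            # step 1: web client requests the page
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 511:
                raise  # some other failure, not a gateway auth demand
            # step 2: gateway says "not authorized" and names its auth page
            # (the Location header here is an assumption for this sketch)
            portal = err.headers.get("Location", "<unknown>")
            # step 3: a dumb client just displays the address
            print("Authentication required, please visit:", portal)
            # step 4: user authenticates out of band
            input("Press Enter once you have authenticated... ")
            # step 5: loop around and retry the first request
    raise RuntimeError("still unauthorized after authentication")

if __name__ == "__main__":
    print(fetch("http://example.com/")[:200])

A browser would do the same thing, except that step 3 renders the auth page
instead of printing its address, and step 5 replays the original request
(including an AJAX one) transparently.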

Because if web clients don't handle those cases, the gateway will always have
the option to block access. And make no mistake, it will and does exercise it.

The refusal to handle those cases so far has resulted in:
1. broken hotel/conference captive portals
2. widespread availability of TLS interception in proxy manufacturers' catalogs
3. corporations getting stuck on old, insecure browser versions because the
newer ones' 'security' hardening broke their proxies
4. corporations hand-patching newer browser releases to restore the old
'redirection on https works' behaviour

And in all those cases, who were the first to suffer? The users. If you polled
them, the vast majority would care *nothing* about the https cleanliness model,
privacy, etc. Not as long as that means they have a broken browsing experience
every single day.

A lot has been written here about the spectre of corporate overlords reading
their employees' messages, and the need to harden the website-to-user channel.
But you know what?

1. corporate overlords by and large do not care about their employees' browsing
as long as they are productive
2. if a rotten corporate apple had delusions about spying on its people, it
would use an external team, not its own network people (specialized criminal
shops are so much more discreet and effective than normal network people)
3. in all the cases I've seen so far where an employee was punished for
something he did on the net, the leak came from the web site, not the company's
network systems (why set up a complex interception system when users, even
supposedly sophisticated people like lulz, brag openly about their
misdemeanors on public social sites?)

-- 
Nicolas Mailhot

Received on Sunday, 8 April 2012 11:05:30 UTC