
Re: Do we kill the "Host:" header in HTTP/2 ?

From: Amos Jeffries <squid3@treenet.co.nz>
Date: Sat, 23 Feb 2013 23:52:08 +1300
Message-ID: <51289F58.1060008@treenet.co.nz>
To: ietf-http-wg@w3.org
On 23/02/2013 7:20 a.m., Nicolas Mailhot wrote:
> Nico Williams <nico@...> writes:
>
>> On Wed, Jan 30, 2013 at 3:31 PM, Adrien W. de Croy <adrien@...> wrote:
>>> from a proxy POV, it's very useful, nay vital that we can tell the
>>> difference between a request that a client thinks it is sending to a proxy,
>>> vs a request the client thinks it is sending to a server.
>>>
>>> [...]
>>> In fact for authentication, I would extend it to allow for the definition of
>>> the target of the auth.  If you have a request going through a chain, and
>>> several links require auth from the client, there's currently no way to do
>>> it safely.
>> There are separate headers for authentication to proxies.
> The current separate headers for authentication to proxies are a miserable
> failure which is impossible to use in all but the simplest cases:
>
> 1. they only work with a single intermediary. Want to chain networks with
> authentification gateways?  No can do. Someone forgot that the Internet was
> built by interconnecting distinct networks, and that those networks are not
> freely accessible except for entertainment junkies at home.

That someone seems to be you...

Case 1) End-user logging into an origin HTTP server.
   -> WWW-Authenticate challenge from the origin server.

Case 2) End-user logging into their ISP proxy.
   -> Proxy-Authorization header from end-user to ISP proxy.

Case 3) ISP proxy logging into its upstream ISP proxy.
   -> Proxy-Authorization header from ISP proxy to upstream.
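The three cases above can be sketched as separate header sets on a single request chain (the header names are the real HTTP ones; the credential values and the dict layout are purely illustrative):

```python
# Case 1: end-user credentials for the origin server (end-to-end).
# Sent in response to a 401 + WWW-Authenticate challenge.
end_to_end = {"Authorization": "Basic dXNlcjpzZWNyZXQ="}

# Case 2: end-user credentials for the first-hop ISP proxy (hop-by-hop).
# Sent in response to a 407 + Proxy-Authenticate challenge, and consumed there.
first_hop = {"Proxy-Authorization": "Basic dXNlcjpwcm94eXB3"}

# Case 3: the ISP proxy's own credentials for its upstream proxy.
# Added by the proxy on the next hop; the end-user never sees or holds these.
second_hop = {"Proxy-Authorization": "Basic aXNwOnVwc3RyZWFt"}

# The request the origin finally receives carries only the end-to-end header:
at_origin = dict(end_to_end)
```

No node in the chain ever has to handle a credential scoped to a different hop.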

No, I am not forgetting interception proxies. They are forbidden from 
dropping or performing authentication, because the credentials they 
receive are *always* destined for some upstream location.

*Never* does the end-user need to explicitly log into the Tier-1 proxy 
farm directly, nor does middleware of any type need to handle the origin 
server's account credentials. This is why Proxy-Authenticate is a 
hop-by-hop header while WWW-Authenticate is end-to-end: the separation 
is required for correct operation and security, and it works *perfectly 
well* for all of these purposes.
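That hop-by-hop rule is mechanical; a minimal sketch of what a conforming proxy does before forwarding a request upstream (header names follow the HTTP/1.1 specification; the function name and sample values are mine):

```python
# Hop-by-hop headers a proxy must not forward (per RFC 7230), plus anything
# the sender lists in its own Connection: header.
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailers", "transfer-encoding", "upgrade",
}

def strip_hop_by_hop(headers):
    """Return a copy of `headers` with hop-by-hop fields removed,
    so they are consumed at this hop rather than forwarded."""
    extra = {
        name.strip().lower()
        for name in headers.get("Connection", "").split(",")
        if name.strip()
    }
    return {
        k: v for k, v in headers.items()
        if k.lower() not in HOP_BY_HOP | extra
    }

incoming = {
    "Host": "example.com",
    "Authorization": "Basic dXNlcjpzZWNyZXQ=",        # end-to-end: forwarded
    "Proxy-Authorization": "Basic dXNlcjpwcm94eXB3",  # hop-by-hop: stays here
}
forwarded = strip_hop_by_hop(incoming)
```

The origin's credentials pass through untouched; the proxy's own never leave the hop they were meant for.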

If you were to propose a replacement architecture where these 
credentials were shareable, the security WG would turn around and 
require you to re-implement this separation of scope in some form or 
another. Look at what happened to the OAuth 2.0 effort to unify user 
credentials into a single item accepted around the world by all OAuth 
devices. We now have a multitude of alternative token types (Bearer, 
MAC, ...), split over several relay locations (headers, URLs, many entity 
formats, Cookies), multiplied by the separation of credentials from 
services, controllers, and assignation verifiers. The end result is a 
massive mess of confusion about interoperability, complexity, and which 
portion of the system holds the up-to-date token value when two parts 
contradict. Compare the thousands of lines of OAuth text needed to 
describe their "simple" system to the three-case HTTP architecture 
description above.



> 2. they are insecure: you can't send them securely to the intermediary without
> piggy-backing on the main transport crypto. Want to authenticate to the proxy to
> access an http site ? Best hope every link between the user and the intermediary
> is secure, because everyone will see your credentials.

If your Proxy-Authorization token is subject to decryption or replay, 
you face worse problems than others seeing what your token looks like. 
The counter-argument here is: use a better security token type.
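To make the point concrete: a Basic token is base64, not encryption, so any on-path observer recovers the password with one library call. This is exactly why the weakness lies in the token type, not in the Proxy-Auth mechanism itself (the sample credential below is made up):

```python
import base64

# A Basic credential as it appears on the wire in a Proxy-Authorization
# header. base64 is an encoding, not a cipher -- anyone who sees the
# header reverses it trivially.
token = "Basic " + base64.b64encode(b"user:secret").decode("ascii")

scheme, value = token.split(" ", 1)
recovered = base64.b64decode(value).decode("ascii")
# recovered is "user:secret" -- the plaintext credential.
```

A challenge-response scheme (or any token that is useless when replayed) removes that exposure without changing where the header travels.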

>   Want to access an https
> web site? Need MITM by the intermediary to be able to read your damn proxy
> credentials

This is what the 'S' in HTTPS is *for*, and *why* it sits at the end. 
If the HTTP portion were unencrypted, everybody on the link could see 
what URL you are requesting as well. In the traffic HTTPS was intended 
for, the URL is just as much sensitive data as the user credentials.
If the HTTP transport/transfer details were not security-critical we 
would be talking about shttp:// , which it appears nobody in the browser 
arena is willing to implement despite years of lobbying by middleware 
authors.

Oh, and the CONNECT wrapper headers are where the Proxy-Authorization 
credentials are supposed to be sent for proxied HTTPS traffic.
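That is, the credential rides on the CONNECT request itself, outside the TLS tunnel, so the proxy can read it while the https:// exchange inside stays end-to-end encrypted. A sketch of the wire format (host, port, and token are example values; the helper function is mine):

```python
def build_connect(host, port, proxy_token):
    """Build the raw CONNECT request a client sends to its proxy
    before opening a TLS tunnel to host:port. The Proxy-Authorization
    header is visible to (and consumed by) the proxy only."""
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        f"Proxy-Authorization: {proxy_token}\r\n"
        "\r\n"
    ).encode("ascii")

req = build_connect("example.com", 443, "Basic dXNlcjpwcm94eXB3")
```

Everything after the proxy's "200 Connection established" response is opaque ciphertext to it; only the tunnel endpoints matter to the proxy, never the origin credentials inside.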


> 3. since web clients do not really have a way to send them to the right network
> node at the right time many just spread-fire proxy auth on every request in the
> hope they'll get picked up whenever necessary. Including when they are not using
> a proxified link (so it works all the time, at the expense of giving proxy
> secrets to every random web site).

You mean clients are not aware of which service they are talking to at 
the other end of a TCP connection? That is plain wrong.

Notice how I wrote "service" above instead of "node". One node can be 
multiple services, or one service on multiple nodes.

There is no need for HTTP proxy credentials to be sent to a TCP router 
node, for example. If logging into routers were required to use them, 
TCP would contain an authentication model.

> HTTP1 proxy auth is too broken to be rescued. It should be taken in the backyard
> and quietly shot in the head. I do hope HTTP2 will make things better.
>
> (HTTP 511 is a bit better, but relies on js support in the web clients)
>
> -- 
> Nicolas Mailhot
>
>
>
Received on Saturday, 23 February 2013 10:52:36 GMT
