Response to HTTP/2.0 expressions of interest

This is a personal expression of interest in HTTP 2.0; it may or may
not reflect the opinion of my employer.

Although my recent interest has been focused on security, I have been
involved in the development of HTTP since 1992. Some observations.

1) Deployability

Defining a new capability is not the hard part. We have had great
schemes for multiplexing and security for decades. The devil is in the
deployment. I rather liked Henryk's scheme and Rohit's and really
liked a few of mine.

Any new capability has to find a niche that can attract some
self-supporting early adopters. Deploying new capabilities in browsers
is hard because everyone has to think about the legacy. Web services
are a much easier area in which to get traction, as new web services
are being developed every week and the people deploying them can and
will choose the platforms that best meet their needs.

So I would strongly urge people to think about solving at least some
of the problems of the Web Services world as that is the best bet we
have to find a killer app for HTTP 2.0. If the code is used for web
services it will get pulled along into the browser chain much faster.


2) Code budget

Before going anywhere, ask how many bytes the browsers might commit to
these proposals. I would be very surprised if they would allow more
than 100KB for the whole of HTTP/2.0.

Ergo anyone proposing to integrate their favorite API with a 500KB DLL
is pushing a total non-starter. In fact I think that any scheme that
cannot be implemented in fifty or so pages of self-contained code is a
non-starter.


3) Boat Anchors

This is not an opportunity for people to get the HTTP world to pull
through deployment of a standard that they have been trying to get
traction with for 20 years. This is called getting someone else to
carry your boat anchor. I have seen the strategy tried time and again
by people who didn't think through the deployment issue.

GSS-API was considered in 1995 and found to be too big, slow and
difficult to understand. I don't think it should be reconsidered
unless it has been drastically reduced in size since. I believe the
opposite to be the case.


4) Frameworks

HTTP is a framework and frameworks should not be built on other frameworks.

Making Kerberos or NTLM or SAML or whatever work better with HTTP is
fine provided that the framework specific code is no more than a
header or two and preferably even less.

Making support for Kerberos, SAML, or whatever boat anchor you want to
name a requirement of HTTP 2.0 is a non-starter.

And yes, I am talking about GSS-API again.


5) Complexity

Complexity is the enemy of security. TLS, GSS-API, and IPsec all have
far too many moving parts for anyone to be 100% confident about their
security.

5a) The TLS-HTTP gap

Now as far as HTTP is concerned, headers have security implications,
and so HTTP is not going to be acceptably secure without either
transport layer or packet layer security. Since IPsec never got its
key exchange act together, that leaves TLS as the only game in town
for that.

TLS itself is solid but when you try to work out the interaction
between HTTP and TLS, well it just isn't possible to be 100% confident
that nothing falls through the cracks.

My conclusion is that going forward we should plan on the basis of
expecting cryptographic security at the HTTP layer AND the Transport
layer and expect them to provide different controls.

In 1995 even symmetric cryptography was expensive and the idea of
using more than one layer when you didn't need to was quite an
overhead. Modern machines are more than capable of supporting belt and
braces security.

5b) The HTTP-HTML gap

Another place that security breaks down is in the interface between
HTTP and HTML. In particular, sending passwords en clair is really an
HTML issue rather than an HTTP issue.


6) Authentication

I have been involved in many authentication schemes over the years. I
don't think any of them is 100% right. But I would urge that we NOT
use SAML or OAuth authentication as the basis for HTTP/2.0.

The reason for this is that the main design constraint in SAML, OAuth,
OpenID, or any other scheme you care to name was how to work around
the legacy browser and server infrastructure, which HTTP 2.0 can help
us remove.

I think one of the reasons for confusion is that there are actually
three distinct processes that are referred to as 'authentication'.
Another is that the authentication schemes on offer have tended to
punt on the idea of a federated identifier space, because the various
commercial interests backing them all rather fancied themselves as
being in charge of it.

This is the Internet, so we use DNS as the federated naming
repository. Holders of DNS names will then issue accounts under them.
Ergo an Internet account name will be normalized as
username@domain.tld, end of story (OK, give it a URN:
acct:username@domain.tld). If people want to do XRI or OSI or
whatever, that is outside core HTTP. The concept of vanity crypto is
well understood in the security area. Vanity namespaces deserve the
same derision.

In this particular scheme a local, non federated name is simply an
account with an implicit domain component of localhost, .local or
whatever. If we want to integrate Windows domain accounts into this
scheme we would use either the DNS name of the domain controller as
the account name or reserve _windows.local or the like for the
purpose.
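
A minimal sketch of the normalization rule described above, covering
both the federated and the implicit-local cases (the function name and
the default_domain parameter are mine, purely illustrative):

    # Illustrative sketch only: normalize an account identifier to the
    # acct: form described above. Nothing here is a concrete proposal.

    def normalize_account(name, default_domain="localhost"):
        # Normalize 'user@domain.tld' (or a bare local name) to an acct: URI.
        name = name.strip()
        if "@" in name:
            user, domain = name.rsplit("@", 1)
        else:
            # Local, non-federated account: give it an implicit domain.
            user, domain = name, default_domain
        # Domains are case-insensitive; whether to fold the local part
        # is a site policy question, so it is left alone here.
        return "acct:%s@%s" % (user, domain.lower())

    assert normalize_account("alice@Example.COM") == "acct:alice@example.com"
    assert normalize_account("alice") == "acct:alice@localhost"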


The processes I see are:

1) Validation: Is the holder of account name 'x' the authoritative
holder of that account name?

    e.g. to validate 'alice@example.com' we might:
        * Send an email callback challenge
        * Check a service such as OAuth or OpenID supported by example.com
        * Run a PKI protocol based on a CA-issued certificate issued to 'alice@example.com'
        * Do nothing at all
        * Other TBS

So in this scheme the only identifiers are implicitly federated as
they all contain domain names. We address the real world corner cases
by accepting that there are degrees of validation ranging from strong
(a statement coming from the authoritative name holder) to
non-existent (don't check the name at all) and by allowing for the use
of the parts of the federated DNS namespace that do not guarantee a
unique authoritative nameholder.
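
To make the point concrete, a rough sketch of what recording degrees
of validation might look like (the enum, its levels and the helper are
mine, purely illustrative, not a proposal):

    # Hypothetical sketch: validation is site policy, so all a protocol
    # really needs is a record of how strongly (if at all) an account
    # name was checked. Levels and names are invented for illustration.

    from enum import IntEnum

    class Validation(IntEnum):
        NONE = 0            # name not checked at all
        EMAIL_CALLBACK = 1  # answered an email callback challenge
        FEDERATED = 2       # vouched for by a service the domain runs (OAuth, OpenID, ...)
        PKI = 3             # proved possession of a key certified for the name

    def strongest(claims):
        # Pick the strongest validation claim available for an account.
        return max(claims, default=Validation.NONE)

    print(strongest([Validation.EMAIL_CALLBACK, Validation.PKI]))  # Validation.PKI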


2) Initial-Authentication: A user authenticates themselves to a
destination on a particular device.

This is what is generally considered as 'authentication' today. To log
in to battle.net I provide my username and password. Then I can go
beat up daemons.

Now one of the reasons passwords are so hard to get rid of is that
they are a very simple and convenient means of enabling account
portability from one device to another. A separate hardware token like
the OTP token battle.net sells or a USB key is nowhere near as
convenient as just a password. And as for the idea of moving
certificates from one machine to another...

This is currently done at the HTML layer and not in HTTP.

If we are going to replace passwords we have to provide a scheme that
is at least as convenient as passwords. I am not getting a battle.net
token and I was one of the people who wrote OATH. The only approach
that is as convenient for the user is to leverage some cloud based
service.


3) Re-Authentication: Having performed an initial authentication, a
stored credential is used to authenticate additional transactions.

This is a part that is currently performed by HTTP, albeit using
cookies, which provide only a weak cryptographic binding to the
channel and give rise to all sorts of privacy horrors.

A cryptographic scheme would mean the client holding a per-session
ticket (50 bytes), a secret key and binding parameters, with the
client and server performing some form of simple mutual authentication
on each transaction. This is actually quite easy and fast if you use
symmetric keys.
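
A back-of-the-envelope sketch of what I have in mind, an HMAC over
each transaction keyed with the session secret (the ticket, field
layout and function names are invented for illustration, not a
proposal):

    # Illustrative only: per-transaction authentication using a symmetric
    # session key established at initial authentication.

    import hmac, hashlib, os

    def client_authenticator(ticket, session_key, method, uri, body, nonce):
        # The client proves possession of the session key for this transaction.
        msg = b"\n".join([ticket, method, uri, nonce, hashlib.sha256(body).digest()])
        return hmac.new(session_key, msg, hashlib.sha256).hexdigest()

    def server_verify(ticket, session_key, method, uri, body, nonce, mac):
        # The server looks the session key up by ticket, then checks the MAC.
        expected = client_authenticator(ticket, session_key, method, uri, body, nonce)
        return hmac.compare_digest(expected, mac)

    # For mutual authentication the server would return a MAC over its
    # response computed the same way with the same session key.
    ticket, key, nonce = b"opaque-session-ticket", os.urandom(32), os.urandom(16)
    mac = client_authenticator(ticket, key, b"GET", b"/inbox", b"", nonce)
    assert server_verify(ticket, key, b"GET", b"/inbox", b"", nonce, mac)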


If you look at the problems involved you will see that there is a lot
of real-world variation in approaches to the first, rather less on the
second and practically none on the third.

We can define a very simple mechanism for re-authentication in HTTP
2.0 that can then be used by any of the authentication frameworks that
care to use it. That approach can even be made to support features
like global sign-off in a framework-neutral way.

There may be value in addressing at least some parts of initial
authentication in HTTP 2.0, but this needs to be addressed with a
careful eye on the code budget; I will return to that in another post.
There are quite a few moving parts. There does need to be a bridge
from the initial authentication to the re-authentication, and none of
the legacy mechanisms is going to be satisfactory as they were all
constrained by HTTP 1.1. The justification for redoing this in HTTP
2.0 is that we have the opportunity to put the crypto where it will do
us the most good.

The first part, validation, should be left to site policy and be out of scope.

-- 
Website: http://hallambaker.com/
