Re: Content security model

On Thu, Jul 26, 2012 at 1:11 AM, Benjamin Carlyle <fuzzybsc@gmail.com> wrote:
>
> On Jul 26, 2012 3:03 AM, "Phillip Hallam-Baker" <hallam@gmail.com> wrote:
>> I have been thinking a lot about Web Services and how their security
>> needs should be met in HTTP/2.0. I came up with some thoughts that
>> might provoke discussion:
>> 0) The killer application for HTTP/2.0 (if any) is going to be Web
>> Services. Anyone supporting a Web server is going to have to support
>> /1.1 for decades to come. A new Web Service can stipulate HTTP/2.0
>> transport as a requirement.
>
> Here's a crazy thought. Why don't we put the knives away for a moment and
> consider whether the new version of http could line up a little closer with
> soap, if not necessarily in syntax then perhaps at least a little in its
> model? Currently the structure of a http message is approximately a method
> and uri or status code, plus key/value store of headers that each have
> additional per header syntax and data model, plus an optional body.
> Ignoring the whole of soap for a moment if that were merely translatable to
> an xml info set then you could pick up xml canonicalization, signatures, and
> encryption for relatively little standards work.

Nothing involving XML Signature is remotely simple, and I was one of
the people who worked on it. I was also an editor of WS-Security.

What I am trying to do with this proposal is to provide the capability
that WS-Security should have provided if the XML nonsense had not got
in the way.


> It might even be possible
> to consider unifying soap and http at this protocol level so we no longer
> need to wrap one in the other in order to cross between systems.

That is the idea.

What makes XML Signature hard is canonicalization combined with the
fact that the signature has to sit in the middle of the data it signs,
thus bringing in transforms.

Put the signature in an HTTP header and pretty much everything that is
hard about XML Signature goes away.
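
To make that concrete, here is a minimal sketch in Python. The
'Integrity' header syntax is made up for illustration; the point is
that when you sign the body octets exactly as they appear on the wire,
there is no canonicalization and no transform step left to specify:

import base64, hashlib, hmac

def integrity_header(key: bytes, body: bytes) -> str:
    # MAC over the raw body octets: the bytes signed are the bytes sent.
    mac = hmac.new(key, body, hashlib.sha256).digest()
    return "Integrity: body; mac=" + base64.b64encode(mac).decode()

def verify(key: bytes, body: bytes, header: str) -> bool:
    # The receiver recomputes the check over the received octets;
    # constant-time comparison avoids a timing side channel.
    return hmac.compare_digest(integrity_header(key, body), header)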


> I know the soap and http cultures have some distance between them, and I
> know that there are technical difficulties as well as dangers of polluting
> nice pure architectural style, but it seems to me that in the run up
> to a new major version as important as http, with a rollout that will
> take decades to complete, making peace between the political and
> technical needs of a few warring factions should not be too big an
> issue to consider biting off.
>
> Maybe I'm crazy and/or naive :)

We could look at DIME and my DIME Security proposal :-)


>> 3) HTTP security controls should only secure content. Signing headers
>> is not only difficult, it is often counterproductive. If a Web service
>> depends on information in a header there is probably something wrong.
>
> I don't see a point in securing a request message unless its recipient can
> authenticate uri, headers and body as being unchanged from when they were
> authentically written on behalf of the user, with possible additional
> annotations from intermediaries. I don't see the point in securing content
> returned from a server unless you are also securing cache control
> information that tells me it is still valid and is not some kind of replay
> attack, and again various header information is relevant and worth securing.

Web Services are layered on top of HTTP just like HTTP is layered on
TLS to produce HTTPS. TLS does not secure the TCP headers, so why
should web services require the HTTP routing layer to be secure?

IPsec tries to secure the IP header and it turned out to be a major
protocol blunder. They secured the source and destination IP addresses,
and so the protocol didn't work through NAT, which at this point is the
normal situation for an Internet connection. The designers did that
deliberately to stop the spread of NAT. I remember sitting in the
meeting listening to the rants against the evils of NAT thinking to
myself 'Time Warner wants $10/month per IP address, sod that for a
lark, I have 15 machines and in any case they will only allow 4'.

Ideological commitments such as XML Canonicalization and NAT-busting,
maintained in the face of empirical evidence that they don't work, are
the reason that we end up having to do stuff over and over again.
Which in turn is the reason that it is important to make sure that the
ideological commitment that ruined the first spec does not get carried
over to the next. We don't really need JSON encoding; we could do
perfectly well by choosing a rational subset of XML and dropping all
the SGML nonsense that has no place in a web service type transport.
But the political difficulty of rationalizing XML is greater than the
political difficulty of just rebuilding the spec from scratch
somewhere that the ideology can't be re-injected.


The reason that there is an issue is that HTTP has the operation and
the content metadata sprinkled in among the routing headers. This
gives us three parts of the message we might need to sign:

* Method + URI
* Content Metadata headers
* The Body

Signing the URI is in turn a little tricky, as in a Web service it is
quite often the case that some parts of the URI are actually routing
and other parts are parameters to the Web service.

For example, a Web service "Foo" with command "Command" and parameters
"p1=v1.." might be specified as:

http://www.example.com/Foo/Command?p1=v1&p2=v2
http://www.example.com/Foo?cmd=Command&p1=v1&p2=v2

Now let us introduce an intermediary that reroutes the service to a
back end processor that supports versions 1 through 5:

http://foo.example.com/v1/Command?p1=v1&p2=v2
http://foo.example.com/v1?cmd=Command&p1=v1&p2=v2

This is all very standard stuff, but how does the signer know which
part of the URI is routing and which is command? In the first example
part of the URI stem is security critical; in the second only the
parameters are security critical.
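
One way to make the split explicit is for the signer to declare how
much of the URI it considers security critical. A sketch in Python
(the trailing-suffix convention here is hypothetical, purely to show
that the rewritten routing prefix drops out of the signed data):

from urllib.parse import urlsplit

def signed_uri_suffix(uri: str, n: int) -> str:
    # n = number of trailing characters of path + query that the signer
    # declares to be command/parameters rather than routing.
    parts = urlsplit(uri)
    target = parts.path + ("?" + parts.query if parts.query else "")
    return target[-n:]

# Both the original and the rerouted form keep the same signed suffix:
suffix = "/Command?p1=v1&p2=v2"
a = "http://www.example.com/Foo/Command?p1=v1&p2=v2"
b = "http://foo.example.com/v1/Command?p1=v1&p2=v2"
assert signed_uri_suffix(a, len(suffix)) == signed_uri_suffix(b, len(suffix))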


There is always a tension between how much legacy you want to support
and making the spec clean and simple. My personal preference would be
to tell people that if they want to use the integrity features then
they have to design their Web service around the particular
constraints it imposes. But that is an ideological position I probably
can't win, so I would have to fall back to something like the following:

* Each integrity header contains integrity check(s) for exactly one
range of data.
* The header specifies which parts of the message it covers; these may
be the body, the content metadata headers, or the method and (part of)
the canonicalized URI.

So if we have a message:

GET /foo/part HTTP/1.1
Host: Foobar
Content-Type: text/html

We might specify an integrity check on just the body, just the
metadata, or both, as follows:

Integrity: body ; ...
Integrity: meta ; ...
Integrity: body ; meta ; ...

If we want to sign the method line we might sign the whole thing
(including the Host part) or just the method and '/part' as follows:

Integrity: method;
Integrity: method=5;

The count in 'method=5' is the length of the URI suffix that is in the
scope of the signature.
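
As a sketch of that counting rule (the header syntax and names here
are placeholders, not a worked-out spec), the string covered by the
check would be built like this:

def method_scope(method: str, target: str, n=None) -> str:
    # n is the suffix count from "Integrity: method=N"; n=None means
    # the whole request target is in scope (the Host part would be
    # folded in for that case too; omitted to keep the sketch short).
    covered = target if n is None else target[-n:]
    return method + " " + covered

assert method_scope("GET", "/foo/part", 5) == "GET /part"
assert method_scope("GET", "/foo/part") == "GET /foo/part"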


While playing with this scheme last night, I found that it is actually
rather nice to have multiple integrity checks, even if you are using
'fast' symmetric keys, as I often do.

Signing content is a pain: for a start, you have to buffer the whole
message before you can start work. That is not a major hassle when you
are dealing with messages of a few hundred bytes, as I usually am.
But one of the projects I want to work on, now that I have a working
protocol compiler, is to allow me to upload photos from my digital
camera to my server. Since I shoot RAW, each of my pictures is 10 MB.
And I want a D800, which would make it even worse.

This is the type of application where REST style makes a lot of sense.
Put the commands in the method line and you can use the whole of the
body for the content being operated on. Separate integrity checks for
each mean that a bogus request can be rejected before you have
accepted 10 MB of data.
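
Server side, that ordering is easy to sketch. Everything here is
hypothetical: the request object, the header names, and check_mac,
which stands in for whatever check the Integrity header specifies:

import hashlib, hmac

def check_mac(key, data, header_value):
    # Stand-in for whatever check the Integrity header specifies.
    mac = hmac.new(key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, header_value)

def handle(request, key):
    # 1. Verify the cheap check over the method line first, so a bogus
    #    request is rejected before any body bytes are accepted.
    if not check_mac(key, request.method_line,
                     request.headers["Integrity-Method"]):
        return 401

    # 2. Only now stream the body in, hashing chunks as they arrive,
    #    and compare against the body check once the upload completes.
    digest = hashlib.sha256()
    for chunk in request.iter_body():
        digest.update(chunk)
    if not check_mac(key, digest.digest(),
                     request.headers["Integrity-Body"]):
        return 400
    return 200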

[And no, I don't think a stateless protocol is the way to move data
objects that size. I am going to be implementing restart and such, or
the new scheme is going to be as horrible to use in practice as the
EyeFi is. I also shoot video, and those come in 2 GB chunks.]

>> From these I draw the following conclusions:
>> * HTTP 2.0 should draw a distinction between routing headers and
>> content meta-data
>> * HTTP encryption and authentication are necessary independent of TLS
>> support
>
> It seems to me we shouldn't need two solutions to any given problem.

Transport-level and message-level security are two different beasts.
For a start, you cannot get non-repudiation at the transport layer.
Message-level security requires the involvement of the application.
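
To illustrate the difference, a minimal sketch using Ed25519 from the
pyca/cryptography package: a message-level signature is a durable
artifact that any third party holding the public key can verify long
after the connection is gone, which no transport-layer session gives
you.

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey)

key = Ed25519PrivateKey.generate()
message = b"POST /foo ... body octets ..."
sig = key.sign(message)

# The signature can be stored alongside the message and checked years
# later; a TLS session key is discarded when the connection closes and
# proves nothing to anyone afterwards.
key.public_key().verify(sig, message)  # raises InvalidSignature on failure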

> Soap's
> tendency to use custom methods, custom headers, and custom media types
> (i.e. custom schemas) is not something we want or need on the Web, but I
> find it a shame to be heading in different directions on basic important
> things like signatures and encryption of messages.

I think some of the SOAP stuff was designed for folk selling SOAP
platforms and the consulting services necessary to make them work.
Only it didn't really take off, because the whole edifice was too
complex for most people to want to bother with, and so we have a
fracture in the Web Services world between the REST camp and the SOAP
camp.

What I want to do is to make the division moot by arming the JSON/REST
camp with the same tools that the SOAP camp have always relied on :-)

> In the end http and soap at the
> message structure level have but a hair's difference between them. It
> seems, naively, to me that if http were to take a step towards soap at
> that basic message structure level then other standardisation problems
> might become simpler to solve, and that the shared solution could
> facilitate an unwrapping of soap messages to be sent potentially as
> first class citizens on the web... Although problems with WS-Addressing
> might be the first hurdle to cross.

Well SOAP really has its origins in COM. To understand SOAP
architecture you need to think about the problem 'how do I expose my
COM service as a Web Service?'. That is why it needs all that
mechanism within mechanism, and that is perfectly fine if your problem
is how to expose an existing COM service as a Web Service.

SOAP does not end up delivering value when you are designing a Web
Service protocol from scratch.

-- 
Website: http://hallambaker.com/

Received on Thursday, 26 July 2012 15:19:07 UTC