Re: Content security model

On 26.07.2012 04:59, Phillip Hallam-Baker wrote:
> I have been thinking a lot about Web Services and how their security
> needs should be met in HTTP/2.0. I came up with some thoughts that
> might provoke discussion:
>
> 0) The killer application for HTTP/2.0 (if any) is going to be Web
> Services. Anyone supporting a Web server is going to have to support
> /1.1 for decades to come. A new Web Service can stipulate HTTP/2.0
> transport as a requirement.
>

We don't *need* any "killer app" unless we break the protocol semantics 
or mandate features that require a lot of administrative intervention 
to deploy 2.0. Seamless gatewaying between 2.0 and 1.1 is a requirement 
- precisely to avoid that type of change.

The existing players who have already fronted up and expressed interest 
in upgrading should be all it takes to get the ball rolling.

> 1) TLS security is good for what it does, but it is still transport
> security and will therefore be hop-by-hop as far as intermediaries are
> concerned. This gives rise to the need for security features in HTTP
> that are orthogonal to TLS and can survive passage through an
> intermediary.

+1000. Well put.

>
> 2) Multiplexing changes the nature of Web Services security and allows
> for a significant simplification. Under HTTP/1.1 a Web Service that
> takes a message from A and forwards it to B has to modify the content
> if it is going to provide additional information in the transaction.
> This constraint no longer applies under multiplexing.
>

This does not follow. A web service which is designed with transport 
details embedded in the content is going to face content-modification 
trouble no matter what we do. Unless you are counting the HTTP headers 
as "content" somehow.
  The cases of Location, Content-Location, and services trying to do 
NAT-like things with URL domain names are a foul smell unrelated to 
multiplexing. I can see the flow controls needed for multiplexing 
leading to better efficiency in these systems, but not removing that 
behaviour entirely.
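
As a rough illustration (Python, with hypothetical host names and a 
made-up helper, not anything from a draft): a gateway doing that 
NAT-like rewriting has to touch both headers and body, which is exactly 
what defeats end-to-end protection:

  INTERNAL_HOST = "backend.internal"   # assumed internal origin name
  PUBLIC_HOST = "www.example.com"      # assumed public-facing name

  def rewrite_response(headers, body):
      """Rewrite transport details which leaked into headers and body."""
      for name in ("Location", "Content-Location"):
          if name in headers:
              headers[name] = headers[name].replace(INTERNAL_HOST,
                                                    PUBLIC_HOST)
      # When the service embeds its own hostname in the body the gateway
      # must modify the content too, breaking any end-to-end signature.
      return headers, body.replace(INTERNAL_HOST.encode(),
                                   PUBLIC_HOST.encode())

No amount of multiplexing changes this; the content itself names the 
wrong host.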


> 3) HTTP security controls should only secure content. Signing headers
> is not only difficult, it is often counterproductive. If a Web service
> depends on information in a header there is probably something wrong.
>
>
> This last one might seem a little controversial, after all, doesn't
> DKIM have a mechanism for header signing? DKIM does have a header
> signing mechanism but the headers being signed should probably have
> been content in the first place. The subject line, From, To, these are
> all message content. They are part of the headers because content and
> routing are conflated in SMTP email.
>
> HTTP does have a similar conflation, but nowhere near as severe.
> Content is mostly confined to the body and Routing is strictly
> confined to the Head. The parts that cross the line are
> Content-Encoding and Content-Type. Both of which are ignored in a Web
> Services context almost all the time. Yes, a Web Service could
> support multiple character encodings but I cannot see any case where I
> would want the service to use Content-Encoding to make the choice.


The 1.1 drafts were onto a good thing by separating and identifying 
which headers are "content" metadata and which are "transport" 
metadata.
  If we enshrine that difference in HTTP/2 (different frame sections, 
whatever) we leave ourselves the option of signing the content headers 
against manipulation, while leaving the transport group open for 
intermediary routing etc.
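
A minimal sketch of what signing just the content group could look like 
(Python; the choice of which header names count as "content" and the 
canonical form are my assumptions, not anything the drafts define):

  import hashlib
  import hmac

  # Assumed split; HTTP/2 would have to pin this down normatively.
  CONTENT_HEADERS = {"content-type", "content-encoding",
                     "content-language"}

  def sign_content(headers, body, key):
      """Sign content metadata plus body; transport headers stay open."""
      content = {k.lower(): v for k, v in headers.items()
                 if k.lower() in CONTENT_HEADERS}
      # Canonical form: sorted "name:value" lines, then the body.
      canon = "".join("%s:%s\n" % kv for kv in sorted(content.items()))
      return hmac.new(key, canon.encode() + body,
                      hashlib.sha256).hexdigest()

An intermediary can then add Via, rewrite routing headers, and so on 
without invalidating the signature, because none of that is covered by 
it.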

We took this for a theoretical test-drive in the network-friendly-00 
framing design. The headers which are repeated per-request are usually 
the content headers, while the transport headers are aggregated across 
requests into the transport or common frames. Embedding message signing 
into this framing system would mean sending the non-signed headers in a 
common frame ahead of the signed request, then signing the follow-up 
request-frame and entity-frames in that request's flow. Encrypted or 
signed responses could operate the same way in their flow.
   In theory the trade-off is a small drop in efficiency, as common 
frames change from being strictly common to carrying unsigned unique 
values and needing constant updates. That small drop seems a good 
tradeoff against blanketing an entire connection with TLS overheads.
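
To make the frame ordering concrete, a rough sketch (Python; the frame 
names and the placement of the signature are assumptions on my part, 
not the network-friendly-00 wire format):

  import hashlib
  import hmac

  def build_frames(transport_hdrs, request_hdrs, entity, key):
      # Only the request and entity frames are covered by the signature;
      # the common frame stays open for intermediaries to update.
      sig = hmac.new(key, request_hdrs + entity, hashlib.sha256).digest()
      return [
          ("COMMON", transport_hdrs),   # unsigned, mutable in transit
          ("REQUEST", request_hdrs),    # covered by the signature
          ("ENTITY", entity),           # covered by the signature
          ("SIGNATURE", sig),           # exact placement is an open question
      ]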

>
> From these I draw the following conclusions:
>
> * HTTP 2.0 should draw a distinction between routing headers and
> content meta-data
> * HTTP encryption and authentication are necessary independent of TLS
> support

+1000.

Amos
