
Re: Security Requirements for HTTP, draft -00

From: Adrien de Croy <adrien@qbik.com>
Date: Tue, 29 Jan 2008 12:18:49 +1300
Message-ID: <479E62D9.6050006@qbik.com>
To: 'HTTP Working Group' <ietf-http-wg@w3.org>

my 2c

Paul Leach wrote:
> Here are some comments:
> WRT: "2.1.  Forms And Cookies
>    Almost all HTTP authentication is accomplished through HTML forms,
>    with session keys stored in cookies."
I think it does little good to dwell on whether one method of credential
verification is more prevalent than another.  In my experience, all of
these are very common: session-based auth such as NTLM is very common
for proxy auth, or for auth to a corporate intranet webserver, while
forms + cookies are prevalent on public webservers.

None of these scenarios can safely be ignored.

> WRT: "2.2.2.  Digest Authentication
>         ...
>    Additionally, implementation experience has shown that
>    the message integrity mode is impractical because it requires servers
>    to analyze the full request before determining whether the client
>    knows the shared secret."
> Could you elaborate? The purpose of integrity protection isn't simply to determine if the client knows the shared secret, it is to insure that no MITM can modify the integrity protected data. This intrinsically requires that all integrity protected data be examined. Hence, the above statement seems to really amount to the claim that integrity protection is too expensive to be practical. However, it isn't any more expensive than TLS, and TLS is used pretty widely.
It's quite different from TLS.  With TLS, you decrypt blocks as they
arrive and can pass the decrypted data to upstream processes (e.g.
CGI); the raw data is available essentially as it is received.  With full
message integrity, you need to buffer the entire message (which may
never end, e.g. streaming media over HTTP) before you can verify the
signature to ensure it hasn't been tampered with.  Sure, both may have a
similar crypto computational cost, but the buffering and memory
management cost is wildly different for large content bodies, and in
some cases it's simply not practical with current methods.
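To make the contrast concrete, here is a minimal sketch (the function
names are illustrative, not from any HTTP library; MD5 stands in for
whatever digest the scheme uses).  The TLS-style path hands each chunk
upstream as it arrives, while the whole-message-integrity path must hold
the entire body in memory before a single byte can be trusted:

```python
import hashlib

def stream_like_tls(chunks, deliver):
    # TLS decrypts record by record; each chunk can go upstream
    # immediately (e.g. to CGI), so memory use stays bounded at
    # roughly one chunk regardless of content length.
    for chunk in chunks:
        deliver(chunk)

def buffer_for_integrity(chunks, expected_digest, deliver):
    # Whole-message integrity: the signature covers the full body,
    # so everything must be buffered before any of it can be
    # trusted upstream.  Memory grows with content length, and if
    # the body never ends (streaming media), delivery never happens.
    body = b"".join(chunks)
    if hashlib.md5(body).hexdigest() != expected_digest:
        raise ValueError("integrity check failed")
    deliver(body)
```

The crypto work is comparable in both paths; it is the second path's
buffering requirement that becomes the problem for large or unbounded
bodies.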

Overall, I think it's quite an interesting discussion document; I would
be interested to hear how people perceive the protocol itself moving
longer term.  Some of the issues HTTP has that other protocols (e.g.
SMTP) don't, wrt authentication (and other things), stem from its
fundamental design/structure and how it has evolved.  HTTP was initially
designed to connect, make a request, get a result and disconnect.  That
leaves no room for a challenge-response auth scheme until you move to
persistent connections.  Even then, there is little per-connection
protocol overhead dedicated to setup, except perhaps TLS certificates
etc.  Auth is lumped in with requests rather than being dealt with by
itself.
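The "auth lumped in with requests" point is easy to see in Digest: the
client can't compute its response until the server's 401 has supplied
the realm and nonce, so every scheme of this shape costs an extra round
trip (or a persistent connection).  A hedged sketch of the RFC 2617
response computation, with the qop="auth" extensions omitted for
brevity:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, password, realm, nonce, method, uri):
    # Simplified RFC 2617 Digest (no qop/cnonce): realm and nonce
    # only arrive in the server's 401 challenge, so this cannot be
    # computed on the first request.
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # secret + challenge realm
    ha2 = md5_hex(f"{method}:{uri}")              # the request being signed
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

Session-based schemes like NTLM go further still, tying the handshake
to the connection itself, which is exactly where the original
connect/request/disconnect model gives no footing.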

Some issues may point to a new overall protocol structure (e.g.
pre-negotiation of transfers of large entities in either direction), but
compatibility issues will exert pressure against change.  Is there a
long-term goal for the protocol?
Received on Monday, 28 January 2008 23:17:58 UTC
