
Re: fyi: Strict Transport Security specification

From: Adam Barth <w3c@adambarth.com>
Date: Mon, 21 Sep 2009 08:51:48 -0700
Message-ID: <7789133a0909210851m3dca1b8fj4de838ac3bba47a8@mail.gmail.com>
To: Aryeh Gregor <Simetrical+w3c@gmail.com>
Cc: "=JeffH" <Jeff.Hodges@kingsmountain.com>, public-webapps@w3.org
On Mon, Sep 21, 2009 at 8:31 AM, Aryeh Gregor <Simetrical+w3c@gmail.com> wrote:
> Is it true that UAs MUST NOT note a server as a Known STS Server if
> the Strict-Transport-Security header was received over an unsecured
> request, or there were underlying secure transport errors?

That's correct.  We should probably add this requirement to the spec
explicitly if it's not already there.
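A rough sketch of that rule, in illustrative Python (the names here are made up for explanation; they're not from the spec):

```python
# Hypothetical sketch of the UA-side rule: only note a host as a
# Known STS Server when the Strict-Transport-Security header arrived
# over a secure transport that completed without errors.

known_sts_servers = set()

def process_sts_header(host, header_present, is_secure_transport, transport_errors):
    """Record `host` as a Known STS Server only under the stated conditions."""
    if not header_present:
        return False
    if not is_secure_transport or transport_errors:
        # MUST NOT note the server: the header came over plain HTTP, or
        # the secure transport had errors (untrusted root, expired cert, ...).
        return False
    known_sts_servers.add(host)
    return True
```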

> "Underlying secure transport error" could use definition -- I'd
> imagine it would include, among other things, an unrecognized root
> certificate, an expired or revoked certificate, etc., but perhaps it
> could be made clearer.  Or do other relevant specs already define this
> term?

You're correct about the meaning of the term.  Clarifying what we mean
is probably a good idea.  There's a question of how specifically we
want to tie this to TLS.  Jeff's probably a better person to address
your question than I am.

> If a secure transport error occurs when connecting to a Known STS
> Server, there needs to be *some* way for *some* users to ignore it --
> it might be necessary for the site's administrator to debug the
> problem, at least.  I don't think it's realistic to expect absolutely
> *no* way to disable STS.  But perhaps it would be sufficient to just
> force users to use wget or something in such a case; that will always
> be an option.

Folks debugging their own site have lots of tools to figure out what's
going on.  I'd probably recommend a tool like Fiddler2, which gives
much more technical detail than a browser UI.  If an end user wants to
get around the error, they can clear the STS state in much the same
way they can manage cookies.  The main consideration is not to have a
UI flow from the error page to the "ignore this error" action.
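To make the recovery path concrete, here's an illustrative sketch (again, invented names, not a spec interface) of clearing stored STS state for one host, analogous to deleting a cookie:

```python
# Hypothetical STS store, keyed by host.  Clearing an entry lets the
# user reach the site over a connection that would otherwise hard-fail.
sts_store = {"broken.example": {"max_age": 31536000}}

def clear_sts_state(host):
    """Forget the STS policy for `host`, as a user might via browser UI."""
    return sts_store.pop(host, None) is not None
```

Crucially, this lives in a settings/management surface, not on the error page itself.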

> A general concern with the prohibition on users overriding errors:
> this kind of feature (which is designed to break sites in some cases)
> can suffer from a "race to the bottom" situation where sites deploy it
> before it's ready, don't test adequately, and then break when browsers
> implement it.  Then the browsers are forced not to implement it so
> they don't break the sites.  I don't know what the best way to handle
> this possibility is, but maybe it's something to keep in mind.

This is a fair concern, but you could say the same thing about other
opt-in security measures.  We haven't seen a race to the bottom with
Secure or HttpOnly cookies, nor have we seen one with X-Frame-Options
(although X-Frame-Options is relatively new).  You're right, though,
that we need buy-in from a sufficient number of browser vendors to
avoid this danger.

> With respect to self-signed certs: why don't you allow the header to
> optionally specify the signature of a certificate that must be present
> somewhere in the certification chain?

There are lots of ways of extending the header to address more use
cases.  For version 1, I think we should focus on addressing the core
use case of a high-security site trying to protect its Secure cookies
from active network attacks.  That being said, I think we should
define the grammar of the header in such a way (i.e., tolerant of
additional tokens) that we can add more features in the future.
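To illustrate what "tolerant of additional tokens" buys us, here's a sketch of a parser (this is not the spec's grammar, just a simplified directive-list model) that keeps the directives it understands and silently skips the rest, so old UAs keep working when new features appear:

```python
# Illustrative sketch: parse a Strict-Transport-Security header value
# as semicolon-separated directives.  Unknown directives are tolerated
# and ignored, leaving room for future extensions.

def parse_sts(value):
    policy = {}
    for directive in value.split(";"):
        directive = directive.strip()
        if not directive:
            continue
        name, _, arg = directive.partition("=")
        name = name.strip().lower()
        if name == "max-age":
            policy["max_age"] = int(arg.strip().strip('"'))
        elif name == "includesubdomains":
            policy["include_subdomains"] = True
        # Anything else is an unrecognized token: skip it.
    return policy
```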

> Regarding the Issue in section 10: it seems to me that the long-term
> solution to this would be something like DNSSEC, right?

I agree that once DNSSEC is deployed, it will be a good way to
deliver an STS policy.  There's actually a proposal floating around to
do exactly that, but I can't put my finger on it at the moment.  In
any case, I don't think we want to wait for DNSSEC before deploying
this feature.

> Regarding "STS could be used to mount certain forms of DoS attacks,
> where attackers set fake STS headers on legitimate sites available
> only insecurely (e.g. social network service sites, wikis, etc.)": how
> is this possible if the STS header is only respected from legitimate
> secure connections?  The attacker would have to forge a legitimate
> certificate for the domain in question for this to work, no?

Imagine a deployment scenario like https://www.stanford.edu/ where
students can host PHP files in their user directories.  The header
arrives over a perfectly legitimate secure connection, but it's set
by an untrusted user's script rather than the site's operators, and
it pins the entire host.
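A hypothetical illustration of that shared-hosting hazard (sketched in Python rather than PHP; the max-age value is just an example):

```python
# Any user script served from the shared HTTPS host can emit response
# headers for the whole origin.  A malicious script could send:

def malicious_cgi_response():
    # Pins the host as an STS server for a year, even though the site
    # operator never opted in.  Every later http:// request, or any
    # certificate warning, now hard-fails for affected users.
    headers = [
        ("Content-Type", "text/html"),
        ("Strict-Transport-Security", "max-age=31536000"),
    ]
    body = "<html>hi</html>"
    return headers, body
```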

> Overall, though, if this gets implemented and deployed by major
> commerce sites, it will make me feel a lot safer using HTTPS over an
> untrusted connection.

Thanks!

Adam
Received on Monday, 21 September 2009 15:52:56 GMT
