- From: =JeffH <Jeff.Hodges@KingsMountain.com>
- Date: Tue, 08 Dec 2009 13:26:07 -0800
- To: W3C Web Security Interest Group <public-web-security@w3.org>
------- Forwarded Messages

Date: Mon, 21 Sep 2009 11:31:03 -0400
From: Aryeh Gregor <Simetrical+w3c@gmail.com>
To: "=JeffH" <Jeff.Hodges@kingsmountain.com>
cc: public-webapps@w3.org
Subject: Re: fyi: Strict Transport Security specification

On Sat, Sep 19, 2009 at 7:59 PM, =JeffH <Jeff.Hodges@kingsmountain.com> wrote:
> Hi,
>
> We wish to bring the following draft specification to your attention.
>
>     Strict Transport Security (STS)
>     <http://lists.w3.org/Archives/Public/www-archive/2009Sep/att-0051/draft-hodges-strict-transport-sec-05.plain.html>

(replying to the WHATWG post since I wasn't subscribed here, sorry if this breaks threading)

This sounds great. It will hopefully close a significant hole in HTTPS. I have some comments, but please bear in mind that I'm a web developer with only a basic, user-level working knowledge of how SSL works (I haven't read the RFCs, etc.).

Is it true that UAs MUST NOT note a server as a Known STS Server if the Strict-Transport-Security header was received over an unsecured request, or if there were underlying secure transport errors? I don't see that stated explicitly, but it seems like it should be a requirement, so that MITMs can't trick browsers into noting a site as an STS Server when it's only available unsecured.

"Underlying secure transport error" could use a definition -- I'd imagine it would include, among other things, an unrecognized root certificate, an expired or revoked certificate, etc., but perhaps it could be made clearer. Or do other relevant specs already define this term?

If a secure transport error occurs when connecting to a Known STS Server, there needs to be *some* way for *some* users to ignore it -- it might be necessary for the site's administrator to debug the problem, at least. I don't think it's realistic to expect absolutely *no* way to disable STS. But perhaps it would be sufficient to just force users to use wget or something in such a case; that will always be an option.

A general concern with the prohibition on users overriding errors: this kind of feature (which is designed to break sites in some cases) can suffer from a "race to the bottom", where sites deploy it before it's ready, don't test adequately, and then break when browsers implement it. The browsers are then forced not to implement it so they don't break the sites. I don't know what the best way to handle this possibility is, but maybe it's something to keep in mind.

With respect to self-signed certs: why don't you allow the header to optionally specify the signature of a certificate that must be present somewhere in the certification chain? I might be using incorrect terminology here -- I mean that I could say "only accept the public key with SHA1 hash xxx, or anything signed by it directly or indirectly". That would allow self-signed sites to work somewhat more securely than handing out the key manually to users -- they'd follow an SSH-style model of "warn on the first visit, die horribly if the certificate changes unexpectedly". It would also mean that other sites could greatly narrow their attack surface, since an attacker would have to forge a particular certificate instead of being able to compromise any trusted CA. At its tightest, this would allow the connection to fail unrecoverably if the private key changes at all, much like how the OpenSSH client works.
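[To make the chain-pinning idea above concrete, a minimal sketch, assuming DER-encoded certificates and a SHA-1 pin. This is Aryeh's proposed extension, not anything in the STS draft, and the function name is hypothetical.]

```python
import hashlib

def chain_matches_pin(cert_chain_der, pinned_sha1_hex):
    """Accept the connection only if some certificate in the presented
    chain (leaf first, root last) hashes to the pinned value.

    cert_chain_der: list of DER-encoded certificates (bytes).
    pinned_sha1_hex: hex SHA-1 digest advertised in the (hypothetical)
    header extension.  A real design might pin the SubjectPublicKeyInfo
    rather than the whole certificate.
    """
    for der in cert_chain_der:
        if hashlib.sha1(der).hexdigest() == pinned_sha1_hex.lower():
            return True
    return False
```

[If the pinned certificate appears anywhere above the leaf, the leaf is signed by it directly or indirectly, so this single check covers the "or anything signed by it" case, assuming the chain itself has already been validated.]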
Regarding the Issue in section 10: it seems to me that the long-term solution to this would be something like DNSSEC, right? If an STS requirement could be put in a secure DNS record, possibly along with the public key itself, then you would have no bootstrap problem at all. I don't see any other way to avoid the problem even in principle, as long as a MITM could forge DNS.

Regarding "STS could be used to mount certain forms of DoS attacks, where attackers set fake STS headers on legitimate sites available only insecurely (e.g. social network service sites, wikis, etc.)": how is this possible if the STS header is only respected from legitimate secure connections? The attacker would have to forge a legitimate certificate for the domain in question for this to work, no?

In Design Decision Notes, "errornous" should be "erroneous".

Overall, though, if this gets implemented and deployed by major commerce sites, it will make me feel a lot safer using HTTPS over an untrusted connection.

------- Message 2

Date: Mon, 21 Sep 2009 08:51:48 -0700
From: Adam Barth <w3c@adambarth.com>
To: Aryeh Gregor <Simetrical+w3c@gmail.com>
cc: "=JeffH" <Jeff.Hodges@kingsmountain.com>, public-webapps@w3.org
Subject: Re: fyi: Strict Transport Security specification

On Mon, Sep 21, 2009 at 8:31 AM, Aryeh Gregor <Simetrical+w3c@gmail.com> wrote:
> Is it true that UAs MUST NOT note a server as a Known STS Server if
> the Strict-Transport-Security header was received over an unsecured
> request, or if there were underlying secure transport errors?

That's correct. We should probably add this requirement to the spec explicitly if it's not already there.

> "Underlying secure transport error" could use a definition -- I'd
> imagine it would include, among other things, an unrecognized root
> certificate, an expired or revoked certificate, etc., but perhaps it
> could be made clearer. Or do other relevant specs already define this
> term?

You're correct about the meaning of the term. Clarifying what we mean is probably a good idea. There's a question of how specifically we want to tie this to TLS. Jeff's probably a better person to address your question than I am.

> If a secure transport error occurs when connecting to a Known STS
> Server, there needs to be *some* way for *some* users to ignore it --
> it might be necessary for the site's administrator to debug the
> problem, at least. I don't think it's realistic to expect absolutely
> *no* way to disable STS. But perhaps it would be sufficient to just
> force users to use wget or something in such a case; that will always
> be an option.

Folks debugging their own site have lots of tools to figure out what's going on. I'd probably recommend a tool like Fiddler2, which gives much more technical detail than a browser UI. If an end user wants to get around the error, they can clear the STS state in much the same way they can manage cookies. The main consideration is not to have a UI flow from the error page to the "ignore this error" action.
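[A minimal sketch of the UA-side rule Adam confirms above -- the header is honoured only when it arrives over an error-free secure connection -- plus the cookie-like "clear" escape hatch he mentions. The store class and its method names are illustrative, not taken from the draft.]

```python
import time

class STSStore:
    """Hypothetical UA-side store of Known STS Servers."""

    def __init__(self):
        self._hosts = {}  # host -> expiry time (epoch seconds)

    def note_response(self, host, was_secure, had_transport_errors, max_age):
        # MUST NOT note the server if the response was received over an
        # unsecured request or the underlying secure transport reported
        # any error.
        if not was_secure or had_transport_errors:
            return
        self._hosts[host] = time.time() + max_age

    def is_known_sts_server(self, host):
        expiry = self._hosts.get(host)
        return expiry is not None and expiry > time.time()

    def clear(self, host):
        # Analogous to clearing a cookie: the debugging escape hatch,
        # deliberately not reachable from the certificate-error page.
        self._hosts.pop(host, None)
```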
> A general concern with the prohibition on users overriding errors:
> this kind of feature (which is designed to break sites in some cases)
> can suffer from a "race to the bottom", where sites deploy it
> before it's ready, don't test adequately, and then break when browsers
> implement it. The browsers are then forced not to implement it so
> they don't break the sites. I don't know what the best way to handle
> this possibility is, but maybe it's something to keep in mind.

This is a fair concern, but you could say the same thing about other opt-in security measures. We haven't seen a race to the bottom with Secure or HTTPOnly cookies, nor have we seen one with X-Frame-Options (although X-Frame-Options is relatively new). You're right, though, that we need buy-in from a sufficient number of browser vendors to avoid this danger.

> With respect to self-signed certs: why don't you allow the header to
> optionally specify the signature of a certificate that must be present
> somewhere in the certification chain?

There are lots of ways of extending the header to address more use cases. For version 1, I think we should focus on addressing the core use case of a high-security site trying to protect its Secure cookies from active network attacks. That being said, I think we should define the grammar of the header in such a way (i.e., tolerant of additional tokens) that we can add more features in the future.

> Regarding the Issue in section 10: it seems to me that the long-term
> solution to this would be something like DNSSEC, right?

I agree that once DNSSEC is deployed, it will be a good way to deliver an STS policy. There's actually a proposal floating around to do exactly that, but I can't put my fingers on it at the moment. In any case, I don't think we want to wait for DNSSEC before deploying this feature.

> Regarding "STS could be used to mount certain forms of DoS attacks,
> where attackers set fake STS headers on legitimate sites available
> only insecurely (e.g. social network service sites, wikis, etc.)": how
> is this possible if the STS header is only respected from legitimate
> secure connections? The attacker would have to forge a legitimate
> certificate for the domain in question for this to work, no?

Imagine a deployment scenario like https://www.stanford.edu/ where students can host PHP files in their user directories.

> Overall, though, if this gets implemented and deployed by major
> commerce sites, it will make me feel a lot safer using HTTPS over an
> untrusted connection.

Thanks!

Adam
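[As a rough illustration of the "tolerant of additional tokens" grammar Adam mentions above: a parser sketch that reads the draft's max-age and includeSubDomains directives (as I understand them) and silently skips anything it doesn't recognize, so that future revisions can add directives without breaking older UAs.]

```python
def parse_sts_header(value):
    """Parse a Strict-Transport-Security header value, tolerantly."""
    policy = {"max_age": None, "include_subdomains": False}
    for token in value.split(";"):
        token = token.strip()
        if token.lower().startswith("max-age="):
            try:
                policy["max_age"] = int(token.split("=", 1)[1].strip('"'))
            except ValueError:
                pass  # malformed directive: ignore it
        elif token.lower() == "includesubdomains":
            policy["include_subdomains"] = True
        # Any other token is an unrecognised extension directive: skip it
        # rather than rejecting the whole header.
    return policy

# e.g. parse_sts_header("max-age=15768000; includeSubDomains")
#   -> {"max_age": 15768000, "include_subdomains": True}
```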
------- Message 3

Date: Mon, 21 Sep 2009 12:14:01 -0400
From: Aryeh Gregor <Simetrical+w3c@gmail.com>
To: Adam Barth <w3c@adambarth.com>
cc: "=JeffH" <Jeff.Hodges@kingsmountain.com>, public-webapps@w3.org
Subject: Re: fyi: Strict Transport Security specification

On Mon, Sep 21, 2009 at 11:51 AM, Adam Barth <w3c@adambarth.com> wrote:
> Folks debugging their own site have lots of tools to figure out what's
> going on. I'd probably recommend a tool like Fiddler2, which gives
> much more technical detail than a browser UI. If an end user wants to
> get around the error, they can clear the STS state in much the same
> way they can manage cookies. The main consideration is not to have a
> UI flow from the error page to the "ignore this error" action.

Agreed.

> There are lots of ways of extending the header to address more use
> cases. For version 1, I think we should focus on addressing the core
> use case of a high-security site trying to protect its Secure cookies
> from active network attacks. That being said, I think we should
> define the grammar of the header in such a way (i.e., tolerant of
> additional tokens) that we can add more features in the future.

Yes, that sounds like a good plan.

> I agree that once DNSSEC is deployed, it will be a good way to
> deliver an STS policy. There's actually a proposal floating around to
> do exactly that, but I can't put my fingers on it at the moment. In
> any case, I don't think we want to wait for DNSSEC before deploying
> this feature.

No, definitely not.

> Imagine a deployment scenario like https://www.stanford.edu/ where
> students can host PHP files in their user directories.

Ah, so this is exactly what you were discussing with Jonas. That makes sense, yes. Such a site would be very insecure anyway, of course -- any student could read any other student's app's cookies. But I understand the concern.

------- Message 4

Date: Mon, 21 Sep 2009 16:00:14 -0700
From: =JeffH <Jeff.Hodges@KingsMountain.com>
To: public-webapps@w3.org
Subject: Re: fyi: Strict Transport Security (STS) specification

Just to fill in a bit amongst Adam's coverage of Aryeh's good questions...

Adam replied:
> Aryeh asked:
>> "Underlying secure transport error" could use a definition -- I'd
>> imagine it would include, among other things, an unrecognized root
>> certificate, an expired or revoked certificate, etc., but perhaps it
>> could be made clearer. Or do other relevant specs already define this
>> term?
>
> You're correct about the meaning of the term. Clarifying what we mean
> is probably a good idea.

Agreed.

> There's a question of how specifically we want to tie this to TLS.

Well, one way to address it is to have normative subsections for specific underlying secure transports, e.g. TLS and SSL3 (at least), that specifically identify the (classes of) errors that the secure transport layer can relay "up the stack" as warnings (and that aren't out-of-hand fatal for the transport -- many are), and note that they are to be treated as fatal by the HTTP layer.

For TLS, see RFC 4346, top of page 31 -- errors not noted as fatal on prior pages are possibly warnings at the discretion of clients or servers, and we want to note that (methinks) and say "if you get any such warnings from the TLS implementation, treat them as fatal and follow TLS procedures to shut down the TLS connection".

Would that be specific enough?

Adam replied:
> Aryeh asked:
>> Regarding the Issue in section 10: it seems to me that the long-term
>> solution to this would be something like DNSSEC, right?
>
> I agree that once DNSSEC is deployed, it will be a good way to
> deliver an STS policy. There's actually a proposal floating around to
> do exactly that, but I can't put my fingers on it at the moment.

Shucks, I'd intended to stick DNSSEC into that Issue box along with the other stuff in there. Good catch. Yes, STS policy /could/ be delivered in other fashions; some form of DNS-based metadata coupled with DNSSEC is one. DNSSEC coupled with other (emergent) metadata approaches (mentioned in that issue) are others.

> Aryeh asked:
>> Regarding "STS could be used to mount certain forms of DoS attacks,
>> where attackers set fake STS headers on legitimate sites available
>> only insecurely (e.g. social network service sites, wikis, etc.)": how

Well, the other thing that's meant there too, and which isn't really spelled out, is that an admin of hosting.example.com could malevolently or inadvertently DoS {sites}.hosting.example.com, given the manner in which this is presently designed. But we feel that, since a superdomain admin can currently do all sorts of nasty things to a subdomain either malevolently or inadvertently (e.g. take them out of the zone file), this really doesn't alter the status quo.

Thanks for your feedback.

HTH,

=JeffH
PayPal InfoSec Team
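[A sketch of the TLS-alert handling JeffH describes above: alerts that the TLS implementation would otherwise surface as mere warnings are escalated to fatal when the host is a Known STS Server. The alert names listed are assumed examples, not a normative list from RFC 4346, and the handler is hypothetical.]

```python
# Assumed examples of certificate-related alerts that RFC 4346 allows to be
# sent at warning level; a real list would be taken from the spec text.
ESCALATE_FOR_STS = {
    "unsupported_certificate",
    "certificate_expired",
    "certificate_unknown",
    "unknown_ca",
}

def handle_tls_alert(alert_name, alert_level, host_is_known_sts_server):
    """Return 'fatal' or 'warning' for how the HTTP layer should react."""
    if alert_level == "fatal":
        return "fatal"
    if host_is_known_sts_server and alert_name in ESCALATE_FOR_STS:
        # Treat the warning as fatal: shut the connection down per TLS
        # procedures, with no UI flow for the user to click through.
        return "fatal"
    return "warning"
```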
------- End of Forwarded Messages

Received on Tuesday, 8 December 2009 21:33:20 UTC