
Jonas' comment thread wrt Strict Transport Security (STS)

From: =JeffH <Jeff.Hodges@KingsMountain.com>
Date: Tue, 08 Dec 2009 13:22:28 -0800
Message-ID: <4B1EC394.8030506@KingsMountain.com>
To: W3C Web Security Interest Group <public-web-security@w3.org>
------- Forwarded Messages

Date:    Fri, 18 Sep 2009 22:30:15 -0700
From:    Jonas Sicking <jonas@sicking.cc>
To:      "=JeffH" <Jeff.Hodges@kingsmountain.com>
cc:      public-webapps@w3.org, Jeff Hodges <jeff.hodges@paypal.com>,
	 Adam Barth <abarth@eecs.berkeley.edu>,
	 Collin Jackson <collin.jackson@sv.cmu.edu>
Subject: Re: fyi: Strict Transport Security specification

On Fri, Sep 18, 2009 at 6:00 PM, =JeffH <Jeff.Hodges@kingsmountain.com> wrote:
 > We are interested in bringing this work to W3C WebApps Working Group as a
 > Recommendation-track specification. We are willing to license it under W3C
 > terms, we understand that it may change due to implementer or public
 > feedback,
 > and that should it be of interest to other implementors, we're willing to
 > contribute to editorial and test suite efforts.
 >
 > We're looking forward to the WebApps WG's feedback and comments.

This definitely looks very interesting. I am admittedly a bit worried
about requests to one URL on a server affecting any subsequent
requests, not just to that server but also to any subdomain.

I wonder for example if the client when receiving a
Strict-Transport-Security header should make a request to the root url
of the same origin to verify that the server indeed wants to opt in to
STS.

However, I definitely think this is a draft worth publishing in order
to reach a broader group of people for comments.

But, while I don't personally care which standards organization is in
charge of publishing this, I suspect that you'll get the feedback that
IETF is the correct place to publish this spec.

/ Jonas

------- Message 2

Date:    Fri, 18 Sep 2009 22:54:43 -0700
From:    Adam Barth <w3c@adambarth.com>
To:      Jonas Sicking <jonas@sicking.cc>
cc:      "=JeffH" <Jeff.Hodges@kingsmountain.com>, public-webapps@w3.org,
	 Jeff Hodges <jeff.hodges@paypal.com>,
	 Collin Jackson <collin.jackson@sv.cmu.edu>
Subject: Re: fyi: Strict Transport Security specification

On Fri, Sep 18, 2009 at 10:30 PM, Jonas Sicking <jonas@sicking.cc> wrote:
 > I wonder for example if the client when receiving a
 > Strict-Transport-Security header should make a request to the root url
 > of the same origin to verify that the server indeed wants to opt in to
 > STS.

That's a good idea.  Do you think we should do that for all instances
of Strict-Transport-Security, or just for headers with the
includeSubDomains directive?

Adam

------- Message 3

Date:    Sat, 19 Sep 2009 01:46:16 -0700
From:    Jonas Sicking <jonas@sicking.cc>
To:      Adam Barth <w3c@adambarth.com>
cc:      "=JeffH" <Jeff.Hodges@kingsmountain.com>, public-webapps@w3.org,
	 Jeff Hodges <jeff.hodges@paypal.com>,
	 Collin Jackson <collin.jackson@sv.cmu.edu>
Subject: Re: fyi: Strict Transport Security specification

On Fri, Sep 18, 2009 at 10:54 PM, Adam Barth <w3c@adambarth.com> wrote:
 > On Fri, Sep 18, 2009 at 10:30 PM, Jonas Sicking <jonas@sicking.cc> wrote:
 >> I wonder for example if the client when receiving a
 >> Strict-Transport-Security header should make a request to the root url
 >> of the same origin to verify that the server indeed wants to opt in to
 >> STS.
 >
 > That's a good idea.  Do you think we should do that for all instances
 > of Strict-Transport-Security, or just for headers with the
 > includeSubDomains directive?

The most conservative thing to do would be something like this:

If a request is made to a http-url where no prior STS knowledge exists:
1. Make the request as normal
(am I understanding it correctly that http requests can't opt in to STS?)

If a request is made to a http-url which has been marked as "confirmed":
1. Change the url to use a https url instead
2. Make the request as normal

If a request is made to a http*s*-url where no prior STS knowledge exists:
1. Make the request as normal
2. If the response contains an STS header, mark the origin with the
"unconfirmed" flag.
3. If the response contains an STS header with the "includeSubDomains"
statement, mark the origin with the additional flag
"includeSubDomains"

If a request is made to a http-url where the origin of the url is
marked "unconfirmed", or where a parent origin is marked with
"includeSubDomains"
1. Make the request to the "/" resource on that origin
2. If the response contains an STS header, mark the origin as
"confirmed" and remove the "unconfirmed" flag.
3. If the response does not contain an STS header, remove any flags
for the origin. (May even want to flag it with an "opted out of STS"
or some such).
4. Follow the appropriate one of the first two rules.


This is somewhat of a simplification since you also need to take
max-age into account and such. But I hope you get the general idea.
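In rough Python, the rules above might look like this (purely a
sketch; names such as sts_state, include_subdomains, and the fetch
callback are made up for illustration, and max-age handling is
omitted as noted):

```python
# Hypothetical sketch of the confirmation scheme described above.
# "fetch" is any callable taking a URL and returning a response
# object with a .headers dict; it stands in for the real network layer.
from urllib.parse import urlsplit, urlunsplit

sts_state = {}            # host -> "unconfirmed" or "confirmed"
include_subdomains = set()  # hosts that sent includeSubDomains

def parent_marked(host):
    # True if any parent domain carries the includeSubDomains flag
    parts = host.split(".")
    return any(".".join(parts[i:]) in include_subdomains
               for i in range(1, len(parts)))

def upgrade(url):
    # Rewrite an http URL to https, leaving the rest intact
    parts = urlsplit(url)
    return urlunsplit(("https",) + tuple(parts[1:]))

def request(url, fetch):
    parts = urlsplit(url)
    host = parts.hostname
    if parts.scheme == "http":
        if sts_state.get(host) == "confirmed":
            return fetch(upgrade(url))          # rewrite to https
        if sts_state.get(host) == "unconfirmed" or parent_marked(host):
            root = "https://" + host + "/"      # confirm via "/"
            resp = fetch(root)
            if "Strict-Transport-Security" in resp.headers:
                sts_state[host] = "confirmed"
                return fetch(upgrade(url))
            sts_state.pop(host, None)           # treat as opted out
        return fetch(url)                       # no prior STS knowledge
    # https request: note (but don't yet confirm) any STS header
    resp = fetch(url)
    sts = resp.headers.get("Strict-Transport-Security")
    if sts is not None:
        sts_state.setdefault(host, "unconfirmed")
        if "includeSubDomains" in sts:
            include_subdomains.add(host)
    return resp
```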

/ Jonas

------- Message 4

Date:    Sat, 19 Sep 2009 07:49:20 -0700
From:    Adam Barth <w3c@adambarth.com>
To:      Jonas Sicking <jonas@sicking.cc>
cc:      "=JeffH" <Jeff.Hodges@kingsmountain.com>, public-webapps@w3.org,
	 Jeff Hodges <jeff.hodges@paypal.com>,
	 Collin Jackson <collin.jackson@sv.cmu.edu>
Subject: Re: fyi: Strict Transport Security specification

On Sat, Sep 19, 2009 at 1:46 AM, Jonas Sicking <jonas@sicking.cc> wrote:
 > (am I understanding it correctly that http requests can't opt in to STS?)

Well, they opt in by redirecting to HTTPS and then sending the header
over HTTPS.  :)
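
For instance (the host and max-age value are purely illustrative),
the opt-in dance looks like:

```
GET / HTTP/1.1
Host: example.com

HTTP/1.1 301 Moved Permanently
Location: https://example.com/

(then, over TLS)

GET / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000; includeSubDomains
```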

One virtue of your algorithm is that there are no extra requests in
the common cases.  For example, if the site does everything over
HTTPS, then we never have to confirm the STS directive.  Also, if the
user enters the site by typing "example.com" in the location bar, then
we also won't make any extra requests because the first HTTPS URL
we'll see is "/" anyway.

The only potentially tricky situation is that, when we look for
confirmation, we need to be prepared to deal with an attacker who
blocks that request (because we're now in an attack scenario), but I
think we can deal with that by stalling the HTTP request while we wait
for confirmation.

Adam

------- Message 5

Date:    Sat, 19 Sep 2009 09:24:59 -0600
From:    "Steingruebl, Andy" <asteingruebl@paypal.com>
To:      "Jonas Sicking" <jonas@sicking.cc>,
	 "=JeffH" <Jeff.Hodges@kingsmountain.com>
cc:      <public-webapps@w3.org>,
	 "Hodges, Jeff" <jeff.hodges@paypal.com>,
	 "Adam Barth" <abarth@eecs.berkeley.edu>,
	 "Collin Jackson" <collin.jackson@sv.cmu.edu>
Subject: RE: fyi: Strict Transport Security specification

 > -----Original Message-----
 > From: public-webapps-request@w3.org [mailto:public-webapps-request@w3.org]
 > On Behalf Of Jonas Sicking
 > Sent: Friday, September 18, 2009 10:30 PM
 > To:      =JeffH
 > Cc: public-webapps@w3.org; Hodges, Jeff; Adam Barth; Collin Jackson
 > Subject: Re: fyi: Strict Transport Security specification


 > This definitely looks very interesting. I am admittedly a bit worried
 > about requests to one URL on a server affecting any subsequent
 > requests, not just to that server but also to any subdomain.

 > I wonder for example if the client when receiving a
 > Strict-Transport-Security header should make a request to the root url
 > of the same origin to verify that the server indeed wants to opt in to
 > STS.

I think what you're pointing out is that our notion of Origin, and the
ability, in the usual web programming model, for regular user applications
to have full access to the HTTP layer, is problematic for security in
certain deployment scenarios.

I don't think your solution is really workable, nor does it solve the
problem at hand. Because /, /stuff.html, and /~user/cgi-bin/web.cgi
are all part of the same Origin, they can already muck with the whole
security model anyway. If a website doesn't wish to have this content
muck around with its overall security policies, then it is already in
serious danger by allowing people to control the HTTP layer from this
other content.

This "sub-content" can already:
   - Set the Secure or HttpOnly flag on cookies, in possible contradiction
     of the policy of "/"
   - Read and write all cookies for the domain
If a site wants to prevent this sort of behavior it needs to either:

   1. Filter certain HTTP data from making it to certain content running
      within it. For example, put in a filter that doesn't pass cookies
      on to certain URIs.
   2. Filter outbound data coming from certain URIs to prevent them from
      setting certain data.
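
The second policy could be sketched as an outbound response filter
along these lines (the path prefixes and header list here are made-up
examples, not a recommendation):

```python
# Illustrative sketch of outbound filtering: strip policy-setting
# response headers when the response comes from an "untrusted" part
# of the origin (e.g. per-user CGI areas). Prefixes and header names
# are hypothetical examples.
UNTRUSTED_PREFIXES = ("/~", "/cgi-bin/")
FILTERED_HEADERS = {"strict-transport-security", "set-cookie"}

def filter_outbound(path, headers):
    """Return the (name, value) header list with security-policy
    headers removed for responses generated by untrusted paths."""
    if path.startswith(UNTRUSTED_PREFIXES):
        return [(name, value) for name, value in headers
                if name.lower() not in FILTERED_HEADERS]
    return headers
```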

If a site doesn't want to give control over STS policy to all of its
content, then it can choose to implement the second of these two
policies. If it doesn't, it is still open to all manner of other
attacks.  Collin and Adam already documented most/all of this in their
paper "Beware of Finer-Grained Origins" -
http://w2spconf.com/2008/papers/s2p1.pdf

--
Andy Steingruebl



------- End of Forwarded Messages
Received on Tuesday, 8 December 2009 21:23:02 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Sunday, 19 December 2010 00:16:01 GMT