Re: Feedback on the Strict-Transport-Security specification (part 1)

------- Forwarded Message

Date:    Thu, 03 Dec 2009 11:56:57 -0800
From:    =JeffH <Jeff.Hodges@PayPal.com>
To:      Eric Lawrence <ericlaw@exchange.microsoft.com>
cc:      W3C WebApps WG <public-webapps@w3.org>
Subject: Re: Feedback on the Strict-Transport-Security specification (part 1)

[Apologies for latency, I was pretty much buried/OOTO during Nov.]

Many thanks to EricLaw for his detailed review, and to Adam for the detailed
reply.

Below is my build on Adam's responses (part 1). In a separate msg (part 2),
I'll respond to the (editorial) items that Adam didn't address. Also, I'll
start a separate thread wrt "mixed content" (aka "mixed security context").

=JeffH
- ------

Adam replied:
  > On Tue, Oct 27, 2009 at 5:01 PM, Eric Lawrence
  > <ericlaw@exchange.microsoft.com> wrote:
  >
  > [mixed content snipped]
  >
  >> [Section 2.4.2: Detailed Core Requirements]: 4.UAs need to re-write all
  >> insecure UA "http" URI loads to use the "https" secure scheme for those web
  >> sites for which secure policy is enabled.  This requirement is
  >> insufficiently specific and does not really explain what "rewrite" means?
  >> Does this mean that the HTML parser will detect any insecure-but-should-be
  >> URIs and rewrite them within the markup, such that JavaScript could observe
  >> the change in the HREF attribute?
  >
  > This is how our original prototype worked, but I don't think that's
  > how the real implementations should work.
  >
  >> Or does it simply mean that upon
  >> de-reference the URI is automatically "upgraded" to HTTPS with no notice to
  >> the caller?
  >
  > What I'd recommend here is to treat the HTTP-to-HTTPS "rewrite" as a
  > simulated 307 redirect, like the one the site is supposed to provide
  > if we actually retrieved the HTTP URL.

Actually, we specify a 301 redirect in the spec (section 6.2).

The above discussion is, of course, about (and depends upon) browser
implementation details.
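
To make the intent concrete, here's a rough sketch (mine, not spec text) of the
rewrite-as-redirect idea on the UA side; the names (sts_cache, maybe_upgrade)
are hypothetical:

  # Sketch only: before dispatching an http:// request, consult the STS cache
  # and, for a Known STS Server, behave as though a redirect to the https://
  # URI had already been received; no insecure request goes on the wire.
  from urllib.parse import urlsplit, urlunsplit

  def maybe_upgrade(url, sts_cache):
      parts = urlsplit(url)
      if parts.scheme == "http" and parts.hostname in sts_cache:
          # port handling (80 -> 443) is elided in this sketch
          return urlunsplit(("https",) + parts[1:])
      return url

  # e.g. maybe_upgrade("http://example.com/x", {"example.com"})
  #   -> "https://example.com/x"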


  >> [Section 2.4.2: Detailed Core Requirements]: Requirements #5 and #6 are
  >> problematic because browsers (generally speaking) often don't have rock
  >> solid knowledge of where the proper "private domain" / "public suffix"
  >> transition occurs.
  >
  > I think there might be some confusion about what "higher-level" means
  > in this context.  The intent is that:
  >
  > 1) both example.com and foo.example.com could set policy for
  > bar.foo.example.com.
  > 2) Neither bar.foo.example.com nor foo.example.com could set policy
  > for example.com.
  > 3) bar.foo.example.com cannot set policy for foo.example.com.
  > 4) foo.example.com cannot set policy for qux.example.com.
  >
  > etc.
  >
  > I don't think we need a notion of a public suffix to enforce these rules.

agreed.
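
For illustration, my reading of those rules boils down to a label-wise suffix
match, with no public suffix knowledge needed. A quick sketch (not normative):

  # A policy from "issuer" may cover "target" only if issuer equals target or
  # is a superdomain of it (right-hand, label-aligned match).
  def is_superdomain_or_equal(issuer, target):
      i = issuer.lower().rstrip(".").split(".")
      t = target.lower().rstrip(".").split(".")
      return len(i) <= len(t) and t[-len(i):] == i

  assert is_superdomain_or_equal("example.com", "bar.foo.example.com")          # 1
  assert not is_superdomain_or_equal("foo.example.com", "example.com")          # 2
  assert not is_superdomain_or_equal("bar.foo.example.com", "foo.example.com")  # 3
  assert not is_superdomain_or_equal("foo.example.com", "qux.example.com")      # 4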


  >> [Section 5.1: Syntax] Are the tokens intended to be interpreted
  >> case-sensitively?
  >
  > Yes.  I think this is implied by the grammar style Jeff is using, but it
  > might be worth noting for us non-ABNF experts.

Yes, quoted strings in the ABNF are case-insensitive by default. I can add some
notes wrt ABNF details.

I'm also thinking we ought to ref draft-ietf-httpbis-p1-messaging-08 & rfc5234
rather than rfc2616 wrt ABNF as the former is getting close to last call.
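
To illustrate that RFC 5234 point (quoted literal strings match
case-insensitively), here's a toy parse of the header field value. This is my
own sketch, not text from the draft:

  # Directive names are lower-cased on parse, so "Max-Age", "max-age", and
  # "MAX-AGE" are treated identically, per the ABNF default.
  def parse_sts(value):
      directives = {}
      for part in value.split(";"):
          name, _, arg = part.strip().partition("=")
          if name:
              directives[name.lower()] = arg.strip()
      return directives

  # parse_sts("Max-Age=500; includeSubDomains")
  #   -> {'max-age': '500', 'includesubdomains': ''}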


  >> [Section 5.1: Syntax] What should be done if the server has multiple
  >> Strict-Transport-Security response header fields of different values?
  >
  > My opinion is we should honor the most recently received header, both
  > within a request and between requests.

agreed. I.e., in a given response, the first occurrence wins; across received
responses for a given STS server, the most recently received header wins.
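
In UA terms the precedence might look something like this (sketch; the names
are hypothetical):

  # Within one response only the first Strict-Transport-Security field value
  # is honored; across responses, the newest accepted value simply overwrites
  # whatever is cached for that host.
  def process_sts_headers(host, header_values, sts_cache):
      if header_values:
          sts_cache[host] = header_values[0]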


  >> [Section 6.1: HTTP-over-Secure-Transport Request Type] Why must the server
  >> include this header on every response?  This seems likely to prove
  >> prohibitively difficult across, say, Akamai load balancers for images, etc.
  >> What happens if the server fails to include the response header on a given
  >> response?
  >
  > I think that's a server conformance requirement.  The UA conformance
  > requirements are set up so that this doesn't matter too much.  As long
  > as you get your entry in the STS cache, you'll be fine.

so this is a good point, it seems. Given the UA behavior, the server /can/ be
more relaxed. I'll think about how to describe this in that section. Perhaps
changing the MUST to a SHOULD, and explaining the ramifications of not
returning STS on every response, will do it.
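
For instance (purely illustrative, and implementation-specific), a site could
meet a SHOULD-level requirement by stamping the header at a single front-end
tier rather than in every backend or CDN response. A minimal WSGI-style sketch:

  # Adds the STS header to every response served over https, regardless of
  # which backend produced the body; the max_age value here is arbitrary.
  def add_sts_header(app, max_age=10886400):
      def middleware(environ, start_response):
          def sr(status, headers, exc_info=None):
              if environ.get("wsgi.url_scheme") == "https":
                  headers.append(("Strict-Transport-Security",
                                  "max-age=%d" % max_age))
              return start_response(status, headers, exc_info)
          return app(environ, sr)
      return middleware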


  >> [Section 6.2] A STS Server must not include the Strict-Transport-Security
  >> HTTP Response Header in HTTP responses conveyed over a non-secure
  >> transport.  Why not?  It seems harmless to include if the UA doesn't
  >> respect it.
  >
  > Again, this is a server conformance requirement that doesn't affect
  > UAs.  It doesn't make sense to send the header here.  We might as well
  > prohibit servers from sending it.

well, there are security considerations here. If the STS header is also
conveyed over insecure transport, then it is possible for an attacker to turn
off STS policy for a victim site.

There's also the desire to avoid DoS attacks where an attacker sets STS for a
site that's available only insecurely (HTTP/TCP), either entirely or for
critical pieces.

Need to add these to sec cons.
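
The flip side on the UA is to only note (or refresh) STS state when the header
arrives over a secure, error-free connection. A sketch of how I picture it:

  # An attacker on plain HTTP can then neither set nor clear policy, since the
  # header is simply ignored on insecure transport. Empty values are a no-op
  # (see the next item below).
  def note_sts(host, header_value, secure_transport, cert_ok, sts_cache):
      if not (secure_transport and cert_ok):
          return
      if header_value is None or not header_value.strip():
          return
      sts_cache[host] = header_value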


  >> [Section 7.1] What if the STS response header is present but contains no
  >> tokens?  7.1 suggests that the header alone indicates an STS server.
  >
  > That sounds like a bug.  An empty header should be a no-op.

agreed.


  >> [Section 7.1.1; Design Decision #4] I know there are reasons to avoid using
  >> secure protocols to IP-literal addressed servers, but in Intranet
  >> environments this may be expected and desirable. Why forbid it here?
  >
  > I don't think there's any way to provide security in this case.  My
  > understanding is that anyone can get these certificates.  Is there
  > some benefit to supporting these cases?  Maybe CAs might change their
  > policies in the future?

We actually discussed this (supporting IP-literal or IPv4address addressed STS
servers) a fair bit amongst ourselves. Glad you brought it up.

The concerns against supporting IP-literal or IPv4address addressed STS servers
(as I understand them) are that..

1. one (supposedly) can't get a "legitimate" cert for an IP-literal or
IPv4address addressed server.

2. the STS policy is intended for use by legitimate Internet-facing websites,
which will all have domain names and proper certs.

3. disallowing it simplifies the "Known STS Server Domain Name Matching" in
section 7.1.2.

FWIW, from my personal experience, I can see how STS might be used in at least
a testing/pilot manner (e.g. in an Intranet environment, as EricLaw notes)
using such addresses.
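
As a sketch of how the current restriction reads to me, the UA would simply
decline to note STS state for hosts given as IP literals:

  # Only domain names are eligible; IP-literal and IPv4address hosts carry no
  # STS policy in this sketch (illustrative, not spec text).
  import ipaddress

  def host_eligible_for_sts(host):
      candidate = host.strip("[]")   # allow the bracketed IPv6 literal form
      try:
          ipaddress.ip_address(candidate)
          return False
      except ValueError:
          return True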


Note also that in terms of domain name matching, we received these
comments/observations from a colleague..

"You need to think about & document how implementers must handle TLDs and
country domains / funny little domains (co.uk, .ca, etc.) as well as custom, or

internal TLDs (.production or .qa). Some of these could lead to DOS conditions.
  ...

I was thinking about where to draw the line with sub-domains - so as to prevent
catastrophic DOS attacks. I agree they are unlikely, as CAs (SHOULD) hand these
certificates out only very cautiously to people who are actually responsible,
but if you can dis-empower the misuse of TLD certificates that would probably
be best.

BTW. a certain commercial CA allows you to buy a cert for e.g. https://mail/
and doesn't check anything - not really SSL if the CA will give it to anyone
with a credit card (yes, someone I know bought that one, but they won't object
to selling you one too). On internal networks, weird sub-domains like that may
actually be targets for abuse if there aren't rules around the number of dots.
If I can buy a "qa" certificate and set up a site that blocks server4.qa, it
would create a bit of a mess."

The above at least needs to be noted in sec cons.
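
One possible shape for such a rule (my speculation, not something the draft
says today) would be to refuse broad policies from very short names, e.g.
requiring at least two labels before honoring "includeSubDomains", perhaps in
combination with a public-suffix style list:

  # Illustrative mitigation sketch for the TLD / single-label-host concern:
  # a host named just "qa" or "production" could not blanket an entire zone.
  def may_set_subdomain_policy(host):
      labels = [l for l in host.lower().rstrip(".").split(".") if l]
      return len(labels) >= 2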


  >> [Section 7.1.2] While I understand the restrictions imposed here, it is
  >> something of a shortfall that https://www.example.com cannot enforce STS
  >> for requests to http://example.com.  The threat here is obvious: the user
  >> typically visits https://www.paypal.com and gets STS applied, but in a
  >> coffee shop or untrusted network, inadvertently types just "paypal.com" in
  >> the address bar.  Because STS isn't applied/cached for that server,
  >> possible exploit occurs.
  >
  > The thought is that https://www.paypal.com/ can load an image from
  > https://paypal.com/ to enable STS for the root domain.  Letting
  > www.paypal.com opt in for paypal.com is going to lead to a bunch of
  > unhappy people who type "paypal.com" and reach a hard blocking page
  > if there is a CN mismatch.

agreed.

There's been a fair bit of disagreement over even having the STS policy feature
the "includeSubDomains" directive itself.



  >> [Section 10] I was disappointed not to see any mention of the privacy
  >> implications of STS hostname storage, and/or recommendations on how such
  >> storage should interact with browser "private modes" and/or cleanup
  >> features.
  >
  > We should add this discussion.

agreed.
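
One way to frame it when we write that text: if the UA treats cached STS host
names like other site data, then the existing cleanup and private-mode teardown
paths cover them too. A trivial sketch of that coupling (hypothetical names):

  # Clearing site data for a host (or for everything) drops the STS entry
  # along with its cookies, so STS state isn't a lingering tracking vector.
  def clear_site_data(sts_cache, cookie_jar, hosts=None):
      targets = list(sts_cache) if hosts is None else hosts
      for h in targets:
          sts_cache.pop(h, None)
          cookie_jar.pop(h, None)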



  >> Other thoughts: Should STS offer a flag such that all cookies received from
  >> the STS server would be automatically upgraded to "SECURE" cookies?
  >
  > I think this is a good idea for a new token in a future version.  I'm
  > not sure whether Jeff has updated the grammar in the spec yet,

I will in the next spec version, per..

more flexible ABNF for STS?
http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/1185.html
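
Purely to illustrate the idea (the directive name below is invented for this
example and is not in any draft), a UA honoring such a token might do something
like:

  # Hypothetical "secureCookies" directive: if present, cookies set by this
  # STS server are treated as though they carried the Secure attribute.
  def apply_cookie_upgrade(directives, cookies):
      if "securecookies" in directives:   # directive names parsed case-insensitively
          for cookie in cookies:
              cookie["secure"] = True
      return cookies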



  >> One threat not mentioned is cross-component interactions.  This spec
  >> appears to primarily concern browsers, while the real-world environment is
  >> significantly more complex.  For instance, there are a number of file types
  >> which will automatically open in applications other than the browser when
  >> installed; those other applications may perform network requests to an STS
  >> host using a network stack other than that provided by the browser. That
  >> network stack may not support STS, or may not have previously cached STS
  >> entries for target servers. Thus a threat exists that out-of-browser
  >> requests could be induced that circumvent STS.
  >
  > For Internet Explorer, I would recommend coupling the STS cache with
  > the WinInet cookie jar.  That way, Secure cookies in Internet Explorer
  > would be protected by STS even in external applications.


good catch, yet more subtle stuff to note in advice and sec cons.


- ---
end




------- End of Forwarded Message

Received on Tuesday, 8 December 2009 21:37:10 UTC