
Re: Cookies and schemes.

From: Mike West <mkwst@google.com>
Date: Sun, 15 Mar 2020 09:22:36 +0100
Message-ID: <CAKXHy=dtjdChTaCTmDYeK6EZfpjZr4M5fShjgTaK=b-N6z_f5g@mail.gmail.com>
To: Willy Tarreau <w@1wt.eu>
Cc: Martin Thomson <mt@lowentropy.net>, Steven Bingler <bingler@google.com>, HTTP Working Group <ietf-http-wg@w3.org>

Hey folks,

I sketched out changes in a little more detail in an -01 of
https://tools.ietf.org/html/draft-west-cookie-incrementalism-01. That
sketch doesn't include either `__Nonsecure-` or `Sec-Nonsecure-Cookie`, as
I agree with Martin's suggestion that we should avoid them if possible. It
might be better for any such mitigation to live as a UA-specific workaround
rather than part of the core specification.

That said, I think I also agree with Willy's implicit claim that they will
be necessary for some period of time. The general experience we've had with
recent deprecations is that despite our best efforts, developers do not get
the message until we begin deploying a change. It seems quite reasonable to
me to plan for a multi-stage deprecation to minimize the potential for data
loss. That's almost certainly the path we'd try to take in Chromium. Other
user agents might be interested in being more aggressive.

Thanks again for the feedback! Willy, I'll pull out a few specific points
to respond to below:

On Tue, Mar 10, 2020 at 9:51 PM Willy Tarreau <w@1wt.eu> wrote:

> On Tue, Mar 10, 2020 at 08:29:50AM +0100, Mike West wrote:
> > 2.  It does not put those cookies into the `Cookie` header, meaning that
> > a host that doesn't intentionally perform a migration (and, in the best
> > case, validate the data against whatever securely-delivered state it has
> > access to before blindly accepting it) won't be at risk.
> It will be worse. Those having trouble configuring them will modify their
> LBs to put everything into cookie and systematically duplicate them into
> the other ones, considering that "the browser will figure which one it
> needs anyway".

I agree that this is likely, which is why I think it's important that any
stopgap is as short-lived as possible. The only way to ensure that
non-securely-set cookies can't influence the state of a secure site is for
the browser to stop sending them in any form. We should aim for that state
as quickly as practicable.
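
To make the end state concrete, here is a minimal sketch (my own illustration, not Chromium's implementation) of a cookie store keyed on the scheme of the setting origin, so that cookies set over `http` are never attached to `https` requests for the same host:

```python
# Sketch of scheme-bound cookie storage: the scheme of the origin that
# set a cookie becomes part of its storage key, so reads over a
# different scheme simply don't see it.

from dataclasses import dataclass


@dataclass(frozen=True)
class CookieKey:
    name: str
    host: str
    scheme: str  # scheme of the origin that set the cookie


class SchemeBoundCookieJar:
    def __init__(self):
        self._jar = {}

    def set_cookie(self, name, value, host, scheme):
        self._jar[CookieKey(name, host, scheme)] = value

    def cookies_for(self, host, scheme):
        # Only cookies set over the *same* scheme are returned.
        return {
            k.name: v
            for k, v in self._jar.items()
            if k.host == host and k.scheme == scheme
        }


jar = SchemeBoundCookieJar()
jar.set_cookie("session", "abc", "example.com", "https")
# A network attacker forging plaintext responses can only write here:
jar.set_cookie("session", "evil", "example.com", "http")

# An https request sees only the securely-set value:
print(jar.cookies_for("example.com", "https"))  # {'session': 'abc'}
```

The point of the sketch is the storage key, not the data structure: once the scheme is part of the key, there is no code path by which a non-securely-set value can reach a secure request.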

> > If I understand the redirect proposed above, it would give network
> > attackers the ability to send arbitrary cookies to secure servers by
> > forging HTTP responses that set cookies and redirect to HTTPS. I'd like
> > to remain robust against this kind of attack.
> It's already the case by definition as long as the transmission operates
> in clear. Anything can be put in the response, including some links or
> whatever. If you have data showing that cookies are not passed anymore
> across redirects from HTTP to HTTPS, then that's fine and it means that
> we don't need this anymore. But I never received complaints that haproxy
> started to lose affinity on redirects, which makes me think it's still
> valid.

I agree that this kind of redirect works well in the status quo. This
proposal explicitly aims to break it in the future, as it seems bad for an
otherwise-secure site to be forced to trust data created over a non-secure
channel.

You're correct to say that there's little the browser can do to prevent a
non-secure site from sharing data with a secure site via other means (URL
parameters, server-to-server communication, etc.). I don't think that kind
of one-off capability is a reasonable argument for baking that attack into
the state management mechanism that user agents offer as part of the web
platform.

> > 2.  Perhaps we prefix the non-secure cookie names with `__Non-secure-`
> > rather than minting a new header?
> I really like this. It's by far the easiest solution. Usually nobody
> cares about the load balancer's cookie name because the LB will remove
> it before passing the request to the server, so this one can be changed
> at will by the person responsible for the LB. There are always edge
> cases of course, like a management page hosted on the application to
> perform some specific tests or maintenance operations but the 1% doing
> this know exactly what they're doing and will figure a different way
> to do it.
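
One reading of the prefix idea, sketched below (the function name and the exact renaming rule are my own illustration, not text from any draft): the UA renames cookies set over non-secure channels so that servers can still read them during a migration, while their provenance stays visible to users and operators.

```python
# Hypothetical sketch of the transitional "__Non-secure-" prefix
# discussed in this thread: cookies set over http are stored (and
# surfaced) under a prefixed name; cookies set over https keep theirs.

NONSECURE_PREFIX = "__Non-secure-"


def stored_name(name: str, scheme: str) -> str:
    """Name under which the UA would store a cookie set over `scheme`."""
    if scheme == "http" and not name.startswith(NONSECURE_PREFIX):
        return NONSECURE_PREFIX + name
    return name


assert stored_name("sid", "http") == "__Non-secure-sid"
assert stored_name("sid", "https") == "sid"
```

As Willy notes, a load balancer that strips its own cookie before forwarding can adopt the prefixed name without the application ever noticing.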

For clarity: I did not intend for these cookies to stick around. I intended
this as a temporary carveout for applications to migrate their user data
from HTTP to HTTPS, not as something that would persist in perpetuity.
Just as we don't allow `http://example.com/` to share `localStorage`,
databases, caches, etc with `https://example.com/`, I don't think there is
a good long-term solution for writing cookies in a non-secure context that
are trivially legible in a secure context.

> The problem is that by breaking 10% of the web every 3 months we're
> making everyone totally incompetent on infrastructure, rendering the
> whole web totally insecure. I'd rather use motivation than trouble-making.
> Using the cookie name is perfect because it is visible in the browser. So
> when your site requires that all your users see cookies called
> "insecure_something", you do have a great motivation to
> try to improve the situation without being forced to rush an even more
> horribly insecure hack to suit a browser's next imposed deadline that
> threatens to destroy your site.

We do need to come up with reasonable timelines for deployment if we agree
that this is the right direction in which to move. I am sympathetic to the
concern that we're dripping out changes over time, and it might well be
worthwhile to bundle up larger sets of changes so that developers can
respond to them at once.

> > I don't think we should add new attributes in order
> > to support sites that push users back and forth from HTTPS to HTTP.
> I think that once the cost of switching the connection to HTTPS has been
> digested, there's no point in going back to HTTP from HTTPS. However,
> easing the transition from HTTP to HTTPS seems very important to
> encourage migrating earlier in the session, even if reaching that
> state requires several baby steps from the site's operator.

My intuition is that we're already pushing pretty hard on HTTP->HTTPS
migrations, and have built several tools over the past few years to help
both inside (`Upgrade-Insecure-Requests`, CSP reporting for `http:`
resources, "Not Secure" labels, etc) and outside (search console warnings,
various scanning/rating tools, etc) browsers.
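
For reference, the in-browser tools mentioned above are driven by response headers along these lines (the reporting endpoint path is illustrative, not part of any spec):

```http
Content-Security-Policy: upgrade-insecure-requests
Content-Security-Policy-Report-Only: default-src https:; report-uri /csp-reports
```

The first rewrites `http:` subresource fetches to `https:` before they leave the browser; the second reports, without blocking, any resource that would not have loaded over HTTPS.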

At the same time, we really need to remove residual risk from non-secure
channels. The proposal in the document we're discussing seems to me a
reasonable compromise: non-secure channels are segregated from secure
channels, and only have access to temporary client-side state. This will
substantially narrow the window of opportunity for network attackers, without
entirely breaking things like your printer's web interface.
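
By "temporary client-side state" I mean something like the following sketch (the clamping rule and the specific lifetime are assumptions on my part, not draft text): cookies set over non-secure channels get their lifetime bounded, so they can't persist long enough to be useful to a patient network attacker.

```python
# Illustrative sketch only: clamp the lifetime of cookies set over
# non-secure channels. The one-hour ceiling is a made-up value.

MAX_NONSECURE_LIFETIME = 3600  # seconds; illustrative, not from any spec


def effective_max_age(requested_max_age, scheme):
    """Lifetime the UA would actually honor for a Set-Cookie over `scheme`."""
    if scheme != "https":
        if requested_max_age is None:
            return MAX_NONSECURE_LIFETIME  # bounded, session-like
        return min(requested_max_age, MAX_NONSECURE_LIFETIME)
    return requested_max_age


assert effective_max_age(86400, "http") == 3600
assert effective_max_age(86400, "https") == 86400
```

Your printer's web interface keeps working within a session; a cookie planted by a coffee-shop attacker does not follow you around for a year.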

I welcome feedback about the balance that compromise strikes. I'm certain
there are ways to tweak it that will have better results for all sides. I'm
reluctant to do so in ways that allow developers to continue pretending
that shipping non-secure sites is acceptable, but I think there's
substantial willingness to find solutions deployable in the short-term.

Thanks again for y'all's comments!

Received on Sunday, 15 March 2020 08:23:02 UTC
