W3C home > Mailing lists > Public > public-webappsec@w3.org > January 2016

Re: Limiting requests from the internet to the intranet.

From: Justin Schuh <jschuh@google.com>
Date: Fri, 8 Jan 2016 15:55:52 -0800
Message-ID: <CAObUUC-jnwUTG-8abHO+0uQWnAcHTxGpbfrDNsQEQK6Hn=9AJQ@mail.gmail.com>
To: Brian Smith <brian@briansmith.org>
Cc: Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Brad Hill <hillbrad@gmail.com>, Dan Veditz <dveditz@mozilla.com>, Ryan Sleevi <sleevi@google.com>, Devdatta Akhawe <dev@dropbox.com>, Anne van Kesteren <annevk@annevk.nl>, Chris Palmer <palmer@google.com>
We seem to be drifting a bit off target here, so let me try to clarify the
purpose as the original instigator. We have entire classes of devices and
localhost-serving software that are built with no expectation that they
would be exposed to hostile traffic and are thus riddled with all manner of
vulnerabilities. Unfortunately, the browser is currently providing a pivot
point that exposes these vulnerabilities to attackers.

Put more directly: *The browser is introducing a dangerous attack surface
by allowing remote sites to interact with unsecured local and private
networks, and thus the burden should be on the browser to remove that
attack surface where its presence is not explicitly requested or required.*

The key points at issue are:

   - Is the existing exposure generally critical to the operation of these
   devices and software?
      - My take: *No. Not for the vast majority of cases.*
   - Do we have a way of blocking this exposure where it is not required?
      - My take: *Yes. Mike's proposal establishes a secure default by
      covering the majority of cases. It won't cover everything, but it will
      dramatically reduce the exposure of vulnerable devices.*
   - Can we provide compatibility mechanisms for software that requires
   this exposure?
      - My take: *Yes. Accepting that we still need to hash out the exact
      shape these mechanisms should take.*
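
For concreteness, here's a rough sketch of what the device-side opt-in
could look like under Mike's proposal: the browser preflights
public->private requests, and only software that explicitly grants access
receives the actual request. I'm using `Access-Control-Allow-External` as
the grant header per my reading of the draft; treat the exact names as
placeholders, since that's part of what still needs hashing out.

```python
from http.server import BaseHTTPRequestHandler

# Sketch of a device opting in to external access under cors-rfc1918.
# Header names are placeholders based on the current draft, not final.

class AdminUI(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # A device that *wants* to be reachable from external origins
        # opts in by answering the preflight affirmatively.
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin",
                         "https://vendor.example.com")
        self.send_header("Access-Control-Allow-External", "true")
        self.end_headers()

    def do_GET(self):
        # Devices that never answer the preflight simply stop being
        # reachable from public origins -- the secure default.
        body = b"router admin page"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

The point being: for software that genuinely needs the exposure, the
update is a handful of lines in one handler, not a redesign.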

Now, I'll try to cover your specific questions inline.

On Fri, Jan 8, 2016 at 12:49 PM, Brian Smith <brian@briansmith.org> wrote:

> Mike West <mkwst@google.com> wrote:
>> I've put together a kinder, gentler take on hardening the user agent
>> against the kinds of attacks that such requests enable:
>> https://mikewest.github.io/cors-rfc1918/. It's pretty rough, as I've
>> only poked at it sporadically over the holidays, but I think there's enough
>> there to get a conversation going.
> First, it seems wrong that no router makers are represented in this
> thread. (I heard that Chromium OS is the foundation of Google OnHub, which
> is an OS for routers, so the Googlers are router software makers in some
> sense. However, IIUC, Google OnHub uses an iPhone or Android app for
> configuration, not a web UI, so I guess OnHub isn't relevant to this
> discussion.) We should make some effort to bring the router makers into the
> discussion or move to a venue that is more relevant to them.
> Anyway, the premise of this work is that SOHO router makers (and makers of
> similar devices) are doing such a bad job at securing their configuration
> web apps that browsers need to do special things to defend the routers that
> they don't do to defend other web apps. But why are router makers doing a
> bad job? Are they doing worse than web app developers in general? How so?
> Are their products somehow more at risk than web apps in general? How so?

As I mentioned above, these devices were not built with any expectation of
hostile exposure from the internal network interfaces. You can debate
whether or not that was ever a realistic expectation on the part of their
makers, but it is the reality of the situation for the overwhelming
majority of consumer devices.

> It seems wrong that nobody in this thread represents a SOHO router maker (I
> heard OnHub-based routers are based on Chromium OS, so the Googlers are in
> some sense "router toolkit makers").

Well, this isn't at all unique to routers. Sure, routers are the obvious
example, but the attack surface issue is also common to printers, TVs,
personal NAS boxes, and pretty much any other network-attached consumer
device or localhost-serving software. And the proposed change doesn't
affect the operation of the overwhelming majority of these devices. It
merely removes the attack surface that created the unintentional exposure.
As for the use cases that are affected (largely localhost-serving software)
the interested parties appear to be already ensuring their representation
(based on my CC list).

> Mike's nice document says "[...] a router’s web-based administration
> interface must be designed and implemented to defend against CSRF on its
> own, and should not rely on a UA that behaves as specified in this
> document."

Well yes, that is to say that best practices are always best practices, and
I don't think anyone intends to absolve the makers of these devices and
software from whatever security responsibilities they should have.
Similarly, blaming the device and software makers doesn't absolve browser
makers from unnecessarily introducing attack surface.
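
(For concreteness, "defend against CSRF on its own" means something like
the standard synchronizer-token pattern, sketched below. The helper names
are mine for illustration, not from any particular firmware, and a real
implementation would bind the token to an authenticated session.)

```python
import hmac
import secrets

# Minimal synchronizer-token CSRF defense -- the per-request check a
# device's web UI needs on every state-changing endpoint. Sketch only.

def new_csrf_token(session):
    """Mint a token, stash it in the session, and embed it in each form."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def check_csrf(session, submitted_token):
    """Reject any state-changing request whose token doesn't match."""
    expected = session.get("csrf_token", "")
    # Constant-time compare avoids leaking the token byte by byte.
    return hmac.compare_digest(expected, submitted_token or "")
```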

> My hypothesis is that the people making the vulnerable software aren't "web
> developers working on a router" but more "networking developers working on
> a web interface." Accordingly, it may be unreasonable to just say "defend
> against CSRF" and expect them to effectively do so.

More accurately, the list includes: XSRF, XSS, predictable default
passwords, memory corruption, command injection. I expect I missed a few
big ones, but honestly, it's an astonishingly long list given the
simplicity of the web interfaces exposed by most of these devices.
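
(To give a flavor of why the list runs so long: the canonical
embedded-web-UI bug is the diagnostics page that splices user input into a
shell command. A hypothetical, but representative, sketch:)

```python
# The canonical embedded-web-UI command injection: a "ping" diagnostics
# handler that splices user input straight into a shell command line.
def build_ping_command_vulnerable(host):
    # DON'T: host = "8.8.8.8; reboot" smuggles in a second command
    # once this string hits a shell via system()/popen().
    return "ping -c 1 " + host

# The isolated fix is trivial (argument vector, no shell), but auditing
# an entire shipped firmware image for every such splice is not.
def build_ping_command_safe(host):
    return ["ping", "-c", "1", host]
```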

> A better alternative, I think, would be to specify more clearly what is
> meant by "defend against CSRF" in a document specifically targeting the
> specific nature of SOHO routers and similar devices. That way router makers
> can sooner finish their work to "defend against CSRF" on their end. I also
> think it is essential to have a test suite and a reference implementation
> for router makers to read, use, and copy. Note that existing documentation
> on defending against CSRF from OWASP and others, is either quite hand-wavy
> or framework-specific. In the case of the OWASP documentation, there are
> too many choices, IMO, such that one could easily get overwhelmed and get
> trapped by the paradox of choice.
> Then, with a test suite in hand, we can look at what additional mechanisms
> a browser would need to implement. Interestingly, the test suite for the
> new browser functionality would be the same as, or a subset of, the test
> suite for the routers own mechanisms to "defend against CSRF." Thus, doing
> the test suite first should not slow down the development of any browser
> changes.

I think there might be some confusion here, stemming from the assumption
of an artificially small set of vulnerability patterns, such as a
trivially quantifiable subset of XSRF. However, as I mentioned above,
what we're really dealing
with is the entire gamut of Web vulnerabilities on devices that were never
intended to have these interfaces exposed to hostile traffic. So, your
proposal essentially amounts to: solve the entire problem of secure
Web-facing software development. And I'm certain you weren't intending to
set the bar quite that high.

> Conversely, it is difficult to understand the given proposal without a test
> suite. For example, to what extent is it important or unimportant to
> disallow public->private top-level navigation? Is it only important to
> disallow that kind of navigation if the browser supports
> http://user:password@host URLs, or is blocking navigation for
> http://user:password@host sufficient? A test suite should be able to
> easily answer such questions.

Once again, the issue here is assuming artificially narrow scoping.
Network devices and localhost-serving software run the gamut of
authentication and overall architectural patterns, just like any normal
Web-facing site. So, this would similarly amount to enumerating all Web
application architectures.

> We know this is a high-risk project from past experience. Mozilla tried to
> solve this problem in Firefox and had to back out the change [1].

The patch you're referencing appears to have been backed out of Firefox due
to bugs in the implementation, not because of the general approach. I've
personally experienced how something can be more difficult to implement in
one browser over another, and also appreciate how we should make ease of
implementation a priority to the extent we can. However, I don't understand
how that should be interpreted as an argument against a superficially
similar approach. Or, did I miss some context that would better demonstrate
the point here?

> Already in this thread we have people saying that the proposed browser
> changes would break their products.

And as I've already stated earlier in the thread, the ability to update
software is an absolute bare minimum requirement for reasonably secure
software. Otherwise, it's simply not possible to remediate vulnerabilities
that will inevitably be found and exploited. So, if a straightforward
update to introduce the pre-flight is not practical, then the software
stands effectively no chance of being safe when exposed to hostile traffic.

> Browser developers lack good visibility into intranets and other private
> networks to find and understand problems. These are all indications that
> any change to the default navigation, iframe embedding, or XHR behavior of
> web browsers to mitigate the issues is likely to take many iterations, and
> thus a lot of time, to get right. Thus, a parallel approach of outreach to
> device makers and browser development makes the most sense.

Yes, outreach can be a very useful tool, and I'm all for it. But, the
proposal here provides a full guarantee of backwards compatibility via site
opt-in, and I'd strongly encourage any browser implementors to support the
same via configuration (as Chrome will). And the scope and impact we're
talking about here is very comparable to the ongoing refinement of
mixed-content handling. So, it's a problem we have a decent sense of and I
don't see a reason to be unnecessarily pessimistic or prepare to leave
users at risk any longer than necessary.

Accepting that, a middle ground might be to encourage a softer phase-in. In
the case of Chrome, we could potentially have the initial UI provide user
overrides. So, the interstitial would allow a direct click-through and the
sub-resource loads would use something like the mixed-content shield. And
once we got to a point where we're comfortable that we've minimized
conflicts, we could remove the overrides from the UI (similar to how we
handled NPAPI deprecation).

> * Let's make sure that the makers of the products that we're trying to
> help are actually involved in the discussion.
> * Let's build an open source test suite that device makers can use to
> improve their products.
> * Let's document, more specifically and precisely, what security measures
> router makers need to use to defend themselves against CSRF and other
> attacks.
> * Let's create a mockup router web UI, or modify an open source web UI, to
> use as a reference implementation to help router makers.
> * Let's derive and evaluate any spec for changing browser behavior from
> the test suite.
> * Let's recognize that there is a high risk of failure for changing
> browser behavior and that changing browser behavior only helps to a limited
> extent.
> * Let's trade high fives all around when it's all done.

Hopefully I've already addressed the bullets inline and high fives are
still on the table.

> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=354493
> Cheers
> Brian
> --
> https://briansmith.org/
Received on Friday, 8 January 2016 23:57:02 UTC
