
Re: Limiting requests from the internet to the intranet.

From: Brian Smith <brian@briansmith.org>
Date: Fri, 8 Jan 2016 17:40:26 -1000
Message-ID: <CAFewVt5YFPMUt0A5QFSFm_nS0iNKfWhW3kXJgKZQ99LUX6WTVw@mail.gmail.com>
To: Justin Schuh <jschuh@google.com>
Cc: Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Brad Hill <hillbrad@gmail.com>, Dan Veditz <dveditz@mozilla.com>, Ryan Sleevi <sleevi@google.com>, Devdatta Akhawe <dev@dropbox.com>, Anne van Kesteren <annevk@annevk.nl>, Chris Palmer <palmer@google.com>
On Fri, Jan 8, 2016 at 1:55 PM, Justin Schuh <jschuh@google.com> wrote:

> We seem to be drifting a bit off target here, so let me try to clarify the
> purpose as the original instigator. We have entire classes of devices and
> localhost-serving software that are built with no expectation that they
> would be exposed to hostile traffic and are thus riddled with all manner of
> vulnerabilities. Unfortunately, the browser is currently providing a pivot
> point that exposes these vulnerabilities to attackers.
>

localhost is quite different from the rest of the private addresses.
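To make that distinction concrete, here is a minimal sketch using Python's stdlib `ipaddress` module to separate loopback (localhost), RFC 1918 private, and public addresses; the categories are the standard ones, the script itself is just illustrative.

```python
# Classify addresses the way the discussion distinguishes them:
# loopback (localhost) vs. private-network vs. public.
import ipaddress

for addr in ["127.0.0.1", "::1", "192.168.1.1", "10.0.0.5", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    # Check loopback first: loopback addresses also report is_private=True.
    kind = ("loopback" if ip.is_loopback
            else "private" if ip.is_private
            else "public")
    print(addr, "->", kind)
```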


> Put more directly: *The browser is introducing a dangerous attack surface
> by allowing remote sites to interact with unsecured local and private
> networks, and thus the burden should be on the browser to remove that
> attack surface where its presence is not explicitly requested or required.*
>

The browser is introducing a dangerous attack surface by allowing remote
sites to interact with unsecured remote sites too. But this is known. Even
as far as the private network situation is concerned, one of the papers
cited in Mike's document is from 2006.


> Then, with a test suite in hand, we can look at what additional mechanisms
>> a browser would need to implement. Interestingly, the test suite for the
>> new browser functionality would be the same as, or a subset of, the test
>> suite for the routers own mechanisms to "defend against CSRF." Thus, doing
>> the test suite first should not slow down the development of any browser
>> changes.
>>
>
> I think there might be some confusion here by assuming some artificially
> small set of vulnerability patterns, such as a trivially quantifiable
> subset of XSRF.  However, as I mentioned above, what we're really dealing
> with is the entire gamut of Web vulnerabilities on devices that were never
> intended to have these interfaces exposed to hostile traffic. So, your
> proposal essentially amounts to: solve the entire problem of secure
> Web-facing software development. And I'm certain you weren't intending to
> set the bar quite that high.
>

<snip>


> Once again the issue here is assuming artificially narrow scoping.
>

One might read your statement here as saying that I am the one narrowing
the scope. CSRF against routers is the one thing that is cited in the
documents given as motivation in the introduction. That's why I focused on
that. It would be useful to add some references to vulnerabilities that
don't require CSRF to work to show why fixing CSRF isn't enough.
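For reference, the kind of CSRF defense a device maker could ship is not exotic. A minimal sketch (all names here are illustrative, not from any real firmware): reject state-changing requests whose Origin header isn't on a device-local allowlist.

```python
# Hedged sketch of an Origin-header check for a device's embedded web UI.
# ALLOWED_ORIGINS and is_same_site_request are hypothetical names.
ALLOWED_ORIGINS = {"http://192.168.1.1", "http://router.local"}

def is_same_site_request(method, headers):
    """Allow safe methods; require a known Origin on state-changing ones."""
    if method in ("GET", "HEAD"):
        return True  # safe methods shouldn't change state (if the device honors that)
    origin = headers.get("Origin")
    # Missing or foreign Origin on a POST etc. -> treat as cross-site, reject.
    return origin in ALLOWED_ORIGINS

assert is_same_site_request("GET", {})
assert is_same_site_request("POST", {"Origin": "http://192.168.1.1"})
assert not is_same_site_request("POST", {"Origin": "https://evil.example"})
assert not is_same_site_request("POST", {})
```

This is exactly the sort of check a router-side test suite could exercise directly, with or without any browser changes.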


> We know this is a high-risk project from past experience. Mozilla tried to
>> solve this problem in Firefox and had to back out the change [1].
>>
>
> The patch you're referencing appears to have been backed out of Firefox
> due to bugs in the implementation, not because of the general approach.
> I've personally experienced how something can be more difficult to
> implement in one browser over another, and also appreciate how we should
> make ease of implementation a priority to the extent we can. However, I
> don't understand how that should be interpreted as an argument against a
> superficially similar approach. Or, did I miss some context that would
> better demonstrate the point here?
>

Like I said, changing how browsers work for public -> private communication
by default is high risk. "High risk" doesn't mean "bad." It will likely
take a long time to work out exactly what would be compatible-enough to
work. We might very well find out that nothing is compatible enough and the
whole thing gets canceled. Further, the more conservative browsers are
likely to take a very long time to deploy it, if they ever do. If there is
no parallel effort to improve the situation from the device-maker side then
there will still be failures on a massive scale indefinitely.


> Browser developers lack good visibility into intranets and other private
>> networks to find and understand problems. These are all indications that
>> any change to the default navigation, iframe embedding, or XHR behavior of
>> web browsers to mitigate the issues is likely to take many iterations, and
>> thus a lot of time, to get right. Thus, a parallel approach of outreach to
>> device makers and browser development makes the most sense.
>>
>
> Yes, outreach can be a very useful tool, and I'm all for it. But, the
> proposal here provides a full guarantee of backwards compatibility via site
> opt-in,
>

"backwards compatibility via site opt-in" is not backward compatible.
Again, pointing this out isn't the same thing as saying that the proposal
is bad. The point of mentioning it is to raise awareness of how high the
risk of failure is.


> I don't see a reason to be unnecessarily pessimistic or prepare to leave
> users at risk any longer than necessary.
>

There's never a reason to be unnecessarily pessimistic, and we shouldn't
waste time preparing to leave users at risk any longer than necessary.


> Accepting that, a middle ground might be to encourage a softer phase-in.
> In the case of Chrome, we could potentially have the initial UI provide
> user overrides. So, the interstitial would allow a direct click-through and
> the sub-resource loads would use something like the mixed-content shield.
> And once we got to a point where we're comfortable that we've minimized
> conflicts, we could remove the overrides from the UI (similar to how we
> handled NPAPI deprecation).
>

The interstitial would effectively be CSRF protection, right? Presumably it
would only be for (i)frames and navigation. I suspect these are the cases
most likely to cause compatibility issues. I don't think that there would
be as many compatibility issues for <script> and <img>. I could see there
being compatibility issues with non-CORS POST.
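The distinction I'm drawing follows the Fetch spec's "simple request" rule: `<script>`, `<img>`, and form-style POSTs reach the device with no preflight at all, while other requests trigger an OPTIONS round trip. A rough sketch of that rule (simplified; it ignores the safelisted-header checks):

```python
# Approximation of which requests browsers send without a CORS preflight.
SAFE_METHODS = {"GET", "HEAD", "POST"}
SAFE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, content_type=None):
    if method not in SAFE_METHODS:
        return True
    # A POST with a non-form content type (e.g. application/json) preflights.
    if content_type is not None and content_type not in SAFE_CONTENT_TYPES:
        return True
    return False

assert not needs_preflight("GET")                       # <img>, <script>, navigation
assert not needs_preflight("POST", "text/plain")        # non-CORS form POST
assert needs_preflight("POST", "application/json")
assert needs_preflight("DELETE")
```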

If these web interfaces are really terrible, the CORS preflights themselves
might be a security issue; e.g., a buffer overflow because "OPTIONS" is longer
than "GET" or "POST". XSS, some kinds of CSRF, buffer overflows, and other
things could very well apply too, especially if the web server ignores the
HTTP method.
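To illustrate the method-length point (a hypothetical parser, not from any cited device): firmware that sized its method buffer for "GET"/"POST" never expected the 7-byte "OPTIONS" a preflight sends. In C the unchecked copy would overwrite adjacent memory; this Python sketch just simulates the bounds check such code is missing.

```python
METHOD_BUF_SIZE = 5  # room for "POST" plus a NUL terminator (hypothetical sizing)

def parse_method(request_line):
    method = request_line.split(b" ", 1)[0]
    if len(method) + 1 > METHOD_BUF_SIZE:
        # A careless C implementation would memcpy here with no length check,
        # overflowing the fixed buffer.
        raise ValueError("method too long for fixed buffer")
    return method

print(parse_method(b"GET / HTTP/1.1"))  # fits
try:
    parse_method(b"OPTIONS / HTTP/1.1")
except ValueError:
    print("OPTIONS overflows the fixed buffer")
```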

I still think the lack of a threat model more informative than "everything
that this proposal mitigates and nothing it doesn't mitigate" makes it
difficult to reason about the proposal. You're effectively saying "here is
the solution" without stating the problem in a way that allows people to
analyze it.

> Hopefully I've already addressed the bullets inline and high fives are
> still on the table.
>

Awesome!

Cheers,
Brian
-- 
https://briansmith.org/
Received on Saturday, 9 January 2016 03:40:56 UTC
