
Re: Limiting requests from the internet to the intranet.

From: Devdatta Akhawe <dev.akhawe@gmail.com>
Date: Fri, 8 Jan 2016 19:21:18 -0800
Message-ID: <CAPfop_3_aEeGLKsRxbG51XiXyjVJiHNqrPdjvsfbnh=oZS1JEQ@mail.gmail.com>
To: Justin Schuh <jschuh@google.com>
Cc: Brian Smith <brian@briansmith.org>, Mike West <mkwst@google.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Brad Hill <hillbrad@gmail.com>, Dan Veditz <dveditz@mozilla.com>, Ryan Sleevi <sleevi@google.com>, Devdatta Akhawe <dev@dropbox.com>, Anne van Kesteren <annevk@annevk.nl>, Chris Palmer <palmer@google.com>
Hi Justin,


On 8 January 2016 at 15:55, Justin Schuh <jschuh@google.com> wrote:
> We seem to be drifting a bit off target here, so let me try to clarify the
> purpose as the original instigator. We have entire classes of devices and
> localhost-serving software that are built with no expectation that they
> would be exposed to hostile traffic and are thus riddled with all manner of
> vulnerabilities. Unfortunately, the browser is currently providing a pivot
> point that exposes these vulnerabilities to attackers.
>
> Put more directly: The browser is introducing a dangerous attack surface by
> allowing remote sites to interact with unsecured local and private networks,
> and thus the burden should be on the browser to remove that attack surface
> where its presence is not explicitly requested or required.

I find this confusing.

As I view it, "apps are not designed with an expectation of being
exposed to malicious web traffic" is not a reason to break apps or
turn on a new requirement. What is a reason to turn on protections is
"suddenly browsers changed behavior, and what was safe for a long time
isn't any more".

For example, whole classes of vulns (clickjacking, CSRF, XSS) were
unexpected to many applications when first identified. Browsers didn't
stop doing cross-origin framing and cross-origin requests because of
it.

For me, the key point is your statement that "the browser is
introducing a dangerous attack surface". My possibly-wrong
understanding is that this behavior has been on for a long time (since
the very beginning of Chrome/Firefox?). Did browsers change recently
to make this a big threat to worry about?

As a hypothetical, let's say that two years from now we learn that,
after this change, all the routers are still vulnerable because they
have started blindly responding yes to the CORS preflights. Will we
again standardize a new protection because apps aren't secure?
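
(To make the hypothetical concrete: the "blindly responding yes"
failure mode could be as small as the sketch below. The function and
header set are illustrative, not taken from any real device firmware.)

```python
# Illustrative sketch of firmware that defeats the proposed protection:
# it approves every CORS preflight regardless of who is asking.
def preflight_response(origin):
    """Build preflight headers that say yes to any requesting origin."""
    return {
        "Access-Control-Allow-Origin": origin or "*",
        "Access-Control-Allow-Methods": "GET, POST",
        "Access-Control-Allow-Headers": "*",
    }

# Any public site's preflight gets waved through:
print(preflight_response("https://attacker.example"))
```

If vendors ship something like this just to stop breakage, the
preflight gate buys nothing.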

Re the rest of your email: if I did agree with you about the need to
protect against such broken web apps, then your points sound mostly
reasonable.

cheers
Dev

>
>> A better alternative, I think, would be to specify more clearly what is
>> meant by "defend against CSRF" in a document specifically targeting the
>> specific nature of SOHO routers and similar devices. That way router makers
>> can sooner finish their work to "defend against CSRF" on their end. I also
>> think it is essential to have a test suite and a reference implementation
>> for router makers to read, use, and copy. Note that existing documentation
>> on defending against CSRF, from OWASP and others, is either quite hand-wavy
>> or framework-specific. In the case of the OWASP documentation, there are too
>> many choices, IMO, such that one could easily get overwhelmed and get
>> trapped by the paradox of choice.
>>
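
(As a concrete baseline for what "defend against CSRF" could mean
here: a minimal synchronizer-token check, sketched below. The function
names, session model, and key handling are assumptions for
illustration, not a reference implementation.)

```python
# Minimal synchronizer-token CSRF check (illustrative sketch only).
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # per-device secret, generated at setup

def make_csrf_token(session_id):
    # Bind the token to the session so a token lifted from one user's
    # page can't be replayed against another session.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id, submitted_token):
    # Recompute the expected token and compare in constant time.
    expected = make_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)
```

The UI would embed make_csrf_token(session) in each form and call
check_csrf before any state-changing action; a forged cross-site
request cannot supply a valid token.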
>> Then, with a test suite in hand, we can look at what additional mechanisms
>> a browser would need to implement. Interestingly, the test suite for the new
>> browser functionality would be the same as, or a subset of, the test suite
>> for the routers' own mechanisms to "defend against CSRF." Thus, doing the
>> test suite first should not slow down the development of any browser
>> changes.
>
>
> I think there might be some confusion here by assuming some artificially
> small set of vulnerability patterns, such as a trivially quantifiable subset
> of XSRF.  However, as I mentioned above, what we're really dealing with is
> the entire gamut of Web vulnerabilities on devices that were never intended
> to have these interfaces exposed to hostile traffic. So, your proposal
> essentially amounts to: solve the entire problem of secure Web-facing
> software development. And I'm certain you weren't intending to set the bar
> quite that high.
>
>
>> Conversely, it is difficult to understand the given proposal without a
>> test suite. For example, to what extent is it important or unimportant to
>> disallow public->private top-level navigation? Is it only important to
>> disallow that kind of navigation if the browser supports
>> http://user:password@host URLs, or is blocking navigation for
>> http://user:password@host sufficient? A test suite should be able to easily
>> answer such questions.
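
(One of those test-suite entries could be as simple as a
credentials-in-URL check; the helper below is a hypothetical sketch of
that style of test, not part of any proposal.)

```python
# Hypothetical test-suite-style check: does a URL embed credentials,
# i.e. is it of the http://user:password@host form under discussion?
from urllib.parse import urlsplit

def has_embedded_credentials(url):
    parts = urlsplit(url)
    return parts.username is not None or parts.password is not None

# A suite could then assert the browser blocks navigation for exactly
# the URLs where this returns True.
assert has_embedded_credentials("http://admin:admin@192.168.1.1/")
assert not has_embedded_credentials("http://192.168.1.1/")
```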
>
>
> Once again the issue here is assuming artificially narrow scoping. Network
> devices and localhost-serving software run the gamut of authentication and
> overall architectural patterns, just like any normal Web-facing sites. So,
> this would similarly amount to enumerating all Web application architecture
> patterns.
>
>
>> We know this is a high-risk project from past experience. Mozilla tried to
>> solve this problem in Firefox and had to back out the change [1].
>
>
> The patch you're referencing appears to have been backed out of Firefox due
> to bugs in the implementation, not because of the general approach. I've
> personally experienced how something can be more difficult to implement in
> one browser over another, and also appreciate how we should make ease of
> implementation a priority to the extent we can. However, I don't understand
> how that should be interpreted as an argument against a superficially
> similar approach. Or, did I miss some context that would better demonstrate
> the point here?
>
>
>>
>> Already in this thread we have people saying that the proposed browser
>> changes would break their products.
>
>
> And as I've already stated earlier in the thread, the ability to update
> software is an absolute bare minimum requirement for reasonably secure
> software. Otherwise, it's simply not possible to remediate vulnerabilities
> that will inevitably be found and exploited. So, if a straightforward update
> to introduce the pre-flight is not practical, then the software stands
> effectively no chance of being safe when exposed to hostile traffic.
>
>
>> Browser developers lack good visibility into intranets and other private
>> networks to find and understand problems. These are all indications that any
>> change to the default navigation, iframe embedding, or XHR behavior of web
>> browsers to mitigate the issues is likely to take many iterations, and thus
>> a lot of time, to get right. Thus, a parallel approach of outreach to device
>> makers and browser development makes the most sense.
>
>
> Yes, outreach can be a very useful tool, and I'm all for it. But, the
> proposal here provides a full guarantee of backwards compatibility via site
> opt-in, and I'd strongly encourage any browser implementors to support the
> same via configuration (as Chrome will). And the scope and impact we're
> talking about here is very comparable to the ongoing refinement of
> mixed-content handling. So, it's a problem we have a decent sense of and I
> don't see a reason to be unnecessarily pessimistic or prepare to leave users
> at risk any longer than necessary.
>
> Accepting that, a middle ground might be to encourage a softer phase-in. In
> the case of Chrome, we could potentially have the initial UI provide user
> overrides. So, the interstitial would allow a direct click-through and the
> sub-resource loads would use something like the mixed-content shield. And
> once we got to a point where we're comfortable that we've minimized
> conflicts, we could remove the overrides from the UI (similar to how we
> handled NPAPI deprecation).
>
>
>> tl;dr:
>> * Let's make sure that the makers of the products that we're trying to
>> help are actually involved in the discussion.
>> * Let's build an open source test suite that device makers can use to
>> improve their products.
>> * Let's document, more specifically and precisely, what security measures
>> router makers need to use to defend themselves against CSRF and other
>> attacks.
>> * Let's create a mockup router web UI, or modify an open source web UI, to
>> use as a reference implementation to help router makers.
>> * Let's derive and evaluate any spec for changing browser behavior from
>> the test suite.
>> * Let's recognize that there is a high risk of failure for changing
>> browser behavior and that changing browser behavior only helps to a limited
>> extent.
>> * Let's trade high fives all around when it's all done.
>
>
> Hopefully I've already addressed the bullets inline and high fives are still
> on the table.
>
>
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=354493
>>
>> Cheers
>> Brian
>> --
>> https://briansmith.org/
>>
>
Received on Saturday, 9 January 2016 03:22:08 UTC
