Re: Comments on Mixed Content

Pinging this back up after the holiday break. Hi David!

--
Mike West <mkwst@google.com>, @mikewest

Google Germany GmbH, Dienerstrasse 12, 80331 München,
Germany, Registergericht und -nummer: Hamburg, HRB 86891, Sitz der
Gesellschaft: Hamburg, Geschäftsführer: Graham Law, Christine Elizabeth
Flores
(Sorry; I'm legally required to add this exciting detail to emails. Bleh.)

On Thu, Dec 11, 2014 at 12:51 PM, Mike West <mkwst@google.com> wrote:

> Thanks for the feedback, David! I'll address the individual points inline.
>
> On Wed, Dec 10, 2014 at 2:32 PM, David Walp <David.Walp@microsoft.com>
> wrote:
>
>>  1) Section 2.2, TLS-protected & Weakly TLS-protected (and throughout
>> the spec).
>>
>>  There appears to be an assumption that the only environment is the
>> internet and that intranet environments are not addressed. We think this
>> would be addressed by adding wording in section 2.2 stating that user
>> agents are free to interpret protection within a trusted environment.
>>
>
> I share Chris' skepticism of such wording. The NSA's infamous "SSL added
> and removed here" slide should make it quite clear that intranets are a)
> targets, and b) not as secure as we'd all like them to be.
>
> Additionally, intranets are free to use insecure transport if they choose
> to. `http://intranet/` would not trigger mixed content
> warnings (and would also be clearly insecure). If an intranet chooses to
> use secure transport, we should not reduce their expectations of security
> based simply upon hostname or IP address.
>
>
>>  2) Section 3.2, "Plugin data" in bulleted list.
>>
>>  Our assumption is that blocking (or not blocking) plugin data is the
>> responsibility of the plugin. Correct?
>>
>
> That likely depends on the implementation. In Chrome, the PPAPI version of
> Flash sends at least most of its traffic through Chrome's network stack,
> which means we can make some progress in terms of blocking mixed requests
> without relying on the plugin to do so for us.
>
> It's entirely possible that other implementations will need to rely on the
> plugin itself to do blocking (as Chrome and other UAs do with Flash and
> Incognito mode today).
>
>
>>  3) Section 3.2, "cspreport" in sentence starting with "These resource
>> types map to the following Fetch request contexts:".
>>
>>  Our concern is that using cspreport is a valid method for finding mixed
>> content. Is there another specified method for finding mixed content?
>>
>
> CSP is a great way to discover mixed content! However, this document
> requires that your CSP reporting endpoint be secure if the page generating
> reports is secure. This seems like a reasonable restriction to ensure the
> privacy properties of HTTPS are kept intact.
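>
> For example, a report-only policy along the following lines would surface
> insecure subresource loads without actually blocking anything (the
> reporting endpoint here is hypothetical, and, as noted above, it must
> itself be secure when the reporting page is secure):
>
>     Content-Security-Policy-Report-Only: default-src https:; report-uri https://example.com/csp-reports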
>
>>  4) Section 4.1, twice there is the text "return a synthetically
>> generated network error response".
>>
>>  The statement "return a synthetically generated network error
>> response" doesn't reflect the goal of the requirement to us. We think the
>> statement relates to the need to return a network error to script on the
>> web page because of mixed content. Could we please get some clarification
>> about the requirement behind this text?
>>
>
> First, note that section 4 has been more or less dropped entirely in the
> latest editor's draft, in response to some of Brian Smith's feedback from a
> week or three ago.
>
> Second, that phrase is pointing to Fetch's concept of a "network error"
> (see https://fetch.spec.whatwg.org/#concept-network-error). The goal is
> to ensure that the Fetch process doesn't return (or request!) actual data
> from the network, but instead pretends that a network error occurred,
> dealing with the request just as though someone had yanked out your
> Ethernet cable mid-request. See step #4 of
> https://fetch.spec.whatwg.org/#fetching for details of how this works.
>
> I think the intent is fairly straightforward, but I'm happy to consider
> suggestions for a phrasing that would be less confusing.
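>
> As a rough sketch of what this looks like from script (the host names
> below are made up), the blocked request is indistinguishable from a
> genuine network failure:
>
>     // On a page served from https://example.com/, this insecure request
>     // would be blocked and surfaced to script as a plain network error:
>     // onerror fires and status remains 0, just as if the connection had
>     // been dropped mid-request.
>     var xhr = new XMLHttpRequest();
>     xhr.open('GET', 'http://insecure.example/data.json');
>     xhr.onerror = function () {
>       console.log('Network error; status = ' + xhr.status);
>     };
>     xhr.send();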
>
>>  5) Section 4.1, list items #1 and #3.
>>
>>  Why is there an inconsistency in the error handling mechanism between #1
>> (XHR) and #3 (WebSockets)?
>>
>
> WebSockets currently throw if the secure flag is false but the calling
> origin is secure (see step #2 of
> http://www.w3.org/TR/2012/CR-websockets-20120920/#the-websocket-interface).
>
> Anne (CC'd) convinced me that changing XHR to do the same would be a bad
> idea from a compatibility perspective. However, given that WebSockets is
> already throwing, and has been for years, it seems reasonable to simply
> update its language to match this specification and current concepts (note
> that "entry script" no longer exists).
>
>
>>  6) Section 5.1, Example 4.
>>
>>  We would like to understand the rationale behind this example. Given
>> that a.com is already insecure, how is the user to understand that the
>> iframe with b.com is different (i.e., secure)?
>>
>
> I agree with you that the end-user can't be expected to understand or
> distinguish between portions of a page which are framed or native. I don't
> believe that user expectations are the only expectations we need to care
> about.
>
> In particular, it seems unwise to allow a site's security properties to be
> changed based on the context in which it is loaded. If I serve a site over
> HTTPS, that ought to give me certain expectations for behavior, and allow
> me to assume certain limitations.
>
> Consider, for example, a page on `b.com` which mistakenly attempts to
> load script from an insecure source. That script should be blocked, for `
> b.com`'s protection, regardless of the page's ancestry.
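>
> A minimal sketch of that scenario (the script URL is invented):
>
>     <!-- http://a.com/ (insecure top-level page) -->
>     <iframe src="https://b.com/widget"></iframe>
>
>     <!-- Inside https://b.com/widget, this request is still blocked as
>          mixed content, despite the insecure ancestor: -->
>     <script src="http://cdn.example/library.js"></script>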
>
>>  7) Section 5.1, Example 5 - "even though the framed data URL was not".
>>
>>  We believe the text "even though the framed data URL was not" is
>> incomplete. Our opinion is that the data URL should be treated the same
>> as the web page that contains it.
>>
>
> `data:` URLs usually aren't delivered over a secure connection. Typically,
> they're synthesized by JavaScript and injected into a page. This
> could potentially even happen from a child frame (`window.parent.frames`)
> or from an ancillary browsing context (`window.opener.frames`).
>
> In this case, there's no meaningful distinction: we treat the frame as
> secure if one or more of its ancestors is secure, and as insecure otherwise.
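>
> As a rough illustration (the payload is invented), a secure page might
> synthesize such a frame like this:
>
>     // Running on https://example.com/: the data: frame is treated as
>     // secure because its ancestor is, so the insecure image request it
>     // contains is blocked as mixed content.
>     var frame = document.createElement('iframe');
>     frame.src = 'data:text/html,<img src="http://evil.com/pixel.png">';
>     document.body.appendChild(frame);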
>
> How would you suggest that we change the phrasing of this example,
> assuming you agree with the conclusion that the insecure request to `
> http://evil.com/` ought to be blocked?
>
>>  8) Sections 5.2 & 7 - What about legacy XHR?
>>
>>  Sections 5.3 & 7 both address Fetch implementations, but there are no
>> similar sections for XHR. Given the current wide adoption of XHR, why are
>> similar sections about XHR not needed?
>>
>
> Fetch defines the mechanisms that XHR uses in order to go out to the
> network, grab data, and return it to the client. In other words, XHR is
> layered on top of Fetch, so altering Fetch to support mixed content checks
> implicitly alters XHR as well.
>
>>  9) Section 5.2.
>>
>>  We believe that examples at the end of section 5.2 (as in section 5.1)
>> would be very useful and add clarity.
>>
>
> Noted. I'll add an example or two here and in section 5.3.
>
> Thanks again, this is helpful!
>
> --
> Mike West <mkwst@google.com>, @mikewest
>
> Google Germany GmbH, Dienerstrasse 12, 80331 München,
> Germany, Registergericht und -nummer: Hamburg, HRB 86891, Sitz der
> Gesellschaft: Hamburg, Geschäftsführer: Graham Law, Christine Elizabeth
> Flores
> (Sorry; I'm legally required to add this exciting detail to emails. Bleh.)
>

Received on Thursday, 8 January 2015 11:21:46 UTC