Re: [MIX] Comments on draft Mixed Content spec

From: Brad Hill <hillbrad@gmail.com>
Date: Wed, 4 Jun 2014 07:44:06 -0700
Message-ID: <CAEeYn8jpCdEO4HbhSe+MZhr8ySd1zL488qnUNjb3WPqbdj+GXg@mail.gmail.com>
To: Mike West <mkwst@google.com>
Cc: Tanvi Vyas <tanvi@mozilla.com>, Michal Zalewski <lcamtuf@coredump.cx>, "public-webappsec@w3.org" <public-webappsec@w3.org>

Regarding forms - I hate to be the guy who seems like he's always pushing
back against https, but I wonder if this is a good idea.  It doesn't seem
like we intend to block linking or navigation to insecure resources.  How
is a form that different?  It seems like one possible consequence here is
that secure pages will fall back to constructing links with form parameters
in the GET query string if they cannot POST to http, and this just makes
the problem worse by allowing that data to leak into referrer strings,
server logs, etc.
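To make that fallback concrete, here is a rough sketch (not from the thread; example.com and the field names are invented) of how switching a form from POST to GET moves the submitted values into the URL itself:

```python
# Sketch: why a GET fallback is worse than a blocked POST. With
# method="GET", the field values are serialized into the URL, which
# browsers send in Referer headers and servers write to access logs;
# a POST body appears in neither.
from urllib.parse import urlencode

def form_submission_url(action, fields, method="POST"):
    """Return the URL the browser would request for a form submission."""
    if method.upper() == "GET":
        # Field values become part of the query string.
        return action + "?" + urlencode(fields)
    return action  # POST: values travel in the request body instead

fields = {"user": "alice", "token": "s3cr3t"}
print(form_submission_url("http://example.com/login", fields, "GET"))
print(form_submission_url("http://example.com/login", fields, "POST"))
```

Anything in the URL is fair game for Referer headers and server logs, which is the leak described above.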

I remember back in the day when browsers used to pop an "are you sure"
warning when you did a form submission to an insecure location from a
secure one.  I don't think many people left the "don't show this again" box
unchecked.

It seems there is a definite line crossed here between making sure the
current execution context is constructed from wholly secure parts, and
adding (what in the current real architecture of the web are rather
severe!) restrictions on how the graph of the web may be constructed and
navigated.  I think the latter is at best premature and maybe should be
considered out of scope for MIX.

-Brad


On Wed, Jun 4, 2014 at 6:00 AM, Mike West <mkwst@google.com> wrote:

> On Wed, Jun 4, 2014 at 7:09 AM, Tanvi Vyas <tanvi@mozilla.com> wrote:
>
>>  Hi Mike,
>>
>> Thank you very much for putting this spec together!  I went through it
>> and have a few comments.
>>
>> 2.1 - localhost is included in the definition of assumed secure origin.
>> In Firefox's implementation of the MCB, we block localhost.  We have had a
>> number of requests to allow it, but Dan has responded with some good
>> arguments to continue blocking it:
>> https://bugzilla.mozilla.org/show_bug.cgi?id=844556#c58
>> https://bugzilla.mozilla.org/show_bug.cgi?id=873349#c30
>>
>
> This fits into Brian's suggestion that we wrap localhost, file, and
> intranet restrictions into the spec. I'll work on that.
>
>
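A minimal sketch of the distinction under discussion (the scheme list, helper names, and the loopback flag are assumptions for illustration, not the spec's normative text): https/wss origins are treated as secure outright, while trusting loopback addresses is exactly the point on which Firefox's MCB and the draft currently differ.

```python
from urllib.parse import urlsplit

# Illustrative only: one way to express an "assumed secure origin"
# check. Firefox's MCB blocks loopback today (see the bugs above);
# the draft would allow it, hence the explicit opt-in flag here.
A_PRIORI_SECURE_SCHEMES = {"https", "wss"}
LOOPBACK_NAMES = {"localhost", "127.0.0.1", "::1"}

def is_assumed_secure(origin, trust_loopback=False):
    parts = urlsplit(origin)
    if parts.scheme in A_PRIORI_SECURE_SCHEMES:
        return True
    if trust_loopback and parts.hostname in LOOPBACK_NAMES:
        return True
    return False
```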
>> 3.1 - Sandboxed iframes
>> There is some discussion here about whether sandboxed iframes should be
>> considered optionally blockable passive content or active content -
>> https://bugzilla.mozilla.org/show_bug.cgi?id=903211.  We may need to
>> block other allow-* values in addition to allow-top-level-navigation.
>>
>
> I'm quite open to coming up with some definition of what sandbox flags
> we'd need to define in order to consider a frame suitably passive, and I'm
> pretty sympathetic to the concerns raised in comment 6 on that bug. I'm not
> really sure that there's any sufficiently safe combination that would
> satisfy users, however (for example, see comment #3 of that bug).
>
> +lcamtuf, who has opinions on this topic.
>
>
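Purely as a sketch of the kind of rule being debated (the token list is an assumption, not a settled definition from the thread): a sandboxed frame might count as passive only when its sandbox attribute grants none of the capability-restoring tokens.

```python
# Hypothetical sketch: treat a sandboxed frame as "suitably passive"
# only if its sandbox attribute grants none of these tokens. Which
# allow-* values belong on this list is precisely the open question.
ACTIVE_TOKENS = {
    "allow-scripts",
    "allow-forms",
    "allow-top-level-navigation",
    "allow-popups",
    "allow-same-origin",
}

def suitably_passive(sandbox_attr):
    tokens = set(sandbox_attr.split())
    return not (tokens & ACTIVE_TOKENS)
```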
>> Issue 2 & 3- Regarding form submissions, it is difficult to detect
>> whether the target of a form is secure until the submit button is actually
>> hit.  The form action may be a call to a javascript function.  Without
>> parsing through the function, the browser does not know whether or not the
>> intended destination is secure.  Determining whether an insecure form is
>> present and including UI to indicate this to the user (e.g., no green lock)
>> seems impossible.  Perhaps someone has an idea on how we can do this?
>>
>
> One suggestion would be to examine the action of forms that are on the
> page at parse-time. JavaScript could certainly add new and exciting
> vulnerabilities dynamically, but throwing a mixed-content warning (and
> disabling the form in some way?) if we see a known-bad form when parsing
> seems like a reasonable first step.
>
>
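One way to picture that parse-time pass (purely illustrative; the function and its return values are invented, not from any spec or browser): classify each form action as it is parsed, leaving javascript: actions as unknowable, which is the difficulty raised above.

```python
from urllib.parse import urlsplit

# Sketch of a parse-time check: flag form actions that would submit
# to an insecure target from a secure page. javascript: actions are
# opaque without running the script, so they stay "unknown".
def classify_form_action(page_scheme, action):
    scheme = urlsplit(action).scheme
    if scheme == "javascript":
        return "unknown"   # can't tell without executing the script
    if page_scheme == "https" and scheme == "http":
        return "mixed"     # known-bad at parse time
    return "ok"            # relative or https actions
```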
>> For Issue 2, Firefox presents a warning to the user before the data is
>> actually submitted (
>> http://people.mozilla.org/~tvyas/https_post_http_with_js.png).  This
>> warning has been around for many, many years, so we may well be able to get
>> away with it without much of a web-compatibility issue.  We can look into
>> the percentage of pages that hit this warning as a followup.
>>
>
> Data would be wonderful. Gimmie gimmie gimmie. :)
>
> I'll try to get a similar metric into Blink.
>
>> 4.3 - "Note: It is *strongly recommended* that users take advantage of
>> such an option if provided."
>> I'm not sure if we want to strongly recommend that users disable
>> optionally blockable mixed passive content.  As stated elsewhere in the
>> spec, that means that ~43% of secure pages will not function properly.
>>
>
> ~43% of secure pages have some sort of mixed content (according to that
> study I cited). Whether or not they're "functioning properly" depends on
> what you mean by "properly" and "function". :) Worst case, a prominent
> image is broken (30% of secure sites, according to that study), or a
> video/audio is broken (no data in the study, so let's assume it's 0% :) ).
>
> If it were even remotely practical, I'd recommend that users block HTTP
> entirely. As it stands, recommending that users block mixed passive content
> seems pretty reasonable, even if we as browser vendors can't practically
> make that choice for them.
>
>
>> 5.1 - Algorithm.  As Anne has mentioned, the algorithm included in the
>> spec doesn't match the examples provided, but you are already planning to
>> modify this.
>>
>
> Which one? I've updated the algorithms a bit in the last day or so: are
> they still broken?
>
>
>> 7 - the word "defined" is repeated.
>>
>
> Thanks!
> https://github.com/w3c/webappsec/commit/c3bc367506b1e8a991a5d1c49510d3f5460c91d3
>
> -mike
>
Received on Wednesday, 4 June 2014 14:44:43 UTC
