Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

On 01/12/2015 20:41, Brad Hill wrote:
>> As far as I see it, a "mixed content" has the word "content", which is
>> supposed to designate something that can be included in a web page and
>> therefore be dangerous.
> 
> "Mixed Content" (and "mixed content blocking") is a term of art that has
> been in use for many years in the browser community.  As such, you are
> correct that it is a bit inadequate in that its coinage predates the
> widespread use of AJAX-type patterns, but we felt that it was better to
> continue to use and refine a well-known term than to introduce a new
> term. Apologies if this has created any confusion.
> 
> More than just "content", the question boils down to "what does seeing
> the lock promise the user?"  Browsers interpret the lock as a promise to
> the user that the web page / application they are currently interacting
> with is safe according to the threat model of TLS.  That is to say, it
> is protected end-to-end on the network path against attempts to
> impersonate, eavesdrop or modify traffic.  In a modern browser, this
> includes not just fetches of images and script done declaratively in
> HTML, but any kind of potentially insecure network communication.
> 

Then you should follow your own rules and apply this policy to WebRTC as
well, i.e. allow WebRTC to work only from http pages, since its peer
connections are not authenticated via the Web PKI.
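
To make the contradiction concrete, here is roughly what a modern browser
does today from a page loaded over https (the endpoint below is just a
placeholder, and this is only an illustration, not anybody's actual code):

    // Blocked as mixed content: the browser refuses the insecure scheme,
    // typically throwing a SecurityError at construction time.
    try {
      var ws = new WebSocket("ws://node.example.org:8000/"); // placeholder
    } catch (e) {
      console.log("ws:// rejected from a secure context: " + e.name);
    }

    // Allowed from the very same https page: the DTLS certificates used by
    // the peers are self-signed, never validated against the Web PKI, only
    // fingerprint-checked through whatever signaling the application uses.
    var pc = new RTCPeerConnection({ iceServers: [] });
    pc.createDataChannel("data");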

> In general, we've seen this kind of argument before for building secure
> protocols at layer 7 on top of insecure transports - notably Netflix's
> "Message Security Layer".  Thus far, no browser vendors have been
> convinced that these are good ideas, that there is a way to safely allow
> their use in TLS-protected contexts, or that building these systems with
> existing TLS primitives is not an adequate solution.

Netflix's MSL documentation is not clear and only mentions an issue with
http (not ws) inside https; they probably face the very same problem with
invalid certificates.

In effect, browser vendors are convinced that http with ws is better than
https with ws...

Could you please provide an example of a serious issue caused by using ws
from an https page?
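
For reference, "a secure protocol at layer 7 over an insecure transport" is
not exotic; here is a minimal sketch, assuming the application has already
agreed on an AES-GCM key with the remote entity through its own handshake
(the names are mine, purely illustrative, not any existing API):

    // Application-layer confidentiality and integrity over a plain ws:
    // "key" is assumed to be an AES-GCM CryptoKey negotiated by the protocol
    // itself, so the transport (ws or wss) never sees the plaintext.
    function sendSealed(ws, key, text) {
      var iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce
      var data = new TextEncoder().encode(text);
      return crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key, data)
        .then(function (ct) {
          // Prepend the nonce so the receiver can decrypt and verify.
          var frame = new Uint8Array(iv.length + ct.byteLength);
          frame.set(iv, 0);
          frame.set(new Uint8Array(ct), iv.length);
          ws.send(frame.buffer);
        });
    }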

> 
> In specific, I don't think you'll ever find support for treating Tor
> traffic that is subject to interception and modification after it leaves
> an exit node as equivalent to HTTPS

??? Do you really know the Tor protocol?

And where did you see this in the use cases (something that exits the Tor
network)?

If traffic has to exit the Tor network then obviously https must be used:
the exit node is by design a potential MITM.

But that https runs inside the Tor protocol.

Using wss instead of ws to carry the Tor traffic does not change anything
about this situation.
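
Schematically (the names below are illustrative, not node-Tor's actual API),
the payload is onion-encrypted once per hop before it ever reaches the
transport, so the browser-to-node link only carries opaque cells whatever
the scheme:

    // Innermost layer for the exit node, outermost for the entry node; each
    // per-hop key comes from the circuit handshake. Whether the first link
    // uses ws:// or wss:// changes nothing for the exit node, which remains
    // by design a potential MITM for whatever leaves the Tor network.
    function sendCell(transportWs, circuit, payload) {
      var cell = payload;
      circuit.hopsFromExitToEntry().forEach(function (hop) {
        cell = hop.encryptLayer(cell); // illustrative per-hop encryption
      });
      transportWs.send(cell); // the transport only ever sees ciphertext
    }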

The contradiction you highlight here just shows, once again, that the
current rules are not logical.

> , especially since we know there are
> active attacks being mounted against this traffic on a regular basis.
>  (This is why I suggested .onion sites as potentially secure contexts,
> which do not suffer from the same exposure outside of the Tor network.)

Same as above: this is the third or fourth time in this thread that I am
pointed to the fb hidden service and its .onion certificate, which is
supposed to improve security while it does not (or only very slightly, if
the fb hidden service is not co-located with the Tor server, but that is
more a fb configuration issue).

That is further evidence that https does not improve anything here.

While browsing hidden services could be a use case in the future (once the
"browsing paradox" is solved), I am not talking about that for now; I am
talking about using the Tor protocol, or another secure protocol, for
multiple services.


> 
> -Brad
> 
> On Tue, Dec 1, 2015 at 5:42 AM Aymeric Vitte <vitteaymeric@gmail.com>
> wrote:
> 
> 
> 
>     On 01/12/2015 05:31, Brad Hill wrote:
>     > Let's keep this discussion civil, please.
> 
>     Maybe some wording was a little harsh below, apologies for that. The
>     logjam attack is difficult to swallow: how can something that is
>     supposed to protect forward secrecy quietly do the very opposite,
>     without the keys even being compromised? It is also difficult to
>     understand why TLS did not implement a mechanism to protect the DH
>     client public key.
> 
>     >
>     > The reasons behind blocking of non-secure WebSocket connections from
>     > secure contexts are laid out in the following document:
>     >
>     > http://www.w3.org/TR/mixed-content/
>     >
>     > A plaintext ws:// connection does not meet the requirements of
>     > authentication, encryption and integrity, so far as the user agent is
>     > able to tell, so it cannot allow it.
> 
>     The spec just mentions aligning the behavior of ws with xhr, fetch and
>     eventsource, without giving more reasons, or I missed them.
> 
>     Let's concentrate on ws here.
> 
>     As far as I see it, a "mixed content" has the word "content", which is
>     supposed to designate something that can be included in a web page and
>     therefore be dangerous.
> 
>     WS cannot include anything in a page by itself; it is designed to
>     communicate with external entities, for purposes other than fetching
>     resources (images, js, etc) from web servers that are logically tied to
>     a domain, in which case you can use xhr or other fetching means instead.
> 
>     Therefore it is logical to envision that those external entities used
>     with WS are not necessarily web servers and might not have valid
>     certificates.
> 
>     WS cannot hurt anything unless the application decides to insert the
>     results into the page, and that is not a problem specific to WS: the
>     application is loaded via https, so it is supposed to be secure, but if
>     it is doing the wrong things nothing can save you, not even wss.
> 
>     Unlike the usual fetching means, WS cannot really be abused for things
>     like scanning URLs: it is a specific protocol that arbitrary endpoints
>     do not speak.
> 
>     As a result of the current policy, if we want to establish WS with
>     entities that can't have a valid certificate, we must load the code via
>     http which is obviously completely insecure.
> 
>     So forbidding WS from https pages just puts users at risk and prevents
>     any current or future use of ws with entities that can't have a valid
>     certificate, reducing the interest and potential of ws to something
>     very small.
> 
> 
>     >
>     > If there is a plausible mechanism by which browsers could distinguish
>     > external communications which meet the necessary security criteria
>     > using protocols other than TLS or authentication other than from the
>     > Web PKI,
>     > there is a reasonable case to be made that such could be considered as
>     > potentially secure origins and URLs.  (as has been done to some extent
>     > for WebRTC, as you have already noted)
>     >
> 
>     To some extent, yes... maybe other solutions could be studied, via
>     something like letsencrypt, to automatically obtain something like
>     temporary valid certificates (which, if feasible, might eliminate the
>     main topic of this discussion).
> 
> 
>     > If you want to continue this discussion here, please:
>     >
>     > 1) State your use cases clearly for those on this list who do not
>     > already know them.  You want to "use the Tor protocol" over
>     > websockets?
> 
>     Note: I am not affiliated at all with the Tor project.
> 
>     Yes, that's already a reality with projects such as Peersm and
>     Flashproxy.
> 
>     Peersm runs the onion proxy function inside the browser, which
>     establishes Tor circuits with Tor nodes using WS.
> 
>     Flashproxy connects a censored Tor user to a Tor node and relays the
>     Tor protocol between the two using WS.
> 
>     > To connect to what?  Why?
> 
>     The Tor protocol is just an example; let's see it as one secure
>     protocol among others.
> 
>     If we go further, we can imagine what I described here:
>     https://mailman.stanford.edu/pipermail/liberationtech/2015-November/015680.html
> 
>     That post mentions the "browsing paradox" too.
> 
>     Not a usual approach, I believe (see the last paragraph: the idea is
>     not to proxy only to URLs as it is today but to proxy to interfaces,
>     such as ws, xhr and WebRTC), but this will happen one day (maybe I
>     should patent this too...)
> 
>     Applications are numerous and not restricted to those examples.
> 
>     >  Why is it important to bootstrap an
>     > application like this over regular http(s) instead of, for example, as
>     > an extension or modified user-agent like TBB?
> 
>     The Tor Browser is designed for secure browsing: solving the "browsing
>     paradox" and having the onion proxy inside browsers, for example, is
>     not enough to ensure secure anonymous browsing, so Tor Browser's
>     features will still be required.
> 
>     Some applications need modifications of the browser, others need
>     extensions, but plenty of applications could be installed directly from
>     web sites.
> 
>     The obvious advantages are: no installation, works on any
>     device/platform, no dev/maintenance of the application for different
>     platforms with the associated risks (sw integrity) and complexity for
>     the users (not talking here about the problematic aspect of code loading
>     for a web app, that's not the subject, but for sure this must happen
>     over https, not http).
> 
>     For example, a usual means to disseminate Flashproxy is to install an
>     iframe on many sites: if I install a Flashproxy iframe on my web site,
>     the users browsing this web site will relay traffic for censored
>     people.
> 
>     But I think we agree that iframes are not the future. I could instead
>     install a tag/component on my web site representing the Flashproxy app;
>     when a user navigates to my site he could be asked whether he wants to
>     install the flashproxy app, which would then be fetched from the
>     original Flashproxy site and could run in the background in a service
>     worker, so the user would continue to relay data after leaving the site
>     or closing his browser.
> 
>     Same thing for uProxy, Peersm, etc.
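
Only as a rough sketch of that idea (the endpoint URLs are placeholders, and
how long a service worker is actually kept alive depends on the browser's
termination policies), such a relay could look like the following; note that
service workers themselves require a secure (https) context, which is
exactly why the ws restriction discussed in this thread matters:

    // relay-sw.js -- idea sketch, not Flashproxy's actual code.
    self.addEventListener("message", function (event) {
      if (event.data !== "start-relay") return;
      // One leg towards the censored user, one towards a Tor node; under the
      // current mixed content rules both must be wss, even though the
      // relayed Tor traffic is already encrypted end to end.
      var client = new WebSocket("wss://facilitator.example/client");
      var bridge = new WebSocket("wss://bridge.example/tor");
      client.binaryType = "arraybuffer";
      bridge.binaryType = "arraybuffer";
      // Pipe the opaque traffic in both directions.
      client.onmessage = function (m) {
        if (bridge.readyState === WebSocket.OPEN) bridge.send(m.data);
      };
      bridge.onmessage = function (m) {
        if (client.readyState === WebSocket.OPEN) client.send(m.data);
      };
    });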
> 
>     >
>     > 2) Describe clearly why and how the protocol you propose to use meets
>     > the necessary guarantees a user expects from an https page.
>     >
> 
>     If we take the Tor protocol, by design it cannot really authenticate
>     the corresponding party (but it fulfills the other requirements); it
>     can just check that it has established a connection with someone who
>     indeed holds the certificate used for the TLS connection and who is
>     known as a valid node by the network, and that someone can be anybody.
> 
>     Anybody could invent another protocol that would work with ws and
>     fulfill the mixed content requirements, or not, but again, what does it
>     hurt? If the application has decided to do stupid things, then even wss
>     cannot protect it.
> 
>     If ws is intercepted, then by design this is not supposed to be a
>     problem if the protocol used is secure; and if it is not secure, then
>     don't use that app.
> 
> 
>     > 3) Describe clearly how the user agent can determine, before any
>     > degradation in the security state of the context is possible, that
>     > only
>     > a protocol meeting these requirements will be used.
>     >
> 
>     That seems difficult, unless the browser has the modules built in or
>     can validate the allowed modules, a bit like extensions but for web
>     apps, though this looks restrictive.
> 
>     But, again, is it really required for WS?
> 
> 
>     > Ad-hominem and security nihilism of the forms "TLS / PKI is worthless
>     > so why bother trying to enforce any security guarantees" or "other
>     > insecure configurations like starting with http are allowed, so why
>     > not allow this insecure configuration, too" are not appropriate or a
>     > good use of anyone's time on this list.  Please refrain from
>     > continuing down these paths.
>     >
>     > thank you,
>     >
>     > Brad Hill, as co-chair
>     >
>     > On Mon, Nov 30, 2015 at 6:25 PM Florian Bösch <pyalot@gmail.com>
>     > wrote:
>     >
>     >     On Mon, Nov 30, 2015 at 10:45 PM, Richard Barnes
>     >     <rbarnes@mozilla.com> wrote:
>     >
>     >         1. Authentication: You know that you're talking to who you
>     >         think you're talking to.
>     >
>     >
>     >     And then Dell installs their own root authority on machines they
>     >     ship, or your CA of choice gets pwn'ed or the NSA uses some
>     >     undisclosed backdoor in the EC they managed to smuggle into the
>     >     constants, or somebody combines a DNS poison/grab with a non
>     >     verified (because piss poor CA) double certificate, or you hit one
>     >     of the myriad of bugs that've plagued TLS implementations
>     >     (particularly certain large and complex ones that're basically one
>     >     big ball of gnud which shall remain unnamed).
>     >
> 
> 

-- 
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
