From: Nicolas Mailhot <nicolas.mailhot@laposte.net>
Date: Mon, 23 Jun 2014 10:31:20 +0200
To: "William Chan (陈智昌)" <willchan@chromium.org>
Cc: "Nicolas Mailhot" <nicolas.mailhot@laposte.net>, "Peter Lepeska" <bizzbyster@gmail.com>, "HTTP Working Group" <ietf-http-wg@w3.org>, "Martin Thomson" <martin.thomson@gmail.com>
On Thu, 19 June 2014 at 21:33, William Chan (陈智昌) wrote:

> That said, I do not agree with your assertion that referencing content
> from a third party is equivalent to being willing to trust a proxy. I
> am frankly surprised to hear such a statement. Does anyone else agree
> with this?

I think you need to step back a little and get some perspective:

1. First, trust is not a mathematical concept; it's a human evaluation
of the behaviour of other humans (trust does not apply to dead matter or
automatons, it applies to the people who use them for a particular
purpose). Since humans are not perfect, trust is fundamentally not
transitive. You can try to limit the scope of the trust as much as
possible to limit the trust lost during delegation, but that's all you
can do: limit the loss.

That is why any serious security policy will use mandatory access
control (MAC) and not discretionary access control (DAC).

That is why, despite iron-clad contracts, Boeing had to re-internalise
production to rein in the effects of chained subcontracting (and it was
*not* a new discovery; every decade, beancounters manage to convince a
big organisation that the sanity rules against deep subcontracting can
be lifted).

That is why, even with perfect polling processes, if you let your
elected representatives elect another layer of representatives, and so
on, it does not take many steps before the resulting assembly makes
decisions contrary to the wishes of the original electing population
(some will say even one level of subdelegation is too much).

This is why Ponzi schemes are so effective.

Trust is not transitive, and the more delegations you accept, the less
trust remains, irrespective of technical measures.

2. Second, you've still not internalized that on the Internet trust is a
three-way concept (site, network and user), and you're still evaluating
trust from the site owner's perspective, totally ignoring the other
actors. This is why you consider a checksum (!) an appropriate trust
safeguard when you would never accept it as sufficient for other forms
of trust.

From a user's point of view, a clearly identified local operator they
can easily reach physically, and which operates under local laws, will
often be more trustworthy than a random, difficult-to-identify site on
the other side of the world (one that, as shown again and again, will
disclaim any obligation under local laws when brought to trial).

3. Third, CDNs, and especially JS CDNs, are the ultimate scammer's
dream: they're more anonymous than PO boxes (usually computer-generated
IDs on shared platforms you have to trust as a whole, with no way to
identify any responsible party), they sit outside any effective
jurisdiction, and they execute on the marks' systems, with direct access
to the marks' information, using the marks' computing power. (And no one
cares: not the delegator, since it's not on his site, nor the delegatee,
since they're not his users.)

CDNs have a long way to go before being remotely trustable from a
user's point of view, and they have zero incentive to try, especially in
an HTTP/2 world where all power is delegated to the calling site.

Frankly, if you set aside the quick-money angle, a delegating web site
is about as trustworthy as a mixed-content web site, and usually less
so. And if you agree to use a site that delegates anything to
666.cloud.com for your taxes or banking, you deserve what you get.

If browsers were serious about their users' interests, they would
severely restrict the number and depth of delegations on any given page,
and show an untrusted warning whenever the delegation complexity rose
above a very low level, instead of trying to enable more delegation.

--
Nicolas Mailhot
Received on Monday, 23 June 2014 08:32:08 UTC