Re: Mandatory encryption

Content delivery is an example of a case where TLS is decidedly
sub-optimal as a security solution.

If I am streaming an HD movie of 2 GB or so, I am going to want to
encrypt it once and deliver it without TLS. That is more secure, as it
avoids giving an attacker multiple ciphertexts of the same plaintext
under different keys.
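
As a rough sketch of what I have in mind (illustrative only; it uses
the Python 'cryptography' package for AES-GCM, and a real streaming
system would encrypt in segments and handle key delivery separately):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def package_content(plaintext: bytes):
        # Encrypt the content once; the identical ciphertext can then be
        # cached and served to every client without TLS.
        content_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)  # 96-bit nonce for AES-GCM
        ciphertext = AESGCM(content_key).encrypt(nonce, plaintext, None)
        return content_key, nonce, ciphertext

    def play_content(content_key: bytes, nonce: bytes, ciphertext: bytes):
        # Client side: only the small content key needs a per-user secure
        # channel; the bulk payload does not.
        return AESGCM(content_key).decrypt(nonce, ciphertext, None)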

Mandating TLS in HTTP means shutting the door on other security
solutions that may be more appropriate. That isn't acceptable.


Nor is pushing the hard stuff onto 'the operating system'. Many HTTP
implementations run on machines that don't have an O/S at all; in fact,
HTTP is the most complicated part of such a system. That is not a
coincidence: it was a design requirement at CERN before HTTP was even
proposed in the IETF.

On Wed, Jul 18, 2012 at 1:27 AM, Mark Watson <watsonm@netflix.com> wrote:
> The problems with HTTP that HTTP/2.0 is intended to address seem to have
> nothing to do with security. Adding a requirement for use of TLS is
> unrelated to the purpose of the new protocol and will restrict its usage.
> See
> http://groups.csail.mit.edu/ana/Publications/PubPDFs/Tussle%20in%20Cyberspace%20Defining%20Tomorrows%20Internet%202005's%20Internet.pdf
> for why we should design protocols which flex along lines of controversy.
> This question of mandating TLS seems to go directly against that advice.
>
> There are many services which make use of HTTP without TLS today that may
> find it more difficult to migrate to HTTP/2.0 if they also have to change
> their security approach. The benefits of HTTP/2.0, whilst they may
> be substantial, are likely not sufficient to be "worth the price" of such a
> change. As a specific example, Netflix ships quite a lot of traffic over
> HTTP without TLS, yet our service remains highly secure. Where we do use TLS
> we find significant problems with it - switching to TLS for all traffic
> would be an order-of-magnitude bigger deal (on the minus side) than the
> benefits of HTTP/2.0.
>
> …Mark
>
>
>
> On Jul 17, 2012, at 9:50 PM, Phillip Hallam-Baker wrote:
>
> Perhaps you are not aware that I work for a CA and I have spent 20
> years working on Web security, most of it developing the CA industry
> and PKI infrastructure.
>
> Maintaining the policies for accepting trust roots is essentially a
> full time job for each of the parties doing it. It is not just a
> matter of checking that a CA has a valid audit. The root manager has
> to determine that the audit actually relates to the root of trust to
> be included and that the practices adhered to meet the necessary
> inclusion criteria.
>
>
> Unlike a coding task, maintaining a root store is an ongoing
> commitment. It is not something that you can do once and forget.
>
>
>
> On Tue, Jul 17, 2012 at 11:51 PM, Mike Belshe <mike@belshe.com> wrote:
>
> On Tue, Jul 17, 2012 at 8:32 PM, Phillip Hallam-Baker <hallam@gmail.com> wrote:
>
> Umm pretty much every Web Server has SSL support, the issue is that
> only about 2% of deployments turn it on. Or is the idea that we are
> going to mandate turning it on? If so, who is going to define the
> trust criteria for accepting certs?
>
> Browsers already have well documented policies for this.  Major OSes also
> have their own policies (MacOS, Windows).
>
> I am a big supporter of TLS. I just don't see anything good coming
> from a mandate that is superfluous.
>
> As an example, we have had a mandate in PKIX to check CRLs and/or OCSP
> for over a decade and to reject a certificate if the client cannot
> perform validation. Most browsers try to check but accept the
> certificate if there is no response to the OCSP request. So virtually
> every deployed browser has been out of compliance with a fundamental
> PKIX control for over a decade despite repeated attempts by the CAs to
> persuade the providers to change this.
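
(To make the contrast concrete: a minimal, illustrative Python sketch
of the hard fail PKIX requires versus the soft fail browsers ship;
fetch_ocsp_status() is a stand-in here, not a real library call.)

    def fetch_ocsp_status(cert_serial: str) -> str:
        # Placeholder for a real OCSP lookup, which would POST a request
        # to the responder URL in the certificate's AIA extension.
        raise TimeoutError("responder did not answer")

    def validate_certificate(cert_serial: str, hard_fail: bool) -> bool:
        try:
            status = fetch_ocsp_status(cert_serial)
        except (TimeoutError, ConnectionError):
            # Hard fail (the PKIX mandate): no revocation data, no trust.
            # Soft fail (deployed browsers): silence is treated as "good".
            return False if hard_fail else True
        return status == "good"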
>
> I can't see how DANE is going to solve anything either, since DANE
> poses an even more disruptive hard-fail criterion.
>
> I don't want to tie an application layer protocol version to a
> transport layer protocol version or vice versa. If you tie HTTP 2.0 to
> TLS 1.2 then you are going to have to revise HTTP when you revise TLS.
>
> This is not true.  HTTP did not change when we went from earlier versions of
> TLS to the current versions.  Browsers do support older versions of TLS,
> usually for older servers that haven't upgraded.  Even SSL3 - shudder.
>
> I don't see any value in a Canute/sea act here.
>
> What would be valuable is to have a suite of standards for a specific
> application that could be identified as a set. Something like 'best
> practices for Web server hosting, IPv6 + HTTP 2.0 + TLS 1.2 with
> AES+SHA2 + DNSSEC'. It would also be nice to see a similar draft for
> best practices for email clients and servers, and yes, support for
> STARTTLS would be high on my list of requirements. A draft like that
> would be very useful as a contract term for outsourcing Web hosting or
> mail service.
>
> But that is a totally different prospect to trying to tie HTTP to a
> particular security solution.
>
> Implementing TLS in a product is far from trivial. Getting the code is
> easy; selecting trust anchors is not. Is the specification going to
> mandate a particular choice of trust anchor? That is not going to
> happen. Nor is defining a minimum Certification Policy or locking the
> whole HTTP trust infrastructure to the ICANN PKI root by trying to
> mandate DNSSEC and DANE as well as TLS.
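
(To make the trust-anchor point concrete, an illustrative fragment
using Python's standard ssl module; 'my-trust-anchors.pem' is a
placeholder, and deciding what belongs in that file is exactly the
policy work described above.)

    import ssl

    # Whatever root store the platform or browser vendor maintains.
    default_ctx = ssl.create_default_context()

    # An explicitly selected set of trust anchors: only chains ending in
    # a CA listed in this file will be accepted.
    pinned_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    pinned_ctx.check_hostname = True
    pinned_ctx.verify_mode = ssl.CERT_REQUIRED
    pinned_ctx.load_verify_locations(cafile="my-trust-anchors.pem")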
>
> TLS requires a PKI to work, and every PKI comes with a set of policy
> and legal questions that have to be understood if the scheme is going
> to provide any security.
>
> Users expect privacy and security.  We've all seen the legislation around
> the globe for stronger security and privacy options.  SSL won't fix
> everything, I know, but it is a solid step, and it's the responsible step for
> the protocol.  I don't understand how we can argue that HTTP/2.0 could be a
> protocol for the next 20 years if it is sniffable over the wire.
>
> Go talk to non-techie users about whether HTTP is secure.  They assume it
> is.  Ask them if they would prefer a secure protocol or an insecure one.
> Ask them if they think protocols should be eavesdroppable so that other
> people in the cafe can see what they're doing, steal their passwords, and
> more.  I have yet to find any of these users that want this flimsy level of
> security.
>
> So, the problem we're solving here is to make users safer and raise the bar
> on web security globally.
>
> Mike
>
> On Tue, Jul 17, 2012 at 10:30 PM, Mike Belshe <mike@belshe.com> wrote:
>
> Mandatory SSL is +1 and very forward thinking.
>
> On Tue, Jul 17, 2012 at 6:22 PM, Phillip Hallam-Baker <hallam@gmail.com> wrote:
>
> -1
>
> I don't want to have a mandatory requirement unless it is going to
> change behavior.
>
> I don't think we can change behavior with protocols.  All we can do is
> offer new features.  If the features are compelling, people will upgrade.
> If the features are not compelling, they won't.
>
> People used to tell me SPDY would never get people to "upgrade".  Even
> after touching a half a billion users, people still tell me that.  I think
> the evidence of adoption speaks for itself.
>
> We already have ubiquitous deployment of TLS in browsers. The code is
> freely available, and everyone knows the benefit.
>
> The only HTTP servers or clients I am aware of that don't have TLS
> support are either toolsets that the provider expects to be used with
> OpenSSL or the like, or embedded systems.
>
> I'll ask the google crawler guys to weigh in on this.  They have pretty
> good stats.  I believe your assertion is provably false.
>
> Incidentally, support for IPsec is mandatory in IPv6, but that does not
> seem to do any good either. It just means that IPv6 is harder to
> deploy, as implementations are required to support a security layer
> almost nobody uses because TLS has proved better.
>
> Making TLS a mandatory requirement seems like a feelgood approach to
> security to me. Instead of doing something useful, we pass a
> resolution telling people to do what they plan to do anyway.
>
> You imply there is something else that would be useful - what would it
> be? (don't feel obliged to answer :-)
>
> To me, mandating security is a great first step.  Nobody should think
> this 'fixes' security. But if we believe the net ever needs to be secure,
> we need to start taking steps toward that.
>
> Mandating SSL is a simple step we can take which solves most of the
> eavesdropping problem right now.  But more importantly, it poises us to
> address the next set of security issues, including CA/verification
> problems, distribution of video over SSL, handshake latency, etc.  Until
> we start trying to be secure, of course we'll never be secure.
>
> Mike
>
> On Tue, Jul 17, 2012 at 8:51 PM, Paul Hoffman <paul.hoffman@gmail.com> wrote:
>
> +1 to what seems to be a lot of developers: make TLS mandatory.
>
> so, even when used in an internal application protocol, it's going to
> be end to end encrypted to make it super hard to debug?
>
> In an internal application protocol, why would it be "super hard to
> debug"? The client can do an HTTP dump before TLS, the server can do
> an HTTP dump after TLS; either of the sides could debug the TLS.
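
(A minimal sketch of the client-side "dump before TLS" idea, using
Python's standard ssl module; the host name is a placeholder and this
is illustrative only.)

    import socket
    import ssl

    def debug_get(host: str, path: str = "/") -> bytes:
        request = ("GET " + path + " HTTP/1.1\r\n"
                   "Host: " + host + "\r\n"
                   "Connection: close\r\n\r\n").encode()
        print("plaintext request:", request)  # the HTTP dump before TLS

        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(request)
                response = b""
                while True:
                    chunk = tls.recv(4096)
                    if not chunk:
                        break
                    response += chunk
        return response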
>
> http is about more than users using web browsers.
>
> Completely true, and not relevant. Insecure HTTP for non-browser
> applications still has the same bad properties, no?



-- 
Website: http://hallambaker.com/
