Re: I revised the pro/contra document

Hiya,

On 11/24/2013 08:03 PM, Adrien de Croy wrote:
> 
> Hi Stephen
> 
> it's not clear to me in your last para about attack proxies.
> 
> Are you saying that we should only be able to scan http, and not https?

I don't know, to be honest. I'd say that'd be part of figuring
out the non-trivial answers as to how to handle proxies in
HTTP, for which I think Mark has an open issue in the tracker.

> Or that we need to fix things so we can scan https without
> 
> a) breaking TLS
> b) deploying MITM.

What I've been saying (repeatedly, sorry:-) is that if a
solution for inbound malware scanning or similar is developed
for HTTP, then that needs to be done without breaking TLS, and
that standardising a generic MITM attack on TLS would mean
breaking TLS, which is used by many more protocols than just
HTTP.
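
(To make that distinction concrete, here's a rough sketch in Go of what
an explicitly configured proxy sees today; the proxy address and URLs
are made up for illustration. A plain http:// request passes through
the proxy in clear and can be scanned; an https:// request goes via
CONNECT, so the proxy only relays ciphertext and TLS stays end-to-end
unless someone mounts a MITM.)

// A sketch only: "proxy.example.net:3128" is a hypothetical, explicitly
// configured proxy, not anything from this thread.
package main

import (
	"log"
	"net/http"
	"net/url"
)

func main() {
	proxy, err := url.Parse("http://proxy.example.net:3128")
	if err != nil {
		log.Fatal(err)
	}
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxy)},
	}

	// http:// -- request and response pass through the proxy in clear,
	// so the proxy can inspect (and scan) the payload.
	if resp, err := client.Get("http://www.example.com/"); err == nil {
		resp.Body.Close()
	}

	// https:// -- the client sends CONNECT and then runs TLS end-to-end
	// with the origin; the proxy relays ciphertext and can only inspect
	// the payload by mounting a MITM with a substituted certificate.
	if resp, err := client.Get("https://www.example.com/"); err == nil {
		resp.Body.Close()
	}
}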

S.

> 
> Adrien
> 
> 
> ------ Original Message ------
> From: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
> To: "Mike Belshe" <mike@belshe.com>; "Yoav Nir" <synp71@live.com>
> Cc: "Tim Bray" <tbray@textuality.com>; "Mike Bishop"
> <Michael.Bishop@microsoft.com>; "Alexandre Anzala-Yamajako"
> <anzalaya@gmail.com>; "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
> Sent: 25/11/2013 3:15:34 a.m.
> Subject: Re: I revised the pro/contra document
>>
>>
>> A general point - *please* don't talk about adding "privacy"
>> to HTTP - we're discussing confidentiality and server-auth.
>> Making confidentiality, or both, the default for HTTP/2.0
>> will be a wonderful thing IMO. But the end result of doing
>> that is just not "privacy" and folks sloppily arguing that
>> you do get privacy as a result are just damaging the argument
>> to get confidentiality by default.
>>
>> On 11/24/2013 10:05 AM, Mike Belshe wrote:
>>>  On Sun, Nov 24, 2013 at 1:00 AM, Yoav Nir <synp71@live.com> wrote:
>>>
>>>>   On 24/11/13 12:53 AM, Mike Belshe wrote:
>>>>
>>>>   Alexandre -
>>>>
>>>>   Your question is a reasonable one. And personally, I agree with Tim,
>>>>  that TLS is useful for all data transmitted over HTTP.
>>>>
>>>>   In the case of printers, TLS is absolutely suitable, and in fact
>>>>  employed today at most Fortune 500 companies. Imagine employees
>>>> being able
>>>>  to snoop insider information as it flows over Wi-Fi to the printers.
>>>>   Corporate IT guys long ago fixed this problem. Even for home use,
>>>> do you
>>>>  want your neighbor to steal your taxes as you print them?
>>>>
>>>>  My home does not have the IT department of a Fortune 500 company.
>>>> I'm sort
>>>>  of a techie guy, so I can probably figure out the web interface of a
>>>>  network-attached printer. But how can I get a certificate for my
>>>> printer? I
>>>>  know you believe that the tools will magically appear to make
>>>> certificate
>>>>  management easy. But if the numbers stated are correct, that 30% of
>>>>  websites already have valid certificates, then the growth of HTTPS
>>>> will not
>>>>  be enough to make tools appear where none existed before.
>>>>
>>>
>>>  I doubt you would bother to install a valid certificate; you'd just
>>> use a
>>>  self-signed cert or something, and because you've said you don't
>>> care about
>>>  it, you'd ignore the warnings.
>>
>> First Yoav is entirely correct that getting a certificate that chains
>> up to a browser-trusted root is not doable for printers except within
>> enterprises that install their own roots into employee browsers. That
>> nicely shows up, once again, the benefit of the approach of serving
>> http:// URIs via opportunistic TLS in HTTP/2.0. That's the approach we
>> should take. It
>> needs no new PKI tooling or admin to solve a problem that we have
>> repeatedly failed to solve over the last 20 years. The "problem" I
>> mean there is a PKI enrollment setup that just works well everywhere.
>> There isn't one. Not unless you completely re-do the CA business
>> model. And assuming that would not be a good thing when designing
>> HTTP/2.0. (The CA business model will evolve sure, and maybe in a
>> good direction, but we cannot assume that.)
>>
>> I find Mike's response very odd. Browsers have been adding UI to make
>> SSC's (self-signed certs) less usable for years. Now that's not an
>> easy thing to get right, since there are many web sites (e.g. one of
>> mine [1]) that use SSCs for reasonable purposes, but once you have
>> users accepting SSCs (as studies show does happen) then the
>> server-auth aspect of TLS is damaged. (In the case of [1] I'm fine
>> that it causes browser barfs, since I use that to mess with students'
>> heads. And I did have a good reason to turn on confidentiality
>> since I used to host some kids-football-team contact info there.
>> I just have never had a good enough reason to deal with a real PKI
>> for it.)
>>
>>    [1] https://down.dsg.cs.tcd.ie/
>>
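
(Purely as an aside, and only a sketch with made-up names and lifetimes:
the few lines of Go below are roughly all it takes to mint a self-signed
cert of the sort just described. Nothing in it touches a CA, which is
exactly why it buys you confidentiality but gives a stranger's browser
no basis for server-auth.)

// A sketch: mint a self-signed certificate of the sort just described.
// It gets you TLS confidentiality, but nothing chains it to a root that
// anyone else's browser trusts, so there is no third-party server-auth.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		// The name is made up; a home printer or a small web site would
		// put whatever it answers to here.
		Subject:     pkix.Name{CommonName: "printer.home.example"},
		DNSNames:    []string{"printer.home.example"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(5, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// The template is both subject and issuer, and the cert signs itself:
	// no CA, no enrolment, and hence nothing a stranger can verify.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
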
>>>>
>>>>   For media content, TLS is also appropriate.
>>>>
>>>>   Mike Bishop brought up the DRM case, which is fine, but somewhat
>>>>  orthogonal to HTTP. You see, DRM is protecting *data at rest* (e.g.
>>>>  stored on a disk somewhere) rather than *data-in-motion* (e.g. being
>>>>  transmitted over HTTP). There are many interesting facets to
>>>> data-at-rest
>>>>  protections. But they are out of scope for HTTP. HTTP is a transport
>>>> protocol
>>>>  and only deals with data-in-motion. It is worth elaborating on Mike's
>>>>  example, however, because he didn't mention that transmitting the
>>>>  DRM-protected data over HTTP also exposes metadata (e.g. the name of
>>>> the video,
>>>>  keywords, authors, people, links, etc.) which can itself be
>>>> sensitive.
>>>>   Should your neighbor know that you're watching porn right now? Should
>>>>  your neighbor know that you're researching <insert sensitive topic
>>>> here>?
>>>>   When discussing HTTP, we should be talking about data in motion
>>>> only, and
>>>>  even encoded content, like DRM content still benefits from TLS.
>>>>
>>>>   Ok, so when it comes to data in motion, how do we decide if TLS is
>>>>  useful?
>>
>> Here, Mike is entirely correct. The network stack cannot know when
>> the payload or meta-data are sensitive so the only approach that
>> makes sense is to encrypt the lot to the extent that that is practical.
>> Snowdonia should be evidence enough for that approach even for those
>> who previously doubted pervasive monitoring.
>>
>> HTTP is used for lots of sensitive data all the time in places that
>> don't use https:// URIs today. Sensitive data doesn't require any
>> life-or-death argument; it can be nicely mundane, e.g. a doctor
>> visit, the example Alissa used in the plenary in Vancouver.
>>
>> We now can, and just should, fix that. There's no hyperbole needed
>> to make that argument compelling.
>>
>>>>   First let's consider which parties are involved in a HTTP
>>>> transaction.
>>>>   There are:
>>>>      a) the user
>>>>      b) the server
>>>>      c) the middleware in between
>>>>
>>>>   The user, of course, expects his data to be sent privately. We don't
>>>>  want random others to view our communications. The only downside
>>>> would be
>>>>  if there were side effects to transmitting privately. For instance,
>>>> if my
>>>>  network were slower, or it cost more, etc - some users would be
>>>> willing to
>>>>  exchange private communications for lower cost. However, those of
>>>> us that
>>>>  have researched TLS in depth know that it can be deployed at near
>>>> zero cost
>>>>  today. This is contentious on this list.
>>>>
>>>>  It's not contentious, it's just false. Go to the pricing page for
>>>> Amazon
>>>> CloudFront CDN <http://aws.amazon.com/cloudfront/#pricing> (I
>>>> would have
>>>>  picked Akamai, but they don't put pricing on their website), and
>>>> you pay
>>>>  33% more plus a special fee for the certificate for using HTTPS.
>>>> That's
>>>>  pretty much in line with the 40% figure. That's real cost that
>>>> everybody
>>>>  has to bear. And you will get similar numbers if you host your site
>>>> on your
>>>>  own servers.
>>>>
>>>
>>>  I think you're thinking like an engineer. You're right, they do charge
>>>  more (and I'm right those prices will continue to come down). But those
>>>  prices are already TINY. I know 33% sounds like a lot, but this is
>>> not the
>>>  primary cost of operating a business. So if you want to do a price
>>>  comparison, do an all-in price comparison. And you'll find that the
>>> cost
>>>  of TLS is less than a fraction of a percent difference in operating
>>> cost
>>>  for most businesses.
>>>
>>>  And if you're not talking about businesses, but consumers, CDNs aren't
>>>  really relevant. As an example, I run my home site at Amazon for zero
>>>  extra cost, but I did buy a 5yr $50 cert.
>>>
>>
>> So if HTTP/2.0 uses TLS by default then the relevant difference
>> will be between HTTP/2.0 and HTTP/1.1 without TLS, right?
>>
>> If this wg does a good job on the overall efficiency of the
>> protocol then I can't see any good reason why a cloudy provider
>> wouldn't be able to offer the same pricing for HTTP/2.0 and
>> cleartext HTTP/1.1 (other than having to pay a CA, but then I
>> don't think we should require that). And that's about as much
>> as should concern us here I think.
>>
>>>>   But as we look forward, with computer speeds increasing and the
>>>> Internet
>>>>  growing larger, it is clear that any cost of TLS will only get
>>>> cheaper.
>>>>
>>>>  Right along with the price of serving plaintext HTTP. That is also
>>>> getting
>>>>  cheaper.
>>>>
>>>
>>>  agree
>>
>> Disagree. The costs of moving the bits around in clear will get
>> cheaper. But the risk associated with doing that is getting high
>> enough that e.g. /. today tells me [2] Twitter turned on ECDH with
>> PFS. Plaintext is not really cheaper once there's a significant
>> enough adversary.
>>
>>    [2]
>> http://techcrunch.com/2013/11/22/twitter-enables-perfect-forward-secrecy-across-sites-to-protect-user-data-against-future-decryption/
>>
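
(For the curious, a sketch of roughly what enabling forward secrecy
amounts to on a server, in Go; the cipher-suite pair, port and
cert/key file names are arbitrary placeholders, not a recommendation
from anyone on this thread.)

// A sketch: a server that only offers ECDHE suites, so every session
// has an ephemeral key and recorded traffic can't be decrypted later
// with the server's long-term private key.
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		},
	}
	// cert.pem/key.pem stand in for whatever certificate the site uses.
	srv := &http.Server{
		Addr:      ":8443",
		Handler:   http.FileServer(http.Dir(".")),
		TLSConfig: cfg,
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
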
>>
>>>>    As a second counter-argument, some on this list observe that some
>>>> users
>>>>  employ proxies to help filter malware over their unencrypted
>>>> streams.
>>>>   This is still possible with TLS, you just need to move to a
>>>> Trusted Proxy
>>>>  that uses TLS as well. And of course, virus writers have already
>>>> figured
>>>>  this out too. They're starting to use TLS because they know these
>>>> users
>>>>  can't currently detect viruses over encrypted channels. So
>>>> regardless of
>>>>  whether you like HTTP using TLS all the time, we still need Trusted
>>>> Proxies
>>>>  for our TLS. It's just an independent issue.
>>>>
>>>>  All true, but I don't see how that is a counter-argument. HTTPS
>>>> inspection
>>>>  is more expensive than HTTP inspection, and requires more user
>>>>  inconvenience.
>>>>
>>>
>>>  My point is that if you're interested in filtering, you already have
>>> to do
>>>  HTTPS inspection today. And that is the real driver for why
>>> companies are
>>>  employing those solutions. So this is not relevant as we decide how to
>>>  use TLS in HTTP/2.
>>
>> This wg should IMO not even consider the requirements for such MITM
>> attack boxes. (Except as a threat.) And the term "trusted proxy" is
>> pure BS. (I'm not sorry for being blunt there, but apologies if my
>> bluntness offends.) I've never seen an acceptable technical solution
>> in any of the proposals that have been made for MITM attack boxes.
>> And that's ignoring RFC 2804, and what I hope will be a new RFC on
>> that topic resulting from Vancouver. [3]
>>
>>    [3] http://tools.ietf.org/html/draft-farrell-perpass-attack
>>
>> I do think the wg should consider HTTP proxies and how those work
>> with HTTP/2.0. That is where you should consider requirements for
>> HTTP filtering or scanning. And if that requires a less efficient
>> use of HTTP/2.0 (e.g. I think it'll need two TLS sessions and some
>> new browser config and/or leap-of-faith processing in the
>> browser) then so be it.
>>
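
(A very rough sketch of that two-TLS-sessions shape, in Go, with an
invented port and placeholder cert/key files; it's an illustration, not
a proposal, and as noted above a browser would need new config and
processing before it would send https-bound requests to such a proxy
at all. Session one is client-to-proxy TLS, session two is
proxy-to-origin TLS; nobody intercepts a TLS session they weren't a
party to.)

// A sketch of the two-session shape: session one is the client's TLS
// connection to this explicitly configured proxy, session two is the
// proxy's own TLS connection to the origin. Hop-by-hop headers and
// most error handling are ignored for brevity.
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	origin := &http.Client{} // opens the proxy's own connection (TLS for https:// URLs)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Proxy-style requests carry the target URL in absolute form.
		out, err := http.NewRequest(r.Method, r.URL.String(), r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		out.Header = r.Header.Clone()
		resp, err := origin.Do(out) // session two: proxy <-> origin
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		// This is the point where scanning/filtering would happen, on a
		// session the client knowingly set up with the proxy.
		for k, vv := range resp.Header {
			for _, v := range vv {
				w.Header().Add(k, v)
			}
		}
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
	})

	// Session one: the proxy itself terminates TLS from the client.
	log.Fatal(http.ListenAndServeTLS(":3129", "cert.pem", "key.pem", handler))
}
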
>> But starting from an approach that assumes you can break TLS to
>> solve an HTTP problem would be sheer folly. It's been tried and it
>> failed. If it's tried again it'll fail again.
>>
>> Cheers,
>> S.
>>
>>
>>>>   The server, unfortunately, can't tell whether TLS is appropriate
>>>> or not,
>>>>  because the need for privacy depends on the user. As an example, we
>>>>  discussed the case where a user is downloading public information
>>>> about a
>>>>  disease. If the user is a medical student studying for an exam, it's
>>>>  probably okay in the clear. But that exact information, when sent to a
>>>>  patient for a real case, suddenly *does* need to be transmitted
>>>> privately.
>>>>   Using TLS all the time has no downside, while using TLS some
>>>> of the
>>>>  time does. To solve this for their users, servers should always use
>>>> TLS.
>>>>
>>>>  There are many examples we can think of, but the question of whether
>>>>  information is sensitive depends on many factors, and it's not
>>>> necessarily
>>>>  always true that the requirement for privacy comes from the client
>>>> side of
>>>>  TCP. A good TLS proxy solution would have to allow both client and
>>>> server
>>>>  to veto non-E2E encryption.
>>>>
>>>
>>>  I'd like to hear an example where the user requested a secure
>>> channel and
>>>  the server should have the right to veto. Why would either endpoint
>>> want
>>>  to, actually? Only middleware that wants to insert itself ever wants
>>> to do
>>>  that.
>>>
>>>
>>>>
>>>>
>>>>   Finally, we can discuss the middleware. Nobody knows exactly how much
>>>>  middleware exists between a user and a server. As noted, we could use
>>>>  non-authenticated encryption (non-TLS, or unverified TLS), which
>>>> would at
>>>>  least make it more difficult for passive middleware to decode the
>>>> traffic
>>>>  running through it. However, using authenticated encryption (what TLS
>>>>  already does) eliminates passive snooping, and also makes it quite
>>>>  difficult to snoop without detection. Although those on this list
>>>> like to
>>>>  say that "MITM" is everywhere, MITM is usually discoverable, and
>>>> none of
>>>>  the off-the-shelf MITM solutions today can employ MITM with complete
>>>>  invisibility. Using TLS today would eliminate all but the most
>>>>  sophisticated attacks by middleware stealing our data in motion.
>>>>
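
(To illustrate the distinction being drawn there, a small Go sketch with
the usual placeholder host: both connections below are encrypted, so
purely passive snooping is out either way; only the second one
authenticates the server, so an active MITM gets detected instead of
silently accepted.)

// A sketch of "unverified TLS" vs authenticated TLS.
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// "Unverified TLS": encrypted, but accepts whatever certificate the
	// other end (or a man in the middle) presents.
	c1, err := tls.Dial("tcp", "www.example.com:443",
		&tls.Config{InsecureSkipVerify: true})
	if err == nil {
		c1.Close()
	}

	// Authenticated TLS: the handshake fails unless the certificate
	// chains to a trusted root and matches the host name.
	c2, err := tls.Dial("tcp", "www.example.com:443", nil)
	if err != nil {
		log.Fatal(err)
	}
	c2.Close()
}
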
>>>>   So, to answer your question of whether media files and printers
>>>> are well suited
>>>>  for TLS: yes, of course!
>>>>
>>>>  Great! How do I get some?
>>>>
>>>>  Yoav
>>>>
>>>
>>
> 
> 
> 
