Re: I revised the pro/contra document

I think it could also be very useful if there were a way for a proxy 
to be queried about (or to assert) its policy, in terms of:

* is HTTPS inspected (we may even need to be able to signal this per 
stream)
* is tunnelling permitted, etc.
* what protocols and versions are supported

so that proper feedback can be given to the user to enable them to 
make informed choices.
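
To make that concrete, here is a purely hypothetical sketch (none of 
these header names or fields exist today, and a real mechanism would 
need proper design work): the client could ask the proxy directly and 
get a machine-readable answer back, e.g.

   CLIENT -> PROXY
      OPTIONS * HTTP/1.1
      Host: proxy.example.com
      Proxy-Policy-Request: inspection, tunnelling, protocols

   PROXY -> CLIENT
      HTTP/1.1 200 OK
      Proxy-Policy: inspect-https=yes; tunnel-connect=no;
                    protocols="http/1.1, http/2.0"

A browser seeing inspect-https=yes could then warn the user before 
any sensitive traffic is sent.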

On the broader question of whether "trusted proxies" deserve to exist 
at all, there has been some debate.  My view is that in the end the 
user has to trust something.  We don't currently have a system where 
no trust is required anywhere.  When you go to your bank site you 
trust that a bona fide CA properly issued the cert to your bank 
(after properly evaluating the holder's identity and authority to act 
for that domain).

Also we can rely on:

* malware authors to continue to use https
* companies to still want a say in what their employees do online or 
with company property (e.g. DLP)

I personally have more confidence trusting the edge inspection system 
run by my employer than a faceless corporation running a site or 
issuing certs.  I have an employment agreement with my employer, 
employment laws protect me from a bunch of things, and I have redress 
in my jurisdiction designed for issues between employers and 
employees (so no need to mount enormously expensive lawsuits).  Not 
so if I have an issue with something done by a CA or website operator 
on the other side of the world.  If my company is running an 
off-the-shelf solution, there can be a level of confidence around 
that as well, in terms of what that software is known to be capable 
of: breaching my privacy, as opposed to scanning content for malware 
or preventing me from uploading sensitive company docs, etc.

The current MITM systems are more clandestine than they should be, 
purely because browsers have no way of knowing whether a root cert is 
a genuine public CA root or a locally installed interception root, 
and so cannot warn users.  Proxies can't tell clients either.  There 
should be a deterministic way to signal these things to the client 
(e.g. mark a cert as being used for MITM, or use a proxy <-> client 
protocol).  Of course this relies on trust.  I don't think we can 
completely eliminate the need for trust everywhere, and attempting to 
do so compromises the design and the opportunities we could have if 
we instead accepted the need to trust, and just made sure we knew 
what we were trusting and why.
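
As one possible shape for the cert-marking option (again entirely 
hypothetical, no such extension or OID is defined today), the certs 
minted by the intercepting proxy could carry a critical X.509 
extension that browsers are taught to recognise, something like:

   X509v3 extensions:
       1.3.6.1.4.1.99999.1: critical      (placeholder OID)
           TLS interception: traffic is decrypted and inspected
           by proxy.example.com

A browser that recognises the extension knows the session is being 
inspected and can show that in its UI instead of the usual padlock; a 
browser that doesn't recognise it rejects the cert because the 
extension is marked critical, which is the failure mode we'd want.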

Adrien




------ Original Message ------
From: "Peter Lepeska" <bizzbyster@gmail.com>
To: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
Cc: "Adrien de Croy" <adrien@qbik.com>; "Tim Bray" 
<tbray@textuality.com>; "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Sent: 26/11/2013 10:09:20 a.m.
Subject: Re: I revised the pro/contra document
>"What I've been saying (repeatedly, sorry:-) is that if a
>solution for inbound malware scanning or similar is developed
>for HTTP, then that needs to be done without breaking TLS, and
>that standardising a generic MITM attack on TLS would mean
>breaking TLS, which is used by many more protocols than just
>HTTP"
>
>I agree with this point. I think we need to come up with a 
>protocol-supported way to solve the problems of trusted proxies without 
>modifying TLS.
>
>Peter
>
>
>On Mon, Nov 25, 2013 at 8:22 AM, Stephen Farrell 
><stephen.farrell@cs.tcd.ie> wrote:
>>
>>Hiya,
>>
>>On 11/24/2013 08:03 PM, Adrien de Croy wrote:
>> >
>> > Hi Stephen
>> >
>> > it's not clear to me in your last para about attack proxies.
>> >
>> > Are you saying that we should only be able to scan http, and not 
>>https?
>>
>>I don't know to be honest. I'd say that'd be part of figuring
>>out the non-trivial answers as to how to handle proxies in
>>HTTP for which I think Mark has an open issue in the tracker.
>>
>> > Or that we need to fix things so we can scan https without
>> >
>> > a) breaking TLS
>> > b) deploying MITM.
>>
>>What I've been saying (repeatedly, sorry:-) is that if a
>>solution for inbound malware scanning or similar is developed
>>for HTTP, then that needs to be done without breaking TLS, and
>>that standardising a generic MITM attack on TLS would mean
>>breaking TLS, which is used by many more protocols than just
>>HTTP.
>>
>>S.
>>
>> >
>> > Adrien
>> >
>> >
>> > ------ Original Message ------
>> > From: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
>> > To: "Mike Belshe" <mike@belshe.com>; "Yoav Nir" <synp71@live.com>
>> > Cc: "Tim Bray" <tbray@textuality.com>; "Mike Bishop"
>> > <Michael.Bishop@microsoft.com>; "Alexandre Anzala-Yamajako"
>> > <anzalaya@gmail.com>; "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
>> > Sent: 25/11/2013 3:15:34 a.m.
>> > Subject: Re: I revised the pro/contra document
>> >>
>> >>
>> >> A general point - *please* don't talk about adding "privacy"
>> >> to HTTP - we're discussing confidentiality and server-auth.
>> >> Making confidentiality or both be the default for HTTP/2.0
>> >> will be a wonderful thing IMO. But the end result of doing
>> >> that is just not "privacy" and folks sloppily arguing that
>> >> you do get privacy as a result are just damaging the argument
>> >> to get confidentiality by default.
>> >>
>> >> On 11/24/2013 10:05 AM, Mike Belshe wrote:
>> >>>  On Sun, Nov 24, 2013 at 1:00 AM, Yoav Nir <synp71@live.com> 
>>wrote:
>> >>>
>> >>>>   On 24/11/13 12:53 AM, Mike Belshe wrote:
>> >>>>
>> >>>>   Alexandre -
>> >>>>
>> >>>>   Your question is a reasonable one. And personally, I agree with 
>>Tim,
>> >>>>  that TLS is useful for all data transmitted over HTTP.
>> >>>>
>> >>>>   In the case of printers, TLS is absolutely suitable, and in 
>>fact
>> >>>>  employed today at most Fortune 500 companies. Imagine employees
>> >>>> being able
>> >>>>  to snoop insider information as it flows over wifi to the 
>>printers?
>> >>>>   Corporate IT guys long ago fixed this problem. Even for home 
>>use,
>> >>>> do you
>> >>>>  want your neighbor to steal your taxes as you print them?
>> >>>>
>> >>>>  My home does not have the IT department of a Fortune 500 
>>company.
>> >>>> I'm sort
>> >>>>  of a techie guy, so I can probably figure out the web interface 
>>of a
>> >>>>  network-attached printer. But how can I get a certificate for my
>> >>>> printer? I
>> >>>>  know you believe that the tools will magically appear to make
>> >>>> certificate
>> >>>>  management easy. But if the numbers stated are correct, that 30% 
>>of
>> >>>>  websites already have valid certificates, then the growth of 
>>HTTPS
>> >>>> will not
>> >>>>  be enough to make tools where none existed before.
>> >>>>
>> >>>
>> >>>  I doubt you would bother to install a valid certificate; you'd 
>>just
>> >>> use a
>> >>>  self-signed cert or something, and because you've said you don't
>> >>> care about
>> >>>  it, you'd ignore the warnings.
>> >>
>> >> First Yoav is entirely correct that getting a certificate that 
>>chains
>> >> up to a browser-trusted root is not doable for printers except 
>>within
>> >> enterprises that install their own roots into employee browsers. 
>>That
>> >> nicely shows up the benefit of the http:// URI via opportunistic 
>>TLS
>> >> in HTTP/2.0 approach again. That's the approach we should take. It
>> >> needs no new PKI tooling or admin to solve a problem that we have
>> >> repeatedly failed to solve over the last 20 years. The "problem" I
>> >> mean there is a PKI enrollment setup that just works well 
>>everywhere.
>> >> There isn't one. Not unless you completely re-do the CA business
>> >> model. And assuming that would not be a good thing when designing
>> >> HTTP/2.0. (The CA business model will evolve sure, and maybe in a
>> >> good direction, but we cannot assume that.)
>> >>
>> >> I find Mike's response very odd. Browsers have been adding UI to 
>>make
>> >> SSCs (self-signed certs) less usable for years. Now that's not an
>> >> easy thing to get right, since there are many web sites (e.g. one 
>>of
>> >> mine [1]) that use SSCs for reasonable purposes, but once you have
>> >> users accepting SSCs (as studies show does happen) then the
>> >> server-auth aspect of TLS is damaged. (In the case of [1] I'm fine
>> >> that it causes browser barfs since I use that to mess with students'
>> >> heads. And I did have a good reason to turn on confidentiality
>> >> since I used to host some kids-football-team contact info there.
>> >> I just have never had a good enough reason to deal with a real PKI
>> >> for it.)
>> >>
>> >>    [1] https://down.dsg.cs.tcd.ie/
>> >>
>> >>>>
>> >>>>   For media content, TLS is also appropriate.
>> >>>>
>> >>>>   Mike Bishop brought up the DRM case, which is fine, but 
>>somewhat
>> >>>>  orthogonal to HTTP. You see, DRM is protecting *data at rest* 
>>(e.g.
>> >>>>  stored on a disk somewhere) rather than *data-in-motion* (e.g. 
>>being
>> >>>>  transmitted over HTTP). There are many interesting facets to
>> >>>> data-at-rest
>> >>>>  protections. But they are out of scope for HTTP. HTTP is a transport
>> >>>> protocol
>> >>>>  and only deals with data-in-motion. It is worth elaborating on 
>>Mike's
>> >>>>  example, however, because he didn't mention that transmitting 
>>the
>> >>>>  DRM-protected data over HTTP contains metadata (e.g. the name of
>> >>>> the video,
>> >>>>  keywords, authors, people, links, etc) which can also be 
>>sensitive
>> >>>> itself.
>> >>>>   Should your neighbor know that you're watching porn right now? 
>>Should
>> >>>>  your neighbor know that you're researching <insert sensitive 
>>topic
>> >>>> here>?
>> >>>>   When discussing HTTP, we should be talking about data in motion
>> >>>> only, and
>> >>>>  even encoded content, like DRM content still benefits from TLS.
>> >>>>
>> >>>>   Ok, so when it comes to data in motion, how do we decide if TLS 
>>is
>> >>>>  useful?
>> >>
>> >> Here, Mike is entirely correct. The network stack cannot know when
>> >> the payload or meta-data are sensitive so the only approach that
>> >> makes sense is to encrypt the lot to the extent that that is 
>>practical.
>> >> Snowdonia should be evidence enough for that approach even for 
>>those
>> >> who previously doubted pervasive monitoring.
>> >>
>> >> HTTP is used for lots of sensitive data all the time in places that
>> >> don't use https:// URIs today. Sensitive data doesn't require any
>> >> life or death argument, it can be nicely mundane, e.g. a doctor
>> >> visit being the example Alissa used in the plenary in Vancouver.
>> >>
>> >> We now can, and just should, fix that. There's no hyperbole needed
>> >> to make that argument compelling.
>> >>
>> >>>>   First let's consider which parties are involved in an HTTP
>> >>>> transaction.
>> >>>>   There are:
>> >>>>      a) the user
>> >>>>      b) the server
>> >>>>      c) the middleware in between
>> >>>>
>> >>>>   The user, of course, expects his data to be sent privately. We 
>>don't
>> >>>>  want random others to view our communications. The only downside
>> >>>> would be
>> >>>>  if there were side effects to transmitting privately. For 
>>instance,
>> >>>> if my
>> >>>>  network were slower, or it cost more, etc - some users would be
>> >>>> willing to
>> >>>>  exchange private communications for lower cost. However, those 
>>of
>> >>>> us that
>> >>>>  have researched TLS in depth know that it can be deployed at 
>>near
>> >>>> zero cost
>> >>>>  today. This is contentious on this list.
>> >>>>
>> >>>>  It's not contentious, it's just false. Go to the pricing page 
>>for
>> >>>> Amazon
>> >>>> CloudFront CDN <http://aws.amazon.com/cloudfront/#pricing> (I
>> >>>> would have
>> >>>>  picked Akamai, but they don't put pricing on their website), and
>> >>>> you pay
>> >>>>  33% more plus a special fee for the certificate for using HTTPS.
>> >>>> That's
>> >>>>  pretty much in line with the 40% figure. That's real cost that
>> >>>> everybody
>> >>>>  has to bear. And you will get similar numbers if you host your 
>>site
>> >>>> on your
>> >>>>  own servers.
>> >>>>
>> >>>
>> >>>  I think you're thinking like an engineer. You're right, they do 
>>charge
>> >>>  more (and I'm right those prices will continue to come down). But 
>>those
>> >>>  prices are already TINY. I know 33% sounds like a lot, but this 
>>is
>> >>> not the
>> >>>  primary cost of operating a business. So if you want to do a 
>>price
>> >>>  comparison, do an all-in price comparison. And you'll find that 
>>the
>> >>> cost
>> >>>  of TLS is less than a fraction of a percent difference in 
>>operating
>> >>> cost
>> >>>  for most businesses.
>> >>>
>> >>>  And if you're not talking about businesses, but consumers, CDNs 
>>aren't
>> >>>  really relevant. As an example, I run my home site at Amazon for 
>>zero
>> >>>  extra cost, but I did buy a 5yr $50 cert.
>> >>>
>> >>
>> >> So if HTTP/2.0 uses TLS by default then the relevant difference
>> >> will be between HTTP/2.0 and HTTP/1.1 without TLS, right?
>> >>
>> >> If this wg does a good job on the overall efficiency of the
>> >> protocol then I can't see any good reason why a cloudy provider
>> >> wouldn't be able to offer the same pricing for HTTP/2.0 and
>> >> cleartext HTTP/1.1 (other than having to pay a CA, but then I
>> >> don't think we should require that). And that's about as much
>> >> as should concern us here I think.
>> >>
>> >>>>   But as we look forward, with computer speeds increasing and the
>> >>>> Internet
>> >>>>  growing larger, it is clear that any cost of TLS will only get
>> >>>> cheaper.
>> >>>>
>> >>>>  Right along with the price of serving plaintext HTTP. That is 
>>also
>> >>>> getting
>> >>>>  cheaper.
>> >>>>
>> >>>
>> >>>  agree
>> >>
>> >> Disagree. The costs of moving the bits around in clear will get
>> >> cheaper. But the risk associated with doing that is getting high
>> >> enough that e.g. /. today tells me [2] twitter turned on ECDH with
>> >> PFS. Plaintext is not really cheaper once there's a significant
>> >> enough adversary.
>> >>
>> >>    [2]
>> >> 
>>http://techcrunch.com/2013/11/22/twitter-enables-perfect-forward-secrecy-across-sites-to-protect-user-data-against-future-decryption/
>> >>
>> >>
>> >>>>    As a second counter argument, some on this list observe that 
>>some

Received on Monday, 25 November 2013 23:29:00 UTC