- From: Adrien W. de Croy <adrien@qbik.com>
- Date: Thu, 13 Sep 2012 23:46:22 +0000
- To: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>, "Phillip Hallam-Baker" <hallam@gmail.com>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
------ Original Message ------
From: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
To: "Phillip Hallam-Baker" <hallam@gmail.com>
Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Sent: 14/09/2012 2:56:21 a.m.
Subject: Re: Semantics of HTTPS

> On 09/13/2012 02:47 PM, Phillip Hallam-Baker wrote:
>
>> 3) Provide a comprehensive mechanism that is conditioned on informed
>> consent.
>
> I'm not at all sure that this option is even feasible for https.
>
> There is a 4th option: leave the e2e semantics as-is and write an
> RFC called "HTTPS MITM considered harmful" that explains the
> issues and trade-offs and says why we don't want to standardise
> that (mis)behaviour.

"Misbehaviour" in this context is a subjective opinion. I think that pretty much amounts to sticking our heads in the sand.

If it's considered so harmful, why is pretty much every proxy vendor implementing it? And is it more or less harmful than the harm caused by viruses, or by browser-hijacking sites which use HTTPS, against which we'd otherwise have no defence?

I don't know that it gets anywhere trying to make everyone's decisions for them. We argue why anyone would want to have their HTTPS traffic inspected, and assume that no one in their right mind would ever want that for any reason, even if the alternative were no access at all. I suggest you wouldn't actually have to look very far to find people who trust their company enough to proceed to Facebook or Google search knowing that the traffic is being scanned for malware.

You can't avoid a requirement to trust something. Trying to avoid it, or pretending the requirement isn't there, is IMO what causes security and trust problems in the first place.

Adrien

> S
Received on Thursday, 13 September 2012 23:46:49 UTC