
Re: Fwd (TAG): Draft finding - "Transitioning the Web to HTTPS"

From: Eric J. Bowman <eric@bisonsystems.net>
Date: Sat, 20 Dec 2014 01:21:20 -0700
To: Mark Nottingham <mnot@mnot.net>
Cc: David Singer <singer@apple.com>, TAG List <www-tag@w3.org>, "public-privacy (W3C mailing list)" <public-privacy@w3.org>
Message-Id: <20141220012120.49ddf37a6d0fbe1dd4392f36@bisonsystems.net>
Mark Nottingham wrote:
> Eric J. Bowman wrote:
> > 
> > Mark Nottingham wrote:
> >> 
> >> What I find interesting is that by the numbers I’ve seen and talked
> >> to people about in the industry, the vast majority of people
> >> *don’t* use a proxy cache; that said, what we all seem to be
> >> concerned about are those specific cases where they are used, and
> >> they really help.
> >> 
> > 
> > Or, don't *think* they use a proxy cache. Most industry insiders
> > will say conneg is irrelevant, while using conneg to implement
> > compression, so I have low confidence that they're aware of various
> > devices between themselves and the websites they access.
> Sorry, what’s the logical link there? You’ve lost me...

That was another post I made to this thread. Regardless, what sort of
proxies are in the pipeline isn't the sort of thing most folks, even
industry types, seek to know. But they are out there. You know what I'm
saying -- even if my browser settings don't include the Squid cache on
my router, that router may very well alter content, or the website I'm
accessing may sit behind a load balancer.
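
On the conneg-for-compression point quoted above: that's just standard
Accept-Encoding negotiation, which any proxy in the path can
participate in. A minimal sketch (Python; the function name is mine,
purely illustrative):

```python
import gzip

# Minimal sketch of content negotiation as used for compression:
# the client advertises Accept-Encoding, and the server (or any
# intermediary in the path) picks a Content-Encoding accordingly.

def negotiate(body: bytes, accept_encoding: str):
    """Return (payload, content_encoding) based on the request header."""
    offered = [e.strip().lower() for e in accept_encoding.split(",")]
    if "gzip" in offered:
        return gzip.compress(body), "gzip"
    return body, "identity"

body = b"x" * 1000
payload, enc = negotiate(body, "gzip, deflate")
print(enc)                                # gzip
print(gzip.decompress(payload) == body)   # True
```

Anyone using compression this way is relying on conneg, whether they
think of it in those terms or not.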

Some of these devices don't cache, so perhaps we should just talk about
proxies, or intermediaries, not caches per se. The point is, there is
an installed ecosystem out there, one whose extent even insiders aren't
fully aware of, and HTTPS negates it. For each instance of malware
injection, I can find an instance of threat negation via an HTTP proxy.
For me, I guess it comes down to not throwing the baby out with the
bathwater.

> > 
> > I'm about to post this link in another response...
> > 
> > http://www.cs.washington.edu/research/security/web-tripwire/nsdi-2008.pdf
> > 
> > ...but it's interesting to note that aside from squid, there's no
> > overlap between that document's list of intermediaries, and one we
> > came up with on rest-discuss a few years back. They're called
> > "transparent" proxies for a reason, even if they don't cache, and
> > HTTPS threatens that entire ecosystem.
> That “ecosystem” is generally considered to be abusive and
> illegitimate by the IETF; there’s a long history of condemnation of
> “interception” a.k.a. “transparent” proxies in the IETF, and
> enumeration of lots of problems they cause. 

Generally. While I don't disagree, my ISP/webhost background tells me
that infrastructure is sticking around, and I'll have to play to it,
regardless of the IETF's stance on the matter -- especially if Net
Neutrality is done for.

Aren't popup blockers technically transparent proxies? They're too
ubiquitous to kick to the curb once market forces are considered.
Anyway, I'm not sure all the devices/software I was referring to are
technically "transparent" by the IETF's definition, sorry. I meant
transparent to the end user, even if that end user is an industry
insider. It's WYSIWYG 90% of the time; it's the other 10% most folks
don't notice, like when the IT guy set up the proxy in their browser
for them and they don't know any better.

> It also has never been a recognised mode of proxying in HTTP.

I just call 'em Layer 7 switches, which aren't going anywhere anytime
soon, especially in a post-Net-Neutrality environment. Correct me if
I'm wrong, but regardless of IETF recognition where caching is
concerned, my caching directives are mostly obeyed. Why *wouldn't* a
webhost/ISP with lots of legacy content buy a Layer 7 switch, IETF
concerns be damned?
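
By "directives" I mean the standard Cache-Control ones from RFC 7234,
which shared caches do honor. A toy sketch (Python; function names are
mine, and this models only explicit freshness, not a full cache) of
how a shared cache reads them:

```python
# Toy model of shared-cache freshness per RFC 7234 (illustrative only).
# A shared cache prefers s-maxage over max-age, and must not store
# responses marked "private" or "no-store".

def parse_cache_control(header: str) -> dict:
    """Parse a Cache-Control header into a {directive: value} dict."""
    out = {}
    for part in header.split(","):
        part = part.strip().lower()
        if not part:
            continue
        name, _, value = part.partition("=")
        out[name] = value if value else True
    return out

def shared_cache_lifetime(header: str):
    """Freshness lifetime in seconds for a shared cache,
    or None if a shared cache may not store the response."""
    cc = parse_cache_control(header)
    if "private" in cc or "no-store" in cc:
        return None
    if "s-maxage" in cc:          # shared caches honor s-maxage first
        return int(cc["s-maxage"])
    if "max-age" in cc:
        return int(cc["max-age"])
    return 0                      # heuristic freshness not modeled here

print(shared_cache_lifetime("public, s-maxage=600, max-age=60"))  # 600
print(shared_cache_lifetime("private, max-age=60"))               # None
```

That's the whole contract a Layer 7 switch needs to respect for my
legacy content to keep benefiting from it.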

> > 
> > While I can appreciate the desire for TAG to crank out a
> > producible, I have issues with anointing TLS when it doesn't
> > address the root problem of page integrity, while doing away with
> > caching I may very well need even more, if Net Neut goes the way of
> > the Dodo.
> I’m really not following you, sorry.

The PDF I linked to is hardly the only reference for intermediaries
altering content even over HTTPS. If we aren't solving the
content-integrity problem, then why are we doing away with shared
caching, again?
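
For what it's worth, the "web tripwire" technique in that paper boils
down to comparing what the client actually received against a
known-good digest of what was sent. A toy sketch of the idea (Python;
all names mine, and the real tripwire runs as in-page JavaScript rather
than server-side code):

```python
import hashlib

# Toy version of a content-integrity "tripwire": publish a digest of
# the page as sent, hash what was actually received, and flag any
# in-flight modification (injected ads, rewritten links, etc.).

def digest(page: bytes) -> str:
    return hashlib.sha256(page).hexdigest()

def tripwire(expected_digest: str, received_page: bytes) -> bool:
    """True if the page arrived unmodified."""
    return digest(received_page) == expected_digest

original = b"<html><body>Hello</body></html>"
tampered = b"<html><body>Hello<script src='ads.js'></script></body></html>"

published = digest(original)
print(tripwire(published, original))   # True
print(tripwire(published, tampered))   # False
```

Note this catches modification without forbidding it the way TLS does,
which is exactly the distinction I'm trying to draw.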

Received on Saturday, 20 December 2014 08:22:01 UTC
