
Re: The TLS hammer and resource integrity

From: Roberto Peon <grmocg@gmail.com>
Date: Wed, 28 Mar 2012 23:30:03 +0200
Message-ID: <CAP+FsNep9bYoOfCPp0Nd5tc2emY7psM-WkEHpq-YHYZCDjhLHA@mail.gmail.com>
To: Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc: Mike Belshe <mike@belshe.com>, patrick mcmanus <pmcmanus@mozilla.com>, ietf-http-wg@w3.org
On Wed, Mar 28, 2012 at 8:22 PM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:

> In message <CAP+FsNeB8d0i=8ZCkhWZygYw7wUcfGtHRqoiJw1EWENZy_YnUg@mail.gmail.com>, Roberto Peon writes:
> >I'm not sure that SPDY with its one SSL connection today is more expensive
> >than today's HTTP with its 6 to 40-something connections per page.
> You seem to miss the point:
> There are at least four relevant protocols to discuss:
> 1. HTTP/1.1 as we know and hate it.
> 2. SPDY
> 3. A newly designed HTTP/2.0 fast-bulk-transport.
> 4. The idea much better than 1, 2, & 3 that nobody has thought of yet.
> You talk as if there are only 1 & 2; ignoring #4 is particularly
> shortsighted.

Huh? I'm presenting data about what we know to be good or bad. I'm
perfectly receptive to things that are or will be better... and my
impression is that we're arguing (and expressing different opinions) about
what we believe is better.
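For concreteness, the connection-count tradeoff under discussion can be
modeled with a back-of-envelope sketch. The numbers below are illustrative
assumptions, not measurements: a TCP handshake costs roughly 1 RTT, and a
full TLS handshake of that era roughly 2 additional RTTs.

```python
# Rough model of per-page connection-setup cost. All constants are
# assumptions for illustration, not measured values.

TCP_RTTS = 1  # assumed cost of a TCP three-way handshake
TLS_RTTS = 2  # assumed cost of a full TLS handshake (circa 2012)

def http11_wallclock_rtts(connections: int) -> int:
    """Cleartext HTTP/1.1: browsers open connections in parallel,
    so the wall-clock setup cost is roughly one TCP handshake."""
    return TCP_RTTS

def http11_total_handshakes(connections: int) -> int:
    """Aggregate work: every one of the 6-40+ connections still
    pays its own handshake (server- and network-side cost)."""
    return connections * TCP_RTTS

def spdy_wallclock_rtts() -> int:
    """SPDY: a single TCP+TLS connection; all requests are then
    multiplexed over it as streams."""
    return TCP_RTTS + TLS_RTTS

if __name__ == "__main__":
    for n in (6, 40):
        print(f"HTTP/1.1 x{n}: ~{http11_wallclock_rtts(n)} RTT wall-clock, "
              f"{http11_total_handshakes(n)} handshakes total")
    print(f"SPDY x1:  ~{spdy_wallclock_rtts()} RTTs wall-clock, 1 handshake total")
```

The point the model makes is that the comparison is not one-sided: SPDY
pays more RTTs up front for its single connection, but avoids paying a
handshake per connection across dozens of connections.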

> But let's stop beating about the bush:
> SPDY is designed for Google's needs, and it probably serves Google well,
> in particular by imposing end-to-end TLS to keep telcos out of Google's
> data and giving end-to-end user identification.
> But not everybody is Google, and just because Google hacked together
> SPDY does not mean that it should be rammed through the IETF the way
> Microsoft managed to ram OOXML through ISO.

I think we agree that *nothing* should be rammed through a standards body
(gosh, that sounds painful!), and that is why we're here!

The intent of SPDY was and is to make the web (all of it) better and faster
for all. The SPDY effort has been open basically from the get-go, and
inclusive of anyone who would contribute code, effort, and data.
One of the cute things about SPDY is that it was bound to enhance other
sites' performance more than Google's... so, all in all, I think that what
you said is a grossly unfair characterization.

> Right now we have two complementary HTTP protocols, and I think that
> works because they are exactly that:  Complementary.  One offers
> cryptographic services, the other offers speed.

The site's interests often conflict with the user's interests. Many sites
don't turn on HTTPS when it would clearly benefit the user. Many sites leak
auth cookies, etc., because they just don't know any better, even as their
users are increasingly eschewing landlines.

The interests don't always align... what behavior do we want to incent?

> I have *REPEATEDLY* tried to inject into the HTTP/2.0 discussion that
> what we need are pluggable transport protocols, because the same
> complementary needs exist, and precisely to be future-compatible.
> HTTP/2.0 should be standardized as the semantics of the transported
> messages, and the protocols which transport the messages.
> Amongst these protocols should be one supporting crypto-services
> and high feature levels, for the money-bearing transactions on
> the web.  SPDY would fill that niche well, I think.
> But there also needs to be another to shovel pink bits across the net
> as fast as possible, with as little latency and overhead as possible,
> so the big spikes are possible to serve. (Think CNN on 9/11!)

I think there are other and/or better solutions than getting rid of
security and privacy for the user. When I do the game theory:  a site which
can deploy a lower cost solution at the detriment of a user's privacy will
likely do so. Most businesses find local, not global maxima... Is that what
we want to incent?
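The pluggable-transport idea quoted above (standardize the message
semantics once, let multiple transports carry them) can be sketched as an
interface. Every name below is invented for illustration; this is not a
proposed API, just a shape of the separation being argued over.

```python
# Hypothetical sketch of "pluggable transports": HTTP/2.0 semantics are
# defined once as messages, and different transports (a crypto-bearing
# SPDY-like one, and a minimal fast-bulk one) carry the same messages.
# All class and method names here are made up for illustration.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class HttpMessage:
    """Transport-independent HTTP semantics: method, target, headers, body."""
    method: str
    target: str
    headers: dict = field(default_factory=dict)
    body: bytes = b""

class Transport(ABC):
    """The interface HTTP/2.0 would standardize against, per the proposal."""
    @abstractmethod
    def send(self, msg: HttpMessage) -> bytes: ...

class SecureTransport(Transport):
    """SPDY-like niche: multiplexed, TLS-protected framing (stubbed)."""
    def send(self, msg: HttpMessage) -> bytes:
        frame = f"{msg.method} {msg.target}".encode() + msg.body
        return b"TLS(" + frame + b")"  # stand-in for real encryption

class BulkTransport(Transport):
    """Minimal-overhead niche for big cleartext spikes (stubbed)."""
    def send(self, msg: HttpMessage) -> bytes:
        return f"{msg.method} {msg.target}\r\n\r\n".encode() + msg.body

if __name__ == "__main__":
    msg = HttpMessage("GET", "/index.html")
    for t in (SecureTransport(), BulkTransport()):
        print(type(t).__name__, t.send(msg))
```

The design question the thread is circling is exactly whether the second
(cleartext) implementation of that interface should exist at all.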

> All I am really asking is that SPDY becomes *a* HTTP/2.0 transport,
> rather than become *the* HTTP/2.0 transport.

I'm only hoping that we take the lessons learned from the SPDY deployments
and not throw them away :)

We know SSL has a cost. It also has significant benefits for deployment and
user security/privacy. I'm asking that we look at the motivations of the
various actors and ensure that the net result is in the user's interest.
This definitely includes second-order effects of the decisions. If we make
serving pages too expensive to bear, pages won't be served and the user's
interests won't be served. If we make pages reasonably expensive, so that
they're still served, and offer features that benefit the users and shade
more towards the global (instead of local) maximum... that is a good final
state.

Received on Wednesday, 28 March 2012 21:30:32 UTC
