
Optimizations vs Functionality vs Architecture

From: Phillip Hallam-Baker <hallam@gmail.com>
Date: Tue, 21 Aug 2012 09:19:25 -0400
Message-ID: <CAMm+LwjSVHzRQS3W4NLBQfe+Bmpk2c5ovuOtrNjOSx1EDBDG0g@mail.gmail.com>
To: ietf-http-wg@w3.org

I am seeing a lot of overlapping discussions in the 'upgrade' thread
where the positions stem from different views as to the importance of
different types of optimization. Specifically we have a choice:

1) Optimize for the state of network conditions now

2) Optimize for the state of network conditions we expect to exist later

3) Design for some sort of balance between the two.

Unlike most protocol proposals, HTTP/2 is big enough to drive changes
in infrastructure. If HTTP/2 will go faster over a purple Internet
connection then pretty soon there will be purple connections being
installed. It will be a long time before every connection is purple,
but we can get to mostly purple pretty quickly.

It is obvious that HTTP/2 has to establish a connection in every
situation where HTTP/1 will work. It is far from obvious that HTTP/2
must be a strict Pareto improvement over HTTP/1, never resulting in
worse service in any situation.


I do not like the Google approach of hyper-optimizing the protocol to
today's network conditions for that reason. Saving one packet in 20%
of cases might seem important now, but not if doing so locks us into a
scheme that will be sub-optimal later on.

In particular I do not want to see the whole design compromised for
the sake of reducing latency by a hair in Panera.

If we go for only doing in-band upgrade, it means that we always have
to play the guessing game of HTTP/1.1 versus 2.0 every time we try to
connect. And this will become even harder if we ever decide to do an
HTTP/3.0.
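To make the guessing game concrete, here is a minimal sketch of what an in-band upgrade forces on every client: speculatively offer the new protocol in an HTTP/1.1 request, then branch on whether the server answers 101 Switching Protocols. The function names are hypothetical, and the "HTTP/2.0" upgrade token reflects 2012-era discussion, not any final spec.

```python
# Sketch of the in-band upgrade "guessing game" (hypothetical names;
# the upgrade token "HTTP/2.0" is illustrative, not a settled value).

def build_probe_request(host: str, path: str = "/") -> bytes:
    """An HTTP/1.1 request that speculatively offers to upgrade."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: Upgrade\r\n"
        "Upgrade: HTTP/2.0\r\n"
        "\r\n"
    ).encode("ascii")

def negotiated_protocol(status_line: str) -> str:
    """Decide which protocol to speak from the server's status line.

    101 Switching Protocols means the server accepted the upgrade;
    anything else means fall back to plain HTTP/1.1.
    """
    parts = status_line.split(None, 2)
    if len(parts) >= 2 and parts[1] == "101":
        return "HTTP/2.0"
    return "HTTP/1.1"
```

Note that every new version added to this scheme widens the branch: a client probing for 3.0 must still be prepared to land on 2.0 or 1.1.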


It is pretty clear to me that we are going to have to support some
form of in-band upgrade. But even if that turns out to be the fastest
choice in 2012, deciding to do only in-band upgrade means that we are
locked into a sub-optimal solution in perpetuity.

2012 Latency should be only one of the deciding factors. Other
factors should include:

* Can we move to an HTTP/3.0 in this scheme? If not, it's a non-starter.

* 2016 Latency: Performance after 80% of the network has been
upgraded to support the new spec, assuming deployment proceeds as
fast as possible

* 2020 Latency: Performance when the remaining legacy can be ignored
as far as latency is concerned; people using 2012 gear in 2020 are
not going to be getting world-class latency anyway.

* IP Port saving: We are racking up Web Services pretty fast at the
moment. At this point there are far more than 65535 Web Services in
common use, so port numbers are a dead letter.

* Web Services direct connect

Well Known service URLs are only a partial solution when a domain is
serving multiple Web Services. If you have 20+ Web services on your
site that all have a significant CPU load you have basically two
solutions today. The first is to plan ahead and give each service its
own DNS name so that they can be mapped to hosts as required (thus
making WKS irrelevant). The second is to deploy a two-tier scheme in
which the front end is just a redirect to whatever back end is doing
the work. This is a very common configuration; it essentially doubles
the number of machines you need and increases latency, for no real
benefit other than making administration possible.
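A minimal sketch of that two-tier configuration, with hypothetical back-end host names: the front end does no work of its own, it only maps a path prefix to the back end and answers with a redirect, adding a full round trip before anything useful happens.

```python
from http.server import BaseHTTPRequestHandler

# Sketch of the two-tier scheme described above (host names are
# hypothetical): a front end whose only job is to redirect each Web
# Service path prefix to the back end actually doing the work.
BACKENDS = {
    "/calendar": "http://backend1.example.com",
    "/mail": "http://backend2.example.com",
}

def route(path: str):
    """Map a request path to the back-end URL it should redirect to."""
    prefix = "/" + path.lstrip("/").split("/", 1)[0]
    backend = BACKENDS.get(prefix)
    return None if backend is None else backend + path

class RedirectFrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        target = route(self.path)
        if target is None:
            self.send_error(404)
        else:
            self.send_response(302)  # the extra round trip the tier adds
            self.send_header("Location", target)
            self.end_headers()
```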

SRV and URI records permit a direct connection to the host for a
specific Web Service. There is no need to do tiering just for the
sake of mapping the protocol onto the HTTP Web Service endpoint.
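The SRV alternative can be sketched as follows, using the RFC 2782 selection rules (lowest priority wins, weights break ties). The record data is hard-coded here for illustration; a real client would resolve a name of the form _service._proto.domain in the DNS instead.

```python
import random

# Sketch of SRV-based direct connection (RFC 2782 selection rules).
# (target, port, priority, weight) tuples, hard-coded for illustration.
records = [
    ("backend1.example.com", 8080, 10, 60),
    ("backend2.example.com", 8080, 10, 40),
    ("fallback.example.com", 8080, 20, 0),
]

def pick_target(records, rng=random):
    """Pick a (host, port) to connect to directly: take the lowest
    priority class, then choose within it by relative weight."""
    best = min(r[2] for r in records)
    candidates = [r for r in records if r[2] == best]
    total = sum(r[3] for r in candidates)
    if total == 0:
        chosen = rng.choice(candidates)
    else:
        point = rng.uniform(0, total)
        for rec in candidates:
            point -= rec[3]
            if point <= 0:
                chosen = rec
                break
    return chosen[0], chosen[1]
```

The client connects straight to the chosen host and port; the redirect-only front-end tier disappears, and the weights double as a crude load-balancing knob.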
Received on Tuesday, 21 August 2012 13:19:53 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Tuesday, 21 August 2012 13:19:59 GMT