Re: Optimizations vs Functionality vs Architecture

On Tue, Aug 21, 2012 at 9:36 AM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:
> In message <CAMm+LwjSVHzRQS3W4NLBQfe+Bmpk2c5ovuOtrNjOSx1EDBDG0g@mail.gmail.com>
> , Phillip Hallam-Baker writes:
>
>>Unlike most protocol proposals, HTTP/2 is big enough to drive changes
>>in infrastructure. If HTTP/2 will go faster over a purple Internet
>>connection then pretty soon there will be purple connections being
>>installed.
>
> Provided, and this is a very big variable, that HTTP/2 brings enough
> benefits to pay for the purple internet.
>
> IPv6 should have taught everybody, that just because the IETF is
> in party mode, doesn't mean that wallets fly out of pockets.

The IPv6 camp tried a different approach: blocking functionality.
Specifically, they decided that if you wanted more than one computer
at home, you had to demand IPv6 from your ISP.

Then, when people started using NAT boxes instead, they went on a
foot-stomping rant trying to stamp them out. They botched IPsec in
the process, so that none of the companies I have worked for that
have IPsec VPNs has been able to use them with the stock Windows or
Mac clients.

Only recently has it started to penetrate that NAT is the way to
deploy IPv6 and that people are not going to wait for an
infrastructure deployment to complete to get required functionality.


That is the important difference between 'purple is better' and
'purple is essential'.


> I would advocate a much more humble attitude, where we design
> and architect expecting few changes, but such that we can
> benefit from all we can cause to happen.

You are proposing to change the software running on about 4 billion
computers. There is nothing remotely humble about that.

I think we have to be realistic and not place too big a bet on purple.
But the critical thing is that we can decide what purple looks like.
We can probably get one piece of Internet infrastructure fixed here.

Fixing DNS at the LAN side is the biggest win we can hope for right now.


>>It is pretty clear to me that we are going to have to support some
>>form of in-band upgrade.
>
> This is actually one of the things users are asking for:  Upgrade
> from insecure to secure connection without a new TCP.

I am pretty sure that it is impossible to do that in a way that is not
vulnerable to a downgrade attack. Encrypting the transport is of zero
security benefit unless you can be reasonably sure you are encrypting
under a key that the attacker does not know.
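
To make the downgrade concrete, here is a minimal sketch, assuming an
RFC 2817-style in-band Upgrade header; the attacker logic and header
handling are illustrative only, not any particular implementation:

    def attacker_rewrite(request_headers):
        """Man-in-the-middle: strip the in-band security upgrade offer."""
        downgraded = dict(request_headers)
        downgraded.pop("Upgrade", None)      # delete the TLS offer
        downgraded.pop("Connection", None)   # and the hop-by-hop signal
        return downgraded

    client_request = {
        "Host": "example.com",
        "Upgrade": "TLS/1.2",        # client offers to upgrade in-band
        "Connection": "Upgrade",
    }

    # The server sees an ordinary cleartext request and answers in
    # cleartext. Neither end has an authenticated signal that an
    # upgrade was ever offered, so the downgrade goes undetected.
    assert "Upgrade" not in attacker_rewrite(client_request)

The only way out is some out-of-band knowledge that the peer supports
security, which an in-band-only upgrade by definition does not give you.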

This was not a big concern when the threat model was bank-fraud types
like the Russian Business Network. Now the threat comes from the
government that gives them safe harbor, from Iran, and from various
other governments.


> A closer analysis shows that it would be even better if secured
> and unsecured requests could share a connection (think intermediary
> to server for instance.)

I think we need to stop talking about 'intermediaries' in the
abstract and specify whether they are client-side or server-side. It
makes a very big difference.


> Such a mixed mode might even allow opportunistic negotiation of
> security, even before the user has pushed the "login" button.

That is actually a model I am working on in Omnibroker. I replace the
DNS client connection completely, so that instead of taking DNS
packets from the local (likely mucked-up) DNS resolver, I take them
from a trusted service that the user (or enterprise) chooses. This
would likely be something provided by Comodo or McAfee or Symantec as
part of their AV suite.

One big performance gain here is that there only needs to be one UDP
request/response rather than the six concurrent requests Google were
talking about. Instead of the IKEA approach of asking for connection
parts, the client just asks 'How do I get to DNS name X with protocol
Y?' and back comes the IP address, port, transport security and key
authentication info.
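
A minimal sketch of what that single exchange might look like; the
JSON encoding, field names and broker address are my assumptions for
illustration, not the actual Omnibroker wire format:

    import json
    import socket

    def query_broker(broker_host, broker_port, dns_name, protocol):
        """One UDP round trip: 'How do I get to <dns_name> with
        <protocol>?' The broker answers with everything needed to
        open the connection."""
        request = json.dumps({"name": dns_name, "protocol": protocol})
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        sock.sendto(request.encode("utf-8"), (broker_host, broker_port))
        data, _ = sock.recvfrom(4096)
        return json.loads(data)

    # Hypothetical response, all fields illustrative:
    # {"address": "192.0.2.10", "port": 443,
    #  "transport": "tls/1.2",
    #  "key_auth": "<certificate or key fingerprint>"}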

This model is actually implicit in the traditional 'recursive
resolver' model. DNS is a pretty wacky protocol that very few
platforms even attempt to implement. They push the complexity onto
the recursive resolver for a good reason - the protocol totally sux.
Rather than fix some obvious design oversights (e.g. how does a
client resolve the address of a DNS server whose own name is served
by the very zone it hosts?) the designers resorted to kludges like
glue records.
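
To see the chicken-and-egg that glue records paper over, here is a
small sketch using the dnspython library; the zone and the
parent-server address are placeholders:

    # Delegation where the nameserver's name lives inside the zone
    # it serves:
    #
    #   example.org.      NS  ns1.example.org.
    #   ns1.example.org.  A   192.0.2.53      <- the "glue"
    #
    # To reach ns1.example.org you must first ask ns1.example.org,
    # so the parent breaks the loop by volunteering the A record
    # (the glue) in the additional section of its referral.

    import dns.message
    import dns.query

    PARENT_SERVER = "192.0.2.1"   # placeholder for a parent zone server

    query = dns.message.make_query("example.org", "NS")
    referral = dns.query.udp(query, PARENT_SERVER, timeout=2.0)

    print(referral.authority)     # the delegating NS records
    print(referral.additional)    # the glue A/AAAA records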


>>* 2016 Latency: Performance after 80% of the network has been upgraded
>>to support the new spec as fast as possible
>
> Not agreed, see above.

OK, can we agree if I stipulate that changes have to be modest and achievable?


>>* 2020 Latency: Performance when the remaining legacy can be ignored
>>as far as latency issues are concerned, people using 2012 gear in 2020
>>are not going to be getting world class latency anyway.
>
> Not agreed, I doubt HTTP/2.0 will have more than 40% market share
> by 2020.

Depends on how fast we implement.


-- 
Website: http://hallambaker.com/
