
Re: Moving forward on improving HTTP's security

From: Willy Tarreau <w@1wt.eu>
Date: Thu, 14 Nov 2013 22:03:05 +0100
To: Roberto Peon <grmocg@gmail.com>
Cc: Gili <cowwoc@bbs.darktech.org>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20131114210305.GL7262@1wt.eu>
On Thu, Nov 14, 2013 at 12:50:58PM -0800, Roberto Peon wrote:
> That has to be one of the funniest things I've heard in a long time!
> 
> No, sorry, I did actually deal with developers and real deployments. Large
> numbers of them, actually.
> 
> Let me rephrase this.
> 
> Developers WILL choose a solution that is 99.99% reliable with a clear and
> understandable failure mode over a solution that is broken 10-20% of the
> time in unpredictable and unfixable ways.
> 
> Or are you debating this and saying that developers will choose to deploy
> something broken that they can't fix?

No, but that's generally very complicated in architectures with many
interacting components being developed in parallel. The specs change all
the time as problems are found. Developers want something reliable that
looks close to what production looks like. Except they start by using
private names for their servers, then resort to ugly hacks to rewrite the
hard-coded names in the HTML output (I've even seen an Apache module
running "sed" for that purpose; I didn't know it existed). They also
quickly have to deal with multiple versions of each component in parallel,
and need to be able to quickly change an address or port in their config to
test whether a regression comes from their code or from the other team's.

All this stuff generally runs with hard-coded IP addresses everywhere,
because the delay to fill in the form to get the internal services to
update a DNS entry is one week, while editing windows/system32/drivers/etc/hosts
is immediate; but that creates more bugs, since behaviour then depends on
which developer's machine the test is run on (exactly the reliability issue
you're talking about). At the end of the day you have 20 IP addresses and
40 ports assigned to the various components, only 3-4 of them are used all
day, and the rest experience a slow death with outdated code and configs.
All these IPs and ports are unmanageable with HTTPS, and you see hacks
everywhere to rewrite https://mysite.com/ to
http://192.168.34.167:17890/ to make things work on their development servers.
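For what it's worth, the sed-based rewrite hack I'm describing is roughly this
(a sketch only; the hostname and address:port are the examples above, and in
practice it runs as an output filter on the server, not by hand):

```shell
# Rewrite the hard-coded production URL in the HTML output to the
# developer's private address:port. Note a hosts-file entry can only
# remap the name to an IP, not the port, hence the full URL rewrite.
echo '<a href="https://mysite.com/login">Log in</a>' \
  | sed 's|https://mysite\.com/|http://192.168.34.167:17890/|g'
# prints: <a href="http://192.168.34.167:17890/login">Log in</a>
```

Every developer ends up with a slightly different variant of this, which is
exactly where the per-machine behaviour differences come from.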

I had to work in such an environment, and it's hell: you pay people to
troubleshoot connectivity issues more often than to test their real code.
Things go smoothly after the code is released, however, because the specs
don't change much anymore and the components can be kept as-is.

Willy
Received on Thursday, 14 November 2013 21:03:29 UTC
