
Re: Draft finding - "Transitioning the Web to HTTPS"

From: Eric J. Bowman <eric@bisonsystems.net>
Date: Wed, 10 Dec 2014 09:59:05 -0700
To: yan@mit.edu
Cc: Tim Bray <tbray@textuality.com>, Marc Fawzi <marc.fawzi@gmail.com>, Chris Palmer <palmer@google.com>, Bjoern Hoehrmann <derhoermi@gmx.net>, Mark Nottingham <mnot@mnot.net>, Noah Mendelsohn <nrm@arcanedomain.com>, "www-tag@w3.org List" <www-tag@w3.org>
Message-Id: <20141210095905.31e492dc480925aae00de93d@bisonsystems.net>
yan wrote:
>
> Eric J. Bowman:
> > Tim Bray wrote:
> >>
> >> But I really can’t take seriously the objection that cost is a
> >> serious obstacle to widespread TLS deployment.
> >>
> > 
> > I take it seriously. While your draft makes a good point or two on
> > this issue, I'd like to offer a couple of counterpoints.
> > 
> > Broken-ness
> > 
> > I've certainly noticed an increase in invalid-cert warnings when
> > using the Web. I'm not talking about the one-time costs associated
> > with implementing SNI in a load-balanced, virtual-hosting
> > environment, I'm talking about the knock-on costs to the
> > small-business content-creator when third- and even fourth-party
> > PKI implementations are bungled.
> > 
> > An expired cert on the part of, say, an ad provider (or an
> > advertiser using that provider) causes a pop-up warning for users.
> > At best, the site hosting those ads loses potential click-through
> > revenue. At worst, naive users assume the problem lies with the
> > site they're using and stop using it, resulting in direct loss of
> > revenue or indirect losses stemming from decreased activity on the
> > site.
> > 
> > Browsing through a descriptive link in software that doesn't
> > display the URL can make it non-obvious, even to experienced users,
> > that the cert in question isn't for the same domain as the site
> > being accessed. And we all know that people will just move on
> > rather than taking a moment to figure that out.
> 
> The ACME specification part of Let's Encrypt (https://letsencrypt.org)
> addresses this by making it much easier to renew certs:
> https://github.com/letsencrypt/acme-spec/blob/master/draft-barnes-acme.txt.
> I think you can just do it via a cron job.
> 
> (Let's Encrypt is launching a new cert authority in 2015 that will
> support the ACME protocol for automated cert issuance and management.)
> 

Thanks for the info, I'm just hearing about this stuff in this thread.
My problem remains: adopting HTTPS as the solution to privacy issues
seems too good to be true. I'd prefer to see some other solution
adopted, where I don't have to worry about user-confidence-sapping
pop-up warnings being incorrectly attributed to my website. Gewgaws
like ad services should just fail silently instead of warning visitors
to go away.

In real-world terms, that impacts small-business profitability, all to
solve a problem mainstream users either don't know about or don't care
about. If they do care, they're satisfied with sites that don't use
cookies in the first place, negating the need for HTTPS to keep those
cookies private. That leaves destination IP address as their only
exposure, and if they know and care about that, they use Tor, since
HTTPS doesn't solve that problem.
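To yan's point about renewal via cron, though, the sketch below shows
what that could look like. Note this is purely illustrative: Let's
Encrypt hasn't shipped tooling yet, so the client name and flags here
are placeholders, not a real CLI.

```shell
# Hypothetical crontab entry: every Monday at 03:00, ask an ACME client
# to renew any certs expiring within 30 days, then reload the web server
# so it picks up the new cert. "acme-client" and its flags are
# placeholders for whatever ACME implementation you actually deploy.
0 3 * * 1  acme-client renew --days 30 && service httpd reload
```

If something like this works as advertised, it would at least address
the expired-cert class of third-party breakage, if not misconfigured
SNI or mismatched domains.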

>
> > 
> > Hosting costs
> > 
> > It's possible to achieve both low latency and five-nines
> > reliability on a budget using HTTP, due to the lower implementation
> > cost of redundant systems based on "obsolete" CPUs. Moving to HTTPS
> > on such hardware comes with latency increases which may negatively
> > impact profitability due to user impatience. Avoiding this latency
> > penalty requires encryption co-processing, i.e. the
> > latest-and-greatest CPUs, increasing hosting costs at the expense
> > of profitability.
> 
> It's hard to evaluate these claims without numbers. What are the CPU
> specs? What's the order of magnitude of latency increase? What are
> your hosting costs compared to, say, AWS?
> 

Bison Systems' bespoke HA hosting was based on the SPARC T1 CPU, which
has one FPU and one encryption unit shared by four to eight cores. I
don't recall precise numbers, but the latency of occasional HTTPS
connections is certainly noticeable; multiple concurrent HTTPS
connections, more so. The T1 is simply not designed for all-HTTPS
workloads; the latency increase could probably be measured with a
stopwatch on an eight-core.
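For anyone who wants numbers rather than a stopwatch, OpenSSL's
built-in benchmark runs locally with no network involved; RSA-2048
private-key operations per second roughly bound full TLS handshakes
per second per core, so it's a quick way to compare a T1 core against
newer hardware:

```shell
# Benchmark RSA-2048 sign/verify throughput on this machine for one
# second per operation type. Signs/sec approximates the ceiling on
# full TLS handshakes/sec a single core can terminate.
openssl speed -seconds 1 rsa2048
```

The absolute numbers depend on the OpenSSL build, but the relative gap
between CPUs is what matters for the hosting-cost argument.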

If HTTPS is only used to secure transactions, the added latency is
insignificant next to that of credit-card processing. High startup
costs for the initial four-core servers amortized nicely, as
additional/replacement units have steadily decreased in cost, down to
$60 on eBay for an eight-core nowadays. Shaving a few ms from latency
never cost-justified a CPU upgrade; the nice thing about the T1000
server is its throughput under load, with no latency increase below
its saturation point.

We were early adopters of both the T1 CPU and AWS in 2006. Colocation
of servers, in two datacenters in different regions, was and remains
$75/2U, including a bandwidth allocation which has steadily increased.
UltraDNS' Directional DNS service implements geo-balancing, while
their SiteBacker service implements failover, at an entry-level cost
of ~$150/month. We set up an "all systems go" test page that includes
the string "xyzzy", pulled late in rendering from the DB, so failover
is based on successful completion of a full render; AWS just checks
the server heartbeat, not the httpd or the DB.
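The check described above amounts to fetching the status page and
testing for the sentinel string; a minimal sketch of the client side
(the URL, timeout, and function name are illustrative, and a monitor
like SiteBacker does the equivalent server-side):

```shell
#!/bin/sh
# Health check that only passes if a full render succeeded: fetch the
# status page and require the DB-sourced sentinel string "xyzzy" to be
# present. A plain heartbeat can't tell a live httpd from a live stack.
check_page() {
  body="$(curl -fsS --max-time 5 "$1")" || return 1
  printf '%s\n' "$body" | grep -q 'xyzzy'
}
```

The point of the sentinel is that it only appears if the DB query ran,
so a passing check vouches for the httpd, the app, and the DB at once.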

So, ~$450/month plus hardware costs at minimum; external storage and
dedicated DB boxes came later. I don't recall AWS pricing, as we
dropped AWS after about six months. It's kind of an apples-to-oranges
comparison even if monthly costs were similar (using two "availability
zones" on AWS), as we bought hardware and incurred R&D costs
optimizing the conceptual implementation and software compilation.

That being said, given the less-than-stellar reliability of AWS over
time, the extra costs of the bespoke solution could very well have been
equalled, or even exceeded, paying out compensation per our SLA. We
never encountered downtime, and in fact drank the Kool-Aid and didn't
expect we ever would on AWS. What we really didn't like about AWS were
the "degradation events" increasing latency above and beyond that
inherent in any virtualized setup. Our solution was old-fashioned
virtual hosting.

Also, the quality of support from my DNS and collocation providers is
stellar, especially as compared to that of AWS, double-especially when
they have issues (according to the Web-at-large).

In retrospect, shoulda charged more and added low latency to the SLA.
That system was sold when I closed up shop, and I can no longer vouch
for it. It should also be noted that, aside from 9/11, clients we hosted
on New York Internet shared hosting never had downtime, and only rarely
encountered latency issues, dating back to Y2K.

> 
> Just a note: HTTPS Everywhere (https://eff.org/https-everywhere) is a
> browser extension maintained by EFF and has nothing to do with the TAG
> findings.
> 

Forgot about that, thanks. I'll start saying "ubiquitous HTTPS."

-Eric
Received on Wednesday, 10 December 2014 16:59:52 UTC
