Re: [EME] HTTPS performance experiments for large scale content distribution

On Mon, Oct 27, 2014 at 10:16 AM, Mark Nottingham <mnot@mnot.net> wrote:

> Hi Mark,
>
> That’s interesting. I’m assuming by “optimisations that, with HTTP, can
> avoid data copies to/from user space” you mean sendfile() (since you’re a
> FreeBSD shop).
>

Yes.

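For anyone not familiar with the mechanics, here is a minimal sketch of
the plaintext fast path, using the FreeBSD sendfile(2) signature. The
names (serve_plaintext, file_fd, sock_fd) are illustrative, not our
production code, and real code would handle EAGAIN/EINTR properly.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Push an already-open file to a connected socket with zero
     * user-space copies: the kernel moves pages straight from the
     * buffer cache to the NIC. */
    static int serve_plaintext(int file_fd, int sock_fd, off_t len)
    {
        off_t off = 0;

        while (off < len) {
            off_t sent = 0;
            if (sendfile(file_fd, sock_fd, off, (size_t)(len - off),
                         NULL, &sent, 0) == -1)
                return -1;    /* real code: retry on EAGAIN/EINTR */
            off += sent;
        }
        return 0;
    }

With TLS terminated in user space, every one of those bytes instead
has to be read into the process, encrypted, and written back out,
which is where much of the capacity hit comes from.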

> This reminds me very much of discussions around HTTP/2. TCP splice and
> sendfile() aren’t nearly as useful in a framed protocol, and while we’ve
> increased the maximum frame size to make it possible to accommodate them,
> it’s very likely that none of the browsers will ever negotiate anything
> bigger than a 16K frame size, meaning that these techniques are essentially
> useless with the new protocol. For us (i.e., the HTTP WG), this was an
> explicit design tradeoff; the benefits (especially in terms of congestion
> control and other network effects) of using one connection and having
> responsive multiplexing outweighed these concerns.
>
>  So, it’s really gratifying to see that you see the possibility to reduce
> the overhead of TLS (as compared to zero-copy plaintext) so much.
>
> If they’re public, would you mind sharing the optimisations you’re looking
> at? I.e., is it just availability of AES-NI, or something else?
>

AES-NI is the Intel instruction-set extension for accelerating AES
operations, right? We were already using that in our test, I believe.
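For what it's worth, here is a minimal sketch of how one can confirm
the CPU at least advertises the instructions (CPUID leaf 1, ECX bit
25), assuming a GCC/Clang toolchain on x86; whether the TLS library
actually exercises them is a separate question.

    #include <cpuid.h>
    #include <stdio.h>

    /* CPUID leaf 1, ECX bit 25 advertises the AES instructions. */
    #define AESNI_ECX_BIT (1u << 25)

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) &&
            (ecx & AESNI_ECX_BIT))
            puts("AES-NI available");
        else
            puts("AES-NI not available");
        return 0;
    }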

I don't think we have anything more than the opinions of some engineers
as to what might be possible, and I wouldn't assume that it would work
with HTTP/2 framing. We don't have a project to actually implement
these optimizations, just as we don't have a project to migrate
everything to HTTPS at this time.
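To make the framing concern concrete: every HTTP/2 DATA frame is
preceded by a 9-octet header (24-bit length, 8-bit type, 8-bit flags,
and a 31-bit stream identifier) that has to be generated in user
space. A sketch of that layout, illustrative only:

    #include <stdint.h>

    /* Serialize the 9-octet HTTP/2 frame header that precedes
     * every DATA frame. */
    static void h2_frame_header(uint8_t out[9], uint32_t payload_len,
                                uint8_t type, uint8_t flags,
                                uint32_t stream_id)
    {
        out[0] = (payload_len >> 16) & 0xff;  /* 24-bit length   */
        out[1] = (payload_len >> 8)  & 0xff;
        out[2] =  payload_len        & 0xff;
        out[3] = type;                        /* 0x0 = DATA      */
        out[4] = flags;
        out[5] = (stream_id >> 24) & 0x7f;    /* R bit + stream  */
        out[6] = (stream_id >> 16) & 0xff;    /*   identifier    */
        out[7] = (stream_id >> 8)  & 0xff;
        out[8] =  stream_id        & 0xff;
    }

With a 16K maximum frame size, a server has to interleave one of these
headers into the stream every 16K of payload, which is part of why a
single whole-response sendfile() doesn't map onto HTTP/2 cleanly.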

...Mark

>
> Cheers,
>
>
> > On 25 Oct 2014, at 5:00 am, Mark Watson <watsonm@netflix.com> wrote:
> >
> > All,
> >
> > We have done some testing on the Netflix CDN with HTTPS. We dedicated
> several servers to serving only HTTPS traffic and directed traffic from our
> Silverlight clients to those servers in order to measure the serving
> capacity, as compared with similarly situated servers serving over HTTP.
> >
> > We discovered that with our existing hardware/software stack [1] we
> would incur a capacity hit of between 30% and 53% using HTTPS, depending
> on the server hardware/software version. This is due in part to the
> computational overhead of encryption itself (despite use of Intel hardware
> acceleration) and in part to the unavailability of optimizations that,
> with HTTP, can avoid data copies to/from user space. This is not a
> capacity hit we could absorb in the short term, and we estimate the costs
> over time would be in the $10s to $100s of millions per year.
> >
> > Our current rough estimates indicate that, over the coming year, we could
> implement additional software optimizations which could potentially reduce
> the size of this overhead by around 30%, and with modified hardware
> (over the next several years) by around 70-80%. We have not decided to do
> this; it's just an illustration of technical feasibility.
> >
> > I think it's unreasonable to expect that standards action alone can be
> successful in the face of such costs. What is needed is a collaborative
> discussion to work towards solutions, on timeframes that are not
> cost-prohibitive.
> >
> > ...Mark
> >
> > PS: For the avoidance of any doubt, I am talking here only about
> delivery of content that is already encrypted at rest on the server. We
> have many mechanisms in place, including HTTPS, to protect sensitive user
> data such as account details, credit card information etc.
> >
> > [1] See https://www.netflix.com/openconnect for an overview, although
> this does not cover more recent designs
> >
> >
>
> --
> Mark Nottingham   http://www.mnot.net/
>
>
>
>

Received on Monday, 27 October 2014 17:25:13 UTC