- From: Adrien de Croy <adrien@qbik.com>
- Date: Wed, 17 Aug 2016 22:51:18 +0000
- To: "Joe Touch" <touch@isi.edu>, "Willy Tarreau" <w@1wt.eu>
- Cc: "Mark Nottingham" <mnot@mnot.net>, "tcpm@ietf.org" <tcpm@ietf.org>, "HTTP Working Group" <ietf-http-wg@w3.org>, "Patrick McManus" <pmcmanus@mozilla.com>, "Daniel Stenberg" <daniel@haxx.se>
------ Original Message ------
From: "Joe Touch" <touch@isi.edu>

>They want something different for a variety of reasons - the same kind
>of airtight logic by which TBL developed HTTP instead of using FTP (he
>said that you'd only typically need one file from a location, so why
>open 2 connections? now we're stuck trying to mux control and data
>rather than having a proper solution that already existed at the time -
>it took nearly a decade for HTTP servers to catch up to the performance
>of FTP).
>

Whilst I've been finding this discussion very informative and interesting, I have to raise an objection on this point.

FTP was never going to be suitable for the web, and a very simple RTT analysis shows that.

Apart from the initial 3-way TCP handshake and close, which are the same for both, with HTTP you have a request and a response, whereas FTP requires you to wait for the server welcome, log in, negotiate another port, and set up a data connection in addition to retrieving the file. So it's at minimum 5 round trips more (a rough tally is sketched below).

Then add in all the firewall issues caused by transmitting data-connection endpoint information over the control connection, and it's no surprise FTP is not favoured for downloads.

So FTP was never going to be a "proper solution" for the web without a complete re-architecture.

Adrien
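P.S. A rough back-of-the-envelope tally of where the extra round trips come from. This is a sketch only, assuming one round trip per command/response exchange and a passive-mode transfer; exact counts vary with server greetings and login behaviour, and the TCP open/close is left out since it is common to both protocols.

    # Illustrative sketch: count command/response round trips for a
    # single-file download over HTTP/1.0 vs. FTP (passive mode).
    # Assumes one RTT per exchange; TCP setup/teardown is excluded.

    http_exchanges = [
        "GET request -> response carrying the file",        # 1 RTT
    ]

    ftp_exchanges = [
        "wait for 220 server welcome banner",                # 1 RTT
        "USER -> 331",                                       # 1 RTT
        "PASS -> 230 (log in)",                              # 1 RTT
        "PASV -> 227 (negotiate another port)",              # 1 RTT
        "open the data connection",                          # 1 RTT
        "RETR -> 150, file arrives on the data connection",  # 1 RTT
    ]

    extra = len(ftp_exchanges) - len(http_exchanges)
    print(f"HTTP: {len(http_exchanges)} RTT, FTP: {len(ftp_exchanges)} RTT, "
          f"so at least {extra} more for FTP")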
Received on Wednesday, 17 August 2016 22:51:50 UTC