- From: Mike Belshe <mike@belshe.com>
- Date: Wed, 25 Jun 2014 13:12:28 -0700
- To: Patrick McManus <pmcmanus@mozilla.com>
- Cc: Mark Nottingham <mnot@mnot.net>, K.Morgan@iaea.org, Poul-Henning Kamp <phk@phk.freebsd.dk>, Willy Tarreau <w@1wt.eu>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>, Martin Dürst <duerst@it.aoyama.ac.jp>
- Message-ID: <CABaLYCvFxo0qPG=7o6YcpBDCVETqb6OS3-3_CpQ6KC89u0vniw@mail.gmail.com>
I know that you can make bulk transport more efficient - this was identified before the project even got started. As it turns out, the number of folks who need super-high-efficiency bulk transport of large data over the internet is very low. Of those very few that do need it, many are already moving to non-HTTP protocols or simply have better solutions. For example:

- many back offices use custom protocols designed for their security & global transport requirements rather than any form of HTTP. They aren't stuck with HTTP since it is on their own networks.

- streaming video transfers don't need this, because they self-throttle bandwidth anyway to save costs.

- consumer bulk transfers tend to be best served by protocols like BitTorrent, which are designed for this and solve much more than just the bandwidth problem.

I'm not saying that nobody needs or wants super-high-efficiency bulk transfer over HTTP. Sure, it would be great. But I wouldn't spend a single bit of complexity on it. Just too low pri.

I propose we move forward without attempting to solve bulk transport issues. If it fits into extensions without additional work, great. If not, we'll survive. It's just another low-priority feature, of which there will always be more.

Mike

On Wed, Jun 25, 2014 at 7:38 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:

>
> On Wed, Jun 25, 2014 at 6:56 AM, Mark Nottingham <mnot@mnot.net> wrote:
>
>> On 25 Jun 2014, at 7:35 pm, <K.Morgan@iaea.org> <K.Morgan@iaea.org>
>> wrote:
>>
>> > We've been talking about jumbo frames for a week already and there
>> > hasn't been any resistance from "implementers".
>>
>> So far we’ve had a voluminous discussion among a small set of people who
>> agree on generalities but not a single proposal, and only one of them has
>> an actual implementation on offer.
>>
>> I’d suggest that the reason why the rest of the implementers list hasn’t
>> jumped in is because they’re busy getting draft-13 done, which we said we
>> intended to take to WGLC any day now.
>>
>
> Indeed. If the IETF process can't converge on something we can ship after
> all this time, based on experience with running code both before and during
> the process, there isn't a lot of point in discussing it in this forum.
> Redoing the base framing at this point is fairly close to declaring IETF
> failure at this stage. I would hope we wouldn't do that.
>
> Different POVs will continue to crop up, and the WG has decided to have an
> extension mechanism available to try some of them out among their own
> advocates. (As we know, I'm not a huge fan of that - but there it is.) This
> will get runtime experience with them. Changing framing is certainly
> something that can be done this way. It takes an RTT to take effect, but the
> switchover can certainly take place mid-stream among consensual peers (so
> it need not add an RTT of overall latency).
>
> It would be interesting to get experience with an extension that did this
> and bundled a MAX_CONCURRENT of 1 semantic (or requirement) with it,
> because you can't really expect to mux successfully in such an environment.
> At that point the value of running h2 is kind of questionable - h2
> fundamentally needs mux and priority. And that's why I think this whole
> path is the wrong thing to do.
>
> I'm looking at a SPDY trace right now that sends huge frames because it
> was convenient for the developer to do so - and priority doesn't work at
> all as a consequence. That's real feedback that improved h2 over SPDY.
>
> -P
>
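[Editor's note: a rough back-of-the-envelope sketch of the head-of-line-blocking point Patrick makes about huge frames. The frame sizes and link speed below are assumed purely for illustration; they are not taken from the thread or from the trace he mentions.]

```python
# Illustrative only: assumed frame sizes and an assumed 10 Mbps link.
# While one frame is being serialized onto the connection, no frame from
# any other stream can be interleaved, regardless of its priority.

def blocking_delay_ms(frame_bytes: int, link_mbps: float) -> float:
    """Time a single frame occupies the connection, in milliseconds."""
    return (frame_bytes * 8) / (link_mbps * 1_000_000) * 1000

for frame_bytes in (16 * 1024, 1 * 1024 * 1024, 16 * 1024 * 1024):
    print(f"{frame_bytes // 1024:>6d} KiB frame on a 10 Mbps link: "
          f"{blocking_delay_ms(frame_bytes, 10):>9.1f} ms of head-of-line blocking")
```

Under these assumed numbers, a 16 KiB frame blocks the connection for about 13 ms, a 1 MiB frame for roughly 840 ms, and a 16 MiB frame for over 13 seconds. That is the sense in which priority "doesn't work at all" once a sender emits very large frames: a high-priority request queued behind a bulk frame simply waits it out, which is why a multiplexed protocol keeps frames small.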
Received on Wednesday, 25 June 2014 20:12:59 UTC