- From: Mark Nottingham <mnot@mnot.net>
- Date: Mon, 26 May 2014 15:10:38 +1000
- To: Greg Wilkins <gregw@intalio.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
Hi Greg,

On 24 May 2014, at 7:43 pm, Greg Wilkins <gregw@intalio.com> wrote:

> Mark et al,
>
> Having only recently re-engaged with this process, the jetty team represents both fresh eyes looking at the draft and a wealth of experience in implementing HTTP, Websocket and SPDY.

Thanks for the feedback — fresh eyes are indeed very important and useful.

> I do not see a draft that is anywhere near to being ready for LC.
>
> At the very least, in the WG there currently exists a level of confusion on fundamental matters that should not result from a clear specification. But my feeling is that many current threads actually represent a lingering level of dissatisfaction.
>
> Perhaps these issues have already been put to consensus calls, so the resistance has now been reduced to the occasional "+1", but the fact that new eyes can read the draft and still raise such concerns means that the protocol is too complex, the draft is insufficiently clear, or both!

Both complexity and clarity of specification have been and remain concerns of mine, but so far we’ve had good interoperability between the many implementations on record. That includes no small number of implementations from people whose native language is not English; I take that as an indication we’re doing pretty well.

Having said that, there’s always room for improvement. Making things better means either changing the protocol itself, or improving how it’s written down. Changing the protocol is still possible, but we’ve heard again and again that people are becoming more resistant to changes as we go on, so getting consensus on a significant change is going to take a clear proposal and agreement that it improves things considerably. On the other hand, improving the draft is relatively easy to do, provided we have good proposals for text.

We’re clearly not going to make everyone satisfied with this specification; the best we can do is make everyone more-or-less equally dissatisfied. Right now, I’m hearing dissatisfaction from you and others about spec complexity at the same time I’m hearing dissatisfaction from others about schedule slips...

> Important issues for me include:
>
> • The state machine in section 5.1 is essentially a fantasy that describes an idealised protocol that the draft does not represent. In my work I very frequently look up the TCP/IP state machine diagram, which is a great reference, and rarely do I need to go beyond it to understand any issue I have. However, if developers with HTTP/2 problems try to use the state machine in 5.1 as a reference, they are only going to be more confused. For example, the ES transitions are not atomic, and the closed state is described as "the terminal state" but is then followed by 5 paragraphs of dense, complex text describing exceptions and how some frames received in the closed state have to be handled! I have posted what I think is the real state machine (http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/att-0720/http2state.txt) and it is a much more complex beastie.

Thanks for making a concrete proposal here. I’ve created <https://github.com/http2/http2-spec/issues/484> to track this.
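For anyone who wants a quick mental model while reading that thread: the states section 5.1 defines are idle, reserved (local and remote), open, half-closed (local and remote) and closed. A deliberately simplified sketch of the straightforward send-side transitions (illustrative only, the names are mine, and it ignores exactly the exceptional cases Greg is pointing at) might look something like this in Java:

    // Illustrative sketch only: the section 5.1 states with just the
    // "happy path" transitions for the sending side. The exceptional
    // cases (frames arriving after RST_STREAM, the PUSH_PROMISE
    // reservation rules, and so on) are exactly what this leaves out.
    enum StreamState {
        IDLE, RESERVED_LOCAL, RESERVED_REMOTE, OPEN,
        HALF_CLOSED_LOCAL, HALF_CLOSED_REMOTE, CLOSED
    }

    final class Stream {
        private StreamState state = StreamState.IDLE;

        // We send HEADERS, possibly carrying END_STREAM (the "ES" flag).
        void sendHeaders(boolean endStream) {
            switch (state) {
                case IDLE:
                    state = endStream ? StreamState.HALF_CLOSED_LOCAL
                                      : StreamState.OPEN;
                    break;
                case RESERVED_LOCAL:
                    state = StreamState.HALF_CLOSED_REMOTE;
                    break;
                case OPEN:
                    if (endStream) state = StreamState.HALF_CLOSED_LOCAL;
                    break;
                case HALF_CLOSED_REMOTE:
                    if (endStream) state = StreamState.CLOSED;
                    break;
                default:
                    throw new IllegalStateException("cannot send HEADERS in " + state);
            }
        }

        // RST_STREAM sent or received: terminal, as far as this sketch goes.
        void reset() {
            state = StreamState.CLOSED;
        }
    }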
> • There are also ongoing objections raised to hpack, both in its complexity and its impact on the rest of the protocol. Hpack has been designed to be streaming, but since common fields will only be emitted at the end, receivers are forced to buffer the entire header anyway.

I’d characterise the group’s current stance as acknowledging that HPACK is more complex than we’d like, in that there isn’t an off-the-shelf algorithm that we can use (as was the case with gzip). We’ve discussed this pretty extensively a number of times, and so far the decision has been to stay with it.

> There are significant concerns about whether hpack can be correctly implemented and/or tested, and there is a high probability of many incorrect implementations that may expose the protocol to security and/or DoS issues.

HPACK has been implemented interoperably by many. It’s also been reviewed for security issues, and so far the risk has been felt to be acceptable. If you have a concrete issue regarding HPACK security or its operation, please bring it up; making sweeping, predictive statements doesn’t really move us forward.

> • Users of the protocol are able to send data as headers, which is unconstrained by size, flow control or segmentation. Client-side and server-side applications are bound to collude to exploit this to obtain an unfair share of any shared HTTP/2 connections/infrastructure, which will eventually break the fundamental paradigm around which the protocol is based and force the use of multiple connections again.

Can you explain this a bit more, perhaps with an example?
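For anyone else reading along, the background to this point is that flow control in the current draft applies only to DATA frames; HEADERS and CONTINUATION payloads are not charged against any window. Roughly, and purely as an illustration (the class and names below are mine, not the draft's):

    // Illustrative sketch only: window accounting in which only DATA
    // frame payloads are charged against the advertised window, while
    // HEADERS and CONTINUATION pass through unconstrained -- which is
    // the behaviour being discussed above.
    final class FlowControlWindow {
        private static final int DATA_FRAME_TYPE = 0x0;
        private int available = 65_535;  // the draft's initial window size

        // Returns false if the frame would exceed the remaining window.
        boolean tryConsume(int frameType, int payloadLength) {
            if (frameType != DATA_FRAME_TYPE) {
                return true;  // not subject to flow control
            }
            if (payloadLength > available) {
                return false;
            }
            available -= payloadLength;
            return true;
        }

        // WINDOW_UPDATE received: credit comes back.
        void release(int increment) {
            available += increment;
        }
    }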
> • There is no clear layering of the protocol. SPDY has a well-defined framing layer that can be separately implemented and tested. HTTP is only a semantic layer on top of that framing layer. HTTP/2 should do the same, and it is a MASSIVE mistake verging on hubris to give up this fundamental good protocol practice in the name of speculative efficiency gains.

Greg, I don’t think slinging around phrases like “MASSIVE mistake verging on hubris” is helpful. Many have pointed out just the opposite — that trying to create a neatly layered architecture in this case is boiling an ocean, a la BEEP and WS-*. Again, if you have a proposal for layering, we can consider that.

> I respect the IETF process, and if all these issues have been raised before, discussed and put to consensus, then I'll put myself on mute and get on with implementing the protocol. But the process does sometimes get it wrong, and we really don't want to have an HTTP/2.1 any time soon.
>
> So, for such an important protocol, rather than rush forward to LC and an RFC by the end of the year, perhaps pause, step back for a while, solicit some wider review, listen to some new voices and be prepared to re-evaluate whether all the complexity is really justified?

I’d go further and say that the process often gets it more wrong than right, and that predicting the success of a standard is notoriously difficult. IME, holding a specification up in the process does not necessarily make it any better, and can often make it worse. The overwhelming preference expressed in the WG so far has been to work to a tight schedule.

HTTP/3 has already been discussed a bit, because we acknowledge that we may not get everything right in HTTP/2, and there are some things we haven’t been able to do yet. As long as the negotiation mechanisms work out OK, that should be fine. In other words, while there was a ~15 year gap between HTTP/1.1 and HTTP/2, it’s very likely that the next revision will come sooner. While we don’t want to needlessly rev the protocol, we also don’t want to turn this into a five+ year effort to design the perfect protocol — if we get it wrong, we can learn from those mistakes and correct them.

There are (at least) two months before we’re talking about handing these documents to the IESG. That’s time that you can spend helping us to clarify the specifications and raise concrete technical issues. As I said, the schedule listed in my mail was an ideal — it’s very possible we’ll come in later than that, especially if substantive issues come in.

Cheers,

--
Mark Nottingham   http://www.mnot.net/
Received on Monday, 26 May 2014 05:11:08 UTC