- From: Cory Benfield <cory@lukasa.co.uk>
- Date: Thu, 3 Nov 2016 09:08:02 +0000
- To: "Roy T. Fielding" <fielding@gbiv.com>
- Cc: Julian Reschke <julian.reschke@gmx.de>, Kazuho Oku <kazuhooku@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
> On 2 Nov 2016, at 22:50, Roy T. Fielding <fielding@gbiv.com> wrote:
>
> Sorry, I was talking about the past. You made a general comment about the nature of protocol extensibility mechanisms based on the evidence of existing implementations having bugs. But existing implementations have bugs for every aspect of the protocol, for the same reason: some developers don't read specifications. The only distinction here is how long it takes for someone to get around to testing a use case which triggers the bug and results in a bug report which can then be fixed by some developer.

Sure. But my comment was about having *well-oiled* extension mechanisms. Put another way, if you have exactly one place to put an extension, then that is where all extensions will be put. This will greatly increase the speed with which the bug-finding use-case will get tested, and the bug will get fixed.

This was my point about 103: while we have had the 1XX extension point since 1996, we haven’t *used* it since 1999. It is hardly a surprise, then, that a number of implementations have bugs in their support of this little-used aspect of the protocol.
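To be concrete about what “tolerate” means here, the following is a rough sketch, in Python and in nobody’s actual API, of the read loop a client needs: be prepared to consume any number of interim responses before the final one. (`read_headers`, `read_final_response` and `sock_file` are names made up purely for illustration.)

```python
# Illustrative sketch only, not any shipping library's code.
# `sock_file` is assumed to be a binary file-like object wrapping the
# connection, e.g. the result of sock.makefile("rb").

def read_headers(sock_file):
    """Read one status line plus header block; return (status_code, header_lines)."""
    status_line = sock_file.readline().decode("iso-8859-1").rstrip("\r\n")
    status_code = int(status_line.split(" ", 2)[1])  # "HTTP/1.1 103 Early Hints" -> 103
    headers = []
    while True:
        line = sock_file.readline().decode("iso-8859-1").rstrip("\r\n")
        if not line:  # blank line ends the header block
            break
        headers.append(line)
    return status_code, headers

def read_final_response(sock_file):
    """Skip any number of 1XX interim responses and return the final one."""
    while True:
        status_code, headers = read_headers(sock_file)
        if status_code == 101 or not 100 <= status_code <= 199:
            # 101 switches protocols, so treat it as final here;
            # anything >= 200 is the real response.
            return status_code, headers
        # 100, 102, 103, ...: an interim response. A client may act on it
        # (e.g. preload the Link targets in a 103) but must keep reading.
```

The buggy clients, in effect, stop at the first header block they see, so a 103 gets handed back to the caller as if it were the final response and the 200 that follows it is never read.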
This is why I have been getting up in arms about the reaction to my statement. I didn’t say we can’t ship 103 with no negotiation, or that implementations shouldn’t fix their 103 handling. Of course we can, and of course they should. Hell, I’m still reporting bugs and patching them for 103 across the Python ecosystem, and I’ve reported bugs in non-Python implementations as well. What I was getting at is that we have a problem regarding the shipping of those bugfixes, which I’ll address more below.

>> That fixes nothing. While I’m sure it’s emotionally satisfying to metaphorically slap developers around the face and scream “OBEY. THE. RFC” at them, the very existence of RFCs 7230+ is a good reminder that developers almost universally do not pay attention to the RFCs: they pay attention to the protocol *as encountered on the web*. That protocol has had for its entire lifetime only two 1XX status codes: 100 and 101. Protocol implementations almost universally handle those status codes correctly: just no others in the range. And that hasn’t mattered because no-one is *using* those other codes.
>
> Please don't lecture me on the nature of HTTP deployment. You are aware of maybe 5% of deployed implementations. I get regular questions from a much broader scope, because my name is on the specs. Your premise is mistaken, and the lesson you took from it is simply wrong.

Fair enough. I have no doubt you’re better informed on this topic than I am. All I know is that from where I am standing I see a vast sea of non-compliance and have never seen a non-100 or 101 status code, in any form. I drew my position from the evidence I had on hand.

> In my experience, most developers (especially open source developers like me) are happy to fix bugs in their software, particularly when they are backed by specification text that is now 21 years old (and still counting).

Sure. And, as noted above, I didn’t say no-one was going to fix those bugs. I said that those fixes solve the problem we have in the future, but they don’t solve the problem we have today. Again, more on this below.

> I don't care how widely they are deployed. Not a single client on your list existed when 1xx was invented. Not a single one will still exist (in any meaningful sense) more than ten years from now. They are broken in terms of the protocol usage, today, regardless of how Kazuho chooses to negotiate the feature. Adding another 10 or so bytes to every request is not going to make them any less broken.

To look at your age argument for a moment, yes they did. Python’s httplib was on my list, and I just downloaded Python 1.1, which was released in October 1994: httplib is present in that distribution as an HTTP/1.0 client implementation. (Not a very good one, I should note, but present nonetheless, and built off of draft-ietf-iiir-http-00 no less.) Python’s httplib almost certainly *will* exist ten years from now because it is shipped with RHEL 7, which does not leave active support until 2024. So not only will httplib exist in 2024, but it will exist in the exact form it does today: no 103 fix present.

> Let them surface the 103. Force the bug to be fixed. Encourage people to upgrade their software. That is far less expensive than sending extra bytes on every request for the next 40 years.

But regardless of all of the above, this is the crux of the issue. Yes, we can choose to say “if you don’t tolerate 103 then bad luck, fix your software”. And either 103 will fail to be adopted, or people will fix their software. The problem isn’t “will people fix their software”: of course they will. The problem is how long it will take, and what effect that will have on the usage of the 103 status code.

Today I am working on patches for all the non-httplib implementations in the Python ecosystem. In the case of Twisted, that patch will be released no earlier than Twisted 17.1. That’s moderately problematic, as every released distribution of Red Hat Enterprise Linux today ships Twisted 8.2 or Twisted 12.1. Optimistically, RHEL 8 *might* get Twisted 17.1, but it also might not. If it does, RHEL 7 will stop being supported in 2024, which means that the last supported version of Twisted that doesn’t handle 103 will die in 2024.

The same problem exists for all other deployed versions of Python or Python software: if it is shipped with a distribution like Red Hat then it’s going to take a decade to get that version out of support. For Python 2.7’s httplib, which the Python team will continue to support until 2020, there is no path to landing a patch for 103 support or even 1XX tolerance. Every shipped version of Python 2.7 will forever be unable to tolerate 103 responses.

What I am getting at here is that “encourage people to upgrade their software” is not a winner. I experience this fight every day as someone who ships open source software, and so far my hit rate is 0.000. People are unwilling to upgrade their software, even in the face of quite substantial feature or functionality deficits.

Anyway, I’m not really interested in this discussion beyond this point. The HTTP WG can do whatever it likes within the bounds of what is already specified, and it can choose to decide that these implementations have made their own bed and now must lie in it. But ultimately the burden of this breakage doesn’t fall on the person who wrote the code: it falls on the user who finds themselves unable or unwilling to upgrade. And it seems to me that we’d be punishing the wrong person.

Cory
Received on Thursday, 3 November 2016 09:08:37 UTC