
Re: New Version Notification for draft-kazuho-early-hints-status-code-00.txt

From: Roy T. Fielding <fielding@gbiv.com>
Date: Wed, 2 Nov 2016 15:50:02 -0700
Cc: Julian Reschke <julian.reschke@gmx.de>, Kazuho Oku <kazuhooku@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <08467A5B-78F7-4993-B3E3-C9A24F16D02E@gbiv.com>
To: Cory Benfield <cory@lukasa.co.uk>
> On Nov 2, 2016, at 2:59 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
> 
> 
>> On 1 Nov 2016, at 22:50, Roy T. Fielding <fielding@gbiv.com> wrote:
>> 
>>> On Nov 1, 2016, at 1:17 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
>>> 
>>> I’m right there with you Julian. The 1XX response category gets to be another marker pointing us to the lesson the IETF has been learning for the past decade or so: extension points on a specification that no-one uses rust over time and become unusable.
>> 
>> No.  What I've learned is that every feature in every protocol is poorly
>> implemented by some poor soul who thinks they deserve special consideration
>> for their inability to interoperate with the future.  I have, in the past,
>> consistently refused such considerations.
> 
> I don’t understand where you think anyone who wrote a broken implementation is asking for special consideration. The only HTTP implementation I fully wrote is an HTTP/2 implementation that can handle 1XX codes per the specification. I certainly don’t need the special consideration for libraries I maintain, because I’ll be shipping patches for them.

Sorry, I was talking about the past.  You made a general comment about the
nature of protocol extensibility mechanisms based on the evidence of existing
implementations having bugs.  But existing implementations have bugs for every
aspect of the protocol, for the same reason: some developers don't read
specifications.  The only distinction here is how long it takes for someone
to get around to testing a use case which triggers the bug and results in
a bug report which can then be fixed by some developer.

> I am simply informing the IETF that the vast majority of widely deployed HTTP client libraries today will fail in the face of the 103 status code. Since I looked at Python I have gone back and looked at Ruby, Javascript, and Go, and the same remains true there. All of these languages have implementations that surface the 103 to the user and swallow the 200. So far curl and wget are the only widely-deployed non-browser implementations I have found that handle the 103 the way that RFC 7230 says they should. While we should praise those implementers, we should acknowledge that curl and wget are so widely deployed that they get bug reports for all the wacky edge cases in the way that the smaller implementations do not. (Yes, I called 1XX a wacky edge case, let’s all move on from it.)

Thanks for the information.  FTR, 100 and 101 were defined in the original
specification.  102 was an extension.  There have been several attempts to
define a 103 for various reasons, but they were deemed close enough to 102
that they were not worth pursuing further.  There are also many uses of 1xx
status codes within non-standard systems for which no registration was
necessary, mostly for the sake of hinting at optional features or local
status.

>>> In this case, I think the 1XX problem is more oversight than anything else. The problems in all these cases are tractable, and can be fairly easily fixed. It’s just that someone needs to spend that time.
>> 
>> They are easily fixed.  Force the broken implementations to die in a miserable
>> way and teach people not to write crappy code.
> 
> That fixes nothing. While I’m sure it’s emotionally satisfying to metaphorically slap developers around the face and scream “OBEY. THE. RFC” at them, the very existence of RFCs 7230+ is a good reminder that developers almost universally do not pay attention to the RFCs: they pay attention to the protocol *as encountered on the web*. That protocol has had for its entire lifetime only two 1XX status codes: 100 and 101. Protocol implementations almost universally handle those status codes correctly: just no others in the range. And that hasn’t mattered because no-one is *using* those other codes.

Please don't lecture me on the nature of HTTP deployment.  You are aware of
maybe 5% of deployed implementations.  I get regular questions from a much
broader scope, because my name is on the specs. Your premise is mistaken,
and the lesson you took from it is simply wrong.

In my experience, most developers (especially open source developers like me)
are happy to fix bugs in their software, particularly when they are backed
by specification text that is now 21 years old (and still counting).

I don't care how widely they are deployed.  Not a single client on your
list existed when 1xx was invented.  Not a single one will still exist
(in any meaningful sense) more than ten years from now.  They are broken
in terms of the protocol usage, today, regardless of how Kazuho chooses
to negotiate the feature.  Adding another 10 or so bytes to every request
is not going to make them any less broken.

Let them surface the 103.  Force the bug to be fixed.  Encourage people to
upgrade their software.  That is far less expensive than sending extra bytes
on every request for the next 40 years.
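[Editor's note: the RFC 7230 behavior being invoked here is that a client must treat any 1xx response as interim and keep reading the stream until a final (non-1xx) response arrives. A minimal, illustrative Python sketch of that rule follows; the parser and all names in it are hypothetical, not taken from any particular library, and it deliberately ignores header semantics beyond finding the blank line.]

```python
import io

def read_status_code(stream):
    """Read one CRLF-terminated status line, e.g. 'HTTP/1.1 103 Early Hints',
    and return its numeric status code."""
    line = stream.readline().decode("ascii")
    return int(line.split(" ", 2)[1])

def skip_headers(stream):
    """Consume header lines up to and including the blank line that ends them."""
    while stream.readline() not in (b"\r\n", b"\n", b""):
        pass

def parse_final_status(stream):
    """Return the final status code, skipping any number of interim 1xx
    responses (100 Continue, 103 Early Hints, ...) per RFC 7230."""
    while True:
        code = read_status_code(stream)
        skip_headers(stream)
        if not 100 <= code < 200:  # 1xx responses are interim, not final
            return code

# A 103 followed by the real response on the same connection:
raw = io.BytesIO(
    b"HTTP/1.1 103 Early Hints\r\n"
    b"Link: </style.css>; rel=preload\r\n"
    b"\r\n"
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Length: 0\r\n"
    b"\r\n"
)
print(parse_final_status(raw))  # -> 200
```

A client that instead returns the first status line it sees would surface the 103 and "swallow the 200" exactly as described earlier in this thread.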

....Roy
Received on Wednesday, 2 November 2016 22:50:32 UTC
