
Re: New Version Notification for draft-kazuho-early-hints-status-code-00.txt

From: Kazuho Oku <kazuhooku@gmail.com>
Date: Wed, 2 Nov 2016 21:11:51 +0900
Message-ID: <CANatvzza+J3eumC1UimNT0qQ8LBOGeA8h=frp-RqDeWAmekbcA@mail.gmail.com>
To: "Roy T. Fielding" <fielding@gbiv.com>
Cc: Cory Benfield <cory@lukasa.co.uk>, Julian Reschke <julian.reschke@gmx.de>, HTTP Working Group <ietf-http-wg@w3.org>
2016-11-02 7:50 GMT+09:00 Roy T. Fielding <fielding@gbiv.com>:
>> On Nov 1, 2016, at 1:17 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
>>
>>
>>> On 1 Nov 2016, at 06:32, Julian Reschke <julian.reschke@gmx.de> wrote:
>>>
>>> On 2016-11-01 02:32, Kazuho Oku wrote:
>>>> Cory, Julian, thank you for looking into the I-D.
>>>>
>>>> Thank you for looking into the existing implementations using Python.
>>>> Your research makes it evident that some kind of negotiation is
>>>> mandatory if we are going to use 103 on the public Internet.
>>>
>>> Having to negotiate it makes me sad.
>>
>> I’m right there with you Julian. The 1XX response category gets to be another marker pointing us to the lesson the IETF has been learning for the past decade or so: extension points on a specification that no-one uses rust over time and become unusable.
>
> No.  What I've learned is that every feature in every protocol is poorly
> implemented by some poor soul who thinks they deserve special consideration
> for their inability to interoperate with the future.  I have, in the past,
> consistently refused such considerations.
>
>> In this case, I think the 1XX problem is more oversight than anything else. The problems in all these cases are tractable, and can be fairly easily fixed. It’s just that someone needs to spend that time.
>
> They are easily fixed.  Force the broken implementations to die in a miserable
> way and teach people not to write crappy code.
>
> There is absolutely no reason to negotiate 1xx codes.  If some application fails
> because their developers can't read, it is not our responsibility to work around them.
> If we do anyway, the entire Internet goes to crap (just like it has for HTML).
> At most, we use User-Agent or Server to flag non-compliant implementations and
> work around only specific versions of known-to-be-deployed breakage.

Thank you for your comments.

It is encouraging to see you argue that efforts to improve the web
should not be obstructed by broken implementations.

OTOH, for 103 Early Hints, I think it might be beneficial for the
client to send a request header indicating that it will recognize the
headers contained in the informational response.

For example, a client that recognizes Link: rel=preload in a 103
response could send an "Accept-EH: Link" header to notify the server
that its operation would be optimized by the use of 103.

For clients that do not recognize any of the headers sent in a 103,
there'd be no reason to send an informational response; sending one
would just be a waste of bandwidth. So a server can simply omit the
103 response for clients that do not send an Accept-EH header.
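The server-side decision could be sketched as follows (a minimal
illustration only; "Accept-EH" and the callback signature are
assumptions for the sake of the example, not part of any existing
API):

```python
def maybe_send_early_hints(request_headers, send_informational):
    """Send a 103 Early Hints response only when the client has
    opted in via the hypothetical Accept-EH request header."""
    accept_eh = request_headers.get("accept-eh", "")
    # Parse Accept-EH as a comma-separated list of header names.
    tokens = {t.strip().lower() for t in accept_eh.split(",") if t.strip()}
    if "link" in tokens:
        # Client understands Link in informational responses.
        send_informational(103, [("link", "</style.css>; rel=preload; as=style")])
        return True
    # Client did not opt in; skip the 103 to save bandwidth.
    return False
```

A client without the header would never receive the interim response,
which is the negotiation behavior described above.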

I also think that we should have a warning that sending 103 to
arbitrary HTTP clients may cause interoperability issues, but I now
agree with Julian that it shouldn't be normative (i.e. it should not
use the terms defined in RFC 2119).

> ....Roy



-- 
Kazuho Oku
Received on Wednesday, 2 November 2016 12:12:25 UTC
