RE: Method Mania

Patrick,

When you say, “We have seen it more often than I care to admit…”,

are you referring to failures due to HTTP methods, new frame types, or something else?

 

With respect to TLS failures, are those actual TLS related failures or are they symptoms of the above problem?

 

 

From: Patrick Meenan <patmeenan@gmail.com> 
Sent: Sunday, July 28, 2024 9:35 AM
To: Josh Cohen <joshco@gmail.com>
Cc: Julian Reschke <julian.reschke@gmx.de>; ietf-http-wg@w3.org
Subject: Re: Method Mania

 

Sorry, I didn't mean to imply not to go forward or to necessarily find a way to thread the needle but to explicitly plan for there to be middle-boxes that break and to be deliberate about how to handle that case, even for HTTPS.

 

Failing loudly, in obvious and testable ways is WAY better than failing silently or in random ways. It makes it much easier for IT teams and software vendors to identify the root cause and test fixes. It can also be good to force failures broadly rather than pick just a subset of the population for it to work on (like requiring IPv6 features). There will be some middleboxes with issues but there will be a lot more that don't have issues and it would artificially limit the reach of the feature if you disable it for all middlebox cases.

 

I expect that new frame types for HTTP/2 and HTTP/3 would be more compatible than new methods just because devices are already parsing the existing headers and streams for content and would be more likely to error out when seeing a method they don't understand (but, odds are, new frames will have some number of devices that fail as well).

 

We have seen it more often than I care to admit, with the rollouts for HTTP/2, brotli, post-quantum TLS and now with compression dictionaries (which, you'd think the brotli rollout would have prepared devices for handling content-encoding negotiation in a compatible way, but nope).

 

The devices tend to fail the connections in painful ways, like closing the whole connection when it sees payload content it doesn't like (wiping out a bunch of multiplexed requests on a HTTP/2 connection for example).

 

Here is the site we're currently sending IT admins to when they get reports of TLS failures with the post-quantum rollout (mostly because of broken middleboxes): https://tldr.fail/

 

I'd encourage the team to continue without trying to get too fancy, just expect there will be some ecosystem cleanup needed when it is rolled out and to plan for it.

 

On Sun, Jul 28, 2024 at 1:33 AM Josh Cohen <joshco@gmail.com <mailto:joshco@gmail.com> > wrote:

Same here..  Patrick also said:

"The better question is under what circumstances do we want to allow those devices to "break" and force them to fix the implementations?"

 

Maybe a reasonable interpretation of Patrick's statement is that it's time to be bold.  HTTP/1.1 RFC2616 was published in 1999.  It's the 25 year anniversary. 🥳  In the intervening years, the IETF has done a great job evolving the transport.  That's created the foundation for things we couldn't do back then.   I don't think it was a coincidence that Lisa Dusseault was in the room.  The universe is speaking to us.  Maybe it's time for a WebDAV re-spin..  The web could also have standardized pub/sub.  

 

If we add new functionality that users and devs want, and makes admin life easier, that could be helpful in driving better implementations, and uptake of HTTP/2/3 and masque proxying.

 

 

 

 

 

On Sat, Jul 27, 2024 at 10:07 PM Julian Reschke <julian.reschke@gmx.de <mailto:julian.reschke@gmx.de> > wrote:

On 27.07.2024 16:44, Patrick Meenan wrote:
>
>
> On Sat, Jul 27, 2024 at 4:23 AM Julian Reschke <julian.reschke@gmx.de <mailto:julian.reschke@gmx.de> 
> <mailto:julian.reschke@gmx.de <mailto:julian.reschke@gmx.de> >> wrote:
>
>     On 26.07.2024 00:27, Josh Cohen wrote:
>      > On the httpwg agenda at IETF 120 were a proposal for a new QUERY
>     method
>      > and Braid, which has subscription functionality that overloads
>     the GET
>      > method.
>      >
>      > What I am curious about is if, at this point in the evolution of the
>      > web, it is now safe to add new methods for new functionality.
>     I've been
>      > reading up on HTTP/2/3 and it seems that nowadays, connections are
>      > end-to-end secure and are essentially tunneled through middle boxes,
>      > including HTTP/1.1 proxies. I'm still just wrapping my head around
>      > MASQUE, but it looks like it can handle arbitrary methods.  Similarly
>      > origin servers have evolved to support arbitrary methods.
>
>     It always has been "safe", when https was used.
>
>
> https is not "safe" in practical terms because of middleboxes that
> intercept the connections. It is very common in enterprise deployments
> where they install local trust anchors on the client devices and use
> mitm software to inspect the traffic.
> ...

I meant "safe" wrt deploying new HTTP methods.

When was the last time you encountered a problem?

Best regards, Julian








 

-- 

---
Josh Cohen 

Received on Thursday, 1 August 2024 01:32:17 UTC