- From: Soni L. <fakedme+http@gmail.com>
- Date: Sat, 27 Jul 2024 12:01:26 -0300
- To: ietf-http-wg@w3.org
- Message-ID: <cad0ce47-71fe-4e06-b225-3b11c287a18b@gmail.com>
On 2024-07-27 11:44, Patrick Meenan wrote:
> On Sat, Jul 27, 2024 at 4:23 AM Julian Reschke <julian.reschke@gmx.de> wrote:
>> On 26.07.2024 00:27, Josh Cohen wrote:
>>> On the httpwg agenda at IETF 120 were a proposal for a new QUERY method
>>> and Braid, which has subscription functionality that overloads the GET
>>> method.
>>>
>>> What I am curious about is if, at this point in the evolution of the
>>> web, it is now safe to add new methods for new functionality. I've been
>>> reading up on HTTP/2/3 and it seems that nowadays, connections are
>>> end-to-end secure and are essentially tunneled through middle boxes,
>>> including HTTP/1.1 proxies. I'm still just wrapping my head around
>>> MASQUE, but it looks like it can handle arbitrary methods. Similarly
>>> origin servers have evolved to support arbitrary methods.
>>
>> It always has been "safe", when https was used.
>
> https is not "safe" in practical terms because of middleboxes that
> intercept the connections. It is very common in enterprise deployments
> where they install local trust anchors on the client devices and use
> mitm software to inspect the traffic.
>
> Even HTTP/2 is not necessarily "safe" as we are seeing with the
> deployment of compression dictionaries, as there are enterprise mitm
> devices that inspect HTTP/2 traffic as well (and in our case, reset
> connections when they see a content-encoding they don't understand).
>
> The better question is under what circumstances do we want to allow
> those devices to "break" and force them to fix the implementations?
> HTTP/S (or just H/2/3 if you want to be less intrusive) could be
> considered reasonable because the proxies are under the control of the
> site (CDN) or environment where they are being run (enterprise) and
> there's not random gear spread elsewhere in the Internet that needs to
> be tracked down. The site level is generally easy (don't use the new
> features on a given site if the serving path doesn't support it), but
> cleaning up the enterprise ecosystem can be a nightmare and a much
> bigger case of whack-a-mole.
>
> The alternative (that Chrome uses for HTTP/3) is to only use the new
> feature when the connection is TLS-anchored to a well-known trust root
> (no middleboxes on the client end), but that is allowing some portion
> of the Internet to continue to operate "broken" infrastructure.

Maybe use an IPv6 EH for non-well-known trust roots to claim support? :)
(only half-joking, but it might help improve EH support.)
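[Editor's illustration, not part of the original message: the trust-root gating Patrick describes for Chrome and HTTP/3 amounts to "only opt into the new feature when the verified chain ends in a root the client already recognizes as a public CA." A minimal Go sketch of that check, under the assumption of a made-up knownPublicRootSPKIHashes set and OKToUseNewFeature name, might look like this; it is not Chrome's actual implementation.]

package featuregate

import (
	"crypto/sha256"
	"crypto/tls"
	"encoding/hex"
)

// Assumed: SHA-256 hashes of the SubjectPublicKeyInfo of roots the client
// ships with (standing in for a bundled public root store).
var knownPublicRootSPKIHashes = map[string]bool{
	// "8d02536c...": true,
}

// OKToUseNewFeature reports whether the connection's verified chain is
// anchored to a root the client recognizes as a well-known public CA.
// A chain anchored to anything else (e.g. a locally installed enterprise
// MITM root) falls back to the conservative behaviour.
func OKToUseNewFeature(conn *tls.Conn) bool {
	state := conn.ConnectionState()
	if len(state.VerifiedChains) == 0 {
		return false // unverified connection: do not opt in
	}
	for _, chain := range state.VerifiedChains {
		root := chain[len(chain)-1]
		sum := sha256.Sum256(root.RawSubjectPublicKeyInfo)
		if knownPublicRootSPKIHashes[hex.EncodeToString(sum[:])] {
			return true
		}
	}
	return false
}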
Received on Saturday, 27 July 2024 15:01:36 UTC