Re: Method Mania

I was using "safe" in the narrow sense of methods. Patrick's point may
still apply if middleboxes only support pre-defined methods.

To avoid mixing connection semantics, a subscribe request (regardless of
HTTP method) could return subscription information that includes how the
client receives notifications. For example, it could return a WebSocket
address that the client connects to in order to receive a stream of
updates. In the case of HTTP/3, a server-initiated stream could be set up.
This is how the SUBSCRIBE method in GENA/UPnP works: it just sets up the
subscription and notification paths, but doesn't handle the notifications
themselves.
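
As a rough sketch (the host, path, and JSON field names below are all
hypothetical, not taken from any spec), the setup exchange could look
something like this in Python:

    import http.client
    import json

    # Hypothetical SUBSCRIBE-style request: it only sets up the
    # subscription; the response tells the client where to listen.
    conn = http.client.HTTPSConnection("example.com")
    conn.request("SUBSCRIBE", "/resource",
                 headers={"Accept": "application/json"})
    resp = conn.getresponse()
    sub = json.loads(resp.read())

    # e.g. {"subscription_id": "abc123",
    #       "notify": {"type": "websocket",
    #                  "url": "wss://example.com/notify/abc123"}}
    print(sub["notify"]["url"])

The client would then connect to the returned address on a separate
connection (or stream) to receive the notifications, keeping them out of
the request/response that created the subscription.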

If the GET (or any preexisting) method is used, perhaps a content type
could be listed in the Accept header, such as
application/subscription+json. If the server receives that, it could
respond with a Braid-defined JSON message that describes the relevant
notification stream.
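
For example (again just a sketch; the media type handling and JSON shape
are made up for illustration):

    import http.client
    import json

    # GET with an Accept hint; an unaware server can ignore it and
    # return the normal representation.
    conn = http.client.HTTPSConnection("example.com")
    conn.request("GET", "/resource",
                 headers={"Accept": "application/subscription+json"})
    resp = conn.getresponse()

    ctype = resp.getheader("Content-Type", "")
    if ctype.startswith("application/subscription+json"):
        sub = json.loads(resp.read())
        # e.g. {"resource": "/resource",
        #       "stream": "wss://example.com/updates/42"}
        print("receive updates via:", sub["stream"])
    else:
        body = resp.read()  # plain representation, no subscription set up

This also degrades gracefully: a server that doesn't recognize the media
type just answers the GET normally.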



On Sat, Jul 27, 2024 at 8:04 AM Soni L. <fakedme+http@gmail.com> wrote:

>
>
> On 2024-07-27 11:44, Patrick Meenan wrote:
>
>
>
> On Sat, Jul 27, 2024 at 4:23 AM Julian Reschke <julian.reschke@gmx.de>
> wrote:
>
>> On 26.07.2024 00:27, Josh Cohen wrote:
>> > On the httpwg agenda at IETF 120 were a proposal for a new QUERY method
>> > and Braid, which has subscription functionality that overloads the GET
>> > method.
>> >
>> > What I am curious about is if, at this point in the evolution of the
>> > web, it is now safe to add new methods for new functionality. I've been
>> > reading up on HTTP/2/3 and it seems that nowadays, connections are
>> > end-to-end secure and are essentially tunneled through middle boxes,
>> > including HTTP/1.1 proxies. I'm still just wrapping my head around
>> > MASQUE, but it looks like it can handle arbitrary methods.  Similarly
>> > origin servers have evolved to support arbitrary methods.
>>
>> It always has been "safe", when https was used.
>
>
> https is not "safe" in practical terms because of middleboxes that
> intercept the connections. It is very common in enterprise deployments
> where they install local trust anchors on the client devices and use mitm
> software to inspect the traffic.
>
> Even HTTP/2 is not necessarily "safe" as we are seeing with the deployment
> of compression dictionaries as there are enterprise mitm devices that
> inspect HTTP/2 traffic as well (and in our case, reset connections when
> they see a content-encoding they don't understand).
>
> The better question is under what circumstances do we want to allow those
> devices to "break" and force them to fix the implementations? HTTP/S (or
> just H/2/3 if you want to be less intrusive) could be considered reasonable
> because the proxies are under the control of the site (CDN) or environment
> where they are being run (enterprise) and there's not random gear spread
> elsewhere in the Internet that needs to be tracked down.  The site-level is
> generally easy (don't use the new features on a given site if the serving
> path doesn't support it) but cleaning up the enterprise ecosystem can be a
> nightmare and a much bigger case of whack-a-mole.
>
> The alternative (that Chrome uses for HTTP/3) is to only use the new
> feature when the connection is TLS-anchored to a well-known trust root (no
> middleboxes on the client end) but that is allowing some portion of the
> Internet to continue to operate "broken" infrastructure.
>
> Maybe use an IPv6 EH for non-well-known trust roots to claim support? :)
>
> (only half-joking, but it might help improve EH support.)
>


-- 

---
Josh Cohen
