- From: Mark Nottingham <mnot@mnot.net>
- Date: Fri, 11 Oct 2024 12:30:21 +1100
- To: Michael Toomim <toomim@gmail.com>
- Cc: Watson Ladd <watsonbladd@gmail.com>, ietf-http-wg@w3.org
Hi Michael,

> On 11 Oct 2024, at 11:40 am, Michael Toomim <toomim@gmail.com> wrote:
>
> Some issues are solved with H2 and H3. For instance, isn't the problem with long-lived connections solved by H3's 0-RTT or 1-RTT Connection Migration? Doesn't this let a mobile device drop a connection ... and pick it up later, at a different IP address?

They might help, but there are also more basic problems -- e.g., scaling to a very large number of held connections, both for infrastructure (e.g., NAT) and for servers. We've come a long way in addressing these scaling issues, but if you want every browser holding a connection open to every web site it visits, that's likely still a challenge.

> As for browser support -- I don't think that's needed yet. SSE started with a polyfill; browsers added native support later. Braid-HTTP has a full-featured JavaScript polyfill: https://www.npmjs.com/package/braid-http. Developers can get devtools support via a browser extension: https://github.com/braid-org/braid-chrome. It adds a "Braid" tab to view and inspect resource history. This all works today! We're building apps.
>
> Browser support will help with two things:
> • Performance (a native HTTP parser will outperform our JavaScript polyfill)
> • H2 framing (our polyfill library can only implement H1-style framing, even over H2 and H3 connections)
>
> But perhaps I'm missing something, because you wrote that continuing without browser support is "not a great path to start with." It seems like a great path to me! Am I missing something?

Assuming the constraint of not changing browsers has a significant impact on the solution space -- you're likely to make tradeoffs and concessions that you wouldn't otherwise make without that constraint. In other words, such solutions are often very "hacky" -- they aren't good for the long term. If we're going to get browser support, we should design for that up front. If we're not, I'd suggest we evaluate how well the constraints that imposes fit within the architecture.

> You also wrote "I don't think we're going to see all changes on Web sites pushed over these mechanisms, for a variety of reasons." What are those reasons? I've been working on this for a while, and anticipate a world where most websites *do* push updates over these mechanisms, so I'd love to learn where you see things differently.

Switching the Web's architecture from pull-based to push-based isn't something we should do trivially; pull-based has a *lot* of advantages, especially in terms of scaling. Holding a connection open to every web site you have an open tab for creates issues for server and network infrastructure (see above), as well as for mobile battery life.

Because clients aren't always connected, the protocol will have to specify how to recover state reasonably. This could be very simple (upon reconnection, send the latest state if it's changed), or it could be more complex (keep n versions and send a diff; a toy sketch of both strategies follows this message). There are a lot of tradeoffs to discuss here, but in general, push-based methods risk creating more network traffic, CPU load, etc.

I do think there's a place for push-based mechanisms on the Web, but wholesale replacement of the pull-based caching model is a pretty big leap. Augmentation is another matter; so are selective use cases like data feeds and notifications. Even then, as we saw with WebPush, there are lots of tradeoffs, so understanding the specific use cases is key.

Cheers,

--
Mark Nottingham   https://www.mnot.net/
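To make the recovery tradeoff above concrete, here is a minimal sketch of both strategies. It is illustrative only: all type and function names are hypothetical, and this is not an API from the Braid-HTTP draft or the braid-http polyfill.

```typescript
// Illustrative sketch of the two reconnection-recovery strategies
// discussed above. All names here are hypothetical; this is not an
// API from the Braid-HTTP draft or the braid-http polyfill.

type Version = string;

interface Update {
  kind: "snapshot" | "patch";
  version: Version;
  body: string; // full state for "snapshot", a diff for "patch"
}

interface Resource {
  latest: Version;
  state: string;
  // Last n versions retained for diff-based recovery; older history
  // is discarded, forcing a snapshot for very stale clients.
  history: Map<Version, string>;
}

// Strategy 1: simplest possible recovery -- if the client's version
// is stale, resend the whole current state.
function recoverSimple(res: Resource, clientVersion: Version): Update | null {
  if (clientVersion === res.latest) return null; // nothing changed
  return { kind: "snapshot", version: res.latest, body: res.state };
}

// Strategy 2: keep n versions and send a diff when the client's
// version is still in history; fall back to a snapshot when it isn't.
function recoverWithDiff(res: Resource, clientVersion: Version): Update | null {
  if (clientVersion === res.latest) return null;
  const old = res.history.get(clientVersion);
  if (old === undefined) {
    // Client is too far behind our retained history.
    return { kind: "snapshot", version: res.latest, body: res.state };
  }
  return { kind: "patch", version: res.latest, body: diff(old, res.state) };
}

// Stand-in for a real diff algorithm (e.g. a text or JSON patch).
function diff(from: string, to: string): string {
  return JSON.stringify({ fromLength: from.length, to });
}
```

Even in this toy, the tradeoff is visible: the diff path saves bytes on the wire at the cost of server memory for history and CPU for diffing, and it still needs the snapshot path as a fallback for clients that reconnect after their version has aged out.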
Received on Friday, 11 October 2024 01:30:30 UTC