- From: Seph Gentle <me@josephg.com>
- Date: Mon, 25 Jan 2021 23:00:25 +1100
- To: "Greg Wilkins" <gregw@webtide.com>
- Cc: "HTTP Working Group" <ietf-http-wg@w3.org>
- Message-Id: <a515db30-c26b-42f0-b95f-1cf4d50553cf@www.fastmail.com>
Great; good to know; much appreciated.

I knew HTTP/2 had a different chunking method, but I was curious about what's appropriate for HTTP/1.1. I think the other danger would be gzip encoding. Because content is gzipped before it's chunked, preserving chunk boundaries through gzip encoding would probably be tricky to implement on top of a lot of current HTTP server APIs. Anyway, we'll stick to implementing our own chunking behaviour on top of HTTP.

Re: SSE, server-sent events are quite well supported in the wild, but the browser's API for SSE has some serious limitations. The biggest issue is that it doesn't support passing authentication cookies. In an issue discussing the problem, some Chrome engineers said they support not fixing the issue because SSE can simply be reimplemented, better, on top of JS's fetch() API anyway (a rough sketch of that approach follows after the quoted thread below). And if that's true, from the perspective of the server and the browser there's nothing special about server-sent events as a protocol, and no reason to use SSE in particular over something more specialized for the task.

-Seph

On Mon, Jan 25, 2021, at 9:23 PM, Greg Wilkins wrote:
> I also do not think this is a workable solution. Various middlemen implemented with Jetty do not preserve chunk boundaries. Buffering and processing may add extra chunks, amalgamate chunks or even replace them with a content-length. Also chunking is not applicable to HTTP >= 2.
>
> There is server sent events... but not sure how well that is supported in the wild. We implement it, but have very little evidence of it being used.
>
> Websocket or long polling are your best bet for sending multiple events server to client.
>
> cheers
>
> On Mon, 25 Jan 2021 at 11:06, Seph Gentle <me@josephg.com> wrote:
>> Hi everyone!
>>
>> I’m working with Mike and others to figure out & clean up the protocol for braid. We want to add real-time subscriptions to HTTP.
>>
>> For this we need to send a stream of messages (values and patches) in response to a single HTTP request. The simplest way to encode that would be to lean on transfer-encoding: chunked and wrap each patch in exactly one HTTP “chunk”, so we don’t need to do our own message framing.
>>
>> I want to lean on some collective wisdom here. Is this a bad idea? Does anyone know if middleman proxy servers ever move chunk boundaries around? Is that valid according to the protocol? Is that something we should worry about?
>>
>> (Server-sent events do their own message framing on top of the transfer encoding. Is there a good reason for that?)
>>
>> -Seph
>
> --
> Greg Wilkins <gregw@webtide.com> CTO http://webtide.com
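For reference, a minimal sketch of the fetch()-based approach mentioned above. The endpoint URL and message handling are placeholders (nothing braid-specific), cookies are attached explicitly via the fetch credentials option, and the stream is framed with SSE-style blank-line-delimited `data:` records, so chunk boundaries don't matter; reconnection and error handling (which EventSource gives you for free) are left out.

```typescript
// Sketch only: subscribe to a streamed HTTP response and parse
// SSE-style frames ("data:" lines terminated by a blank line).
async function subscribe(url: string, onMessage: (data: string) => void) {
  const res = await fetch(url, {
    headers: { Accept: 'text/event-stream' },
    credentials: 'include', // attach auth cookies explicitly
  });
  if (!res.ok || !res.body) throw new Error(`subscribe failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // A blank line ends one message; transfer-encoding chunking is irrelevant here.
    let idx: number;
    while ((idx = buffer.indexOf('\n\n')) !== -1) {
      const frame = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 2);
      const data = frame
        .split('\n')
        .filter(line => line.startsWith('data:'))
        .map(line => line.slice(5).trimStart())
        .join('\n');
      if (data) onMessage(data);
    }
  }
}
```

Something like `subscribe('/changes', patch => apply(patch))` (both names hypothetical) would then behave much like an EventSource, but with full control over headers and credentials.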
Received on Monday, 25 January 2021 12:01:03 UTC