- From: Piers O'Hanlon <p.ohanlon@gmail.com>
- Date: Mon, 25 Nov 2019 12:18:48 +0000
- To: Patrick McManus <mcmanus@ducksong.com>
- Cc: Lucas Pardue <lucaspardue.24.7@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
Hi Patrick,

Thanks for your feedback. I'm assuming most cross-origin issues should be dealt with by CORS, but were you concerned with the suggestion that the header could be contained in an OPTIONS response? I can see that since such a response is not subject to CORS - such as in a CORS pre-flight request - we could drop that approach and just keep to using HEAD requests as the mechanism to obtain the Transport-Info. Let me know if that doesn't address your issue or if you have any other concerns.

Piers

On Fri, 22 Nov 2019 at 21:41, Patrick McManus <mcmanus@ducksong.com> wrote:
>
> To the extent that this leaks information across origins to js it's probably a problem too.
>
> On Fri, Nov 22, 2019 at 8:58 PM Piers O'Hanlon <p.ohanlon@gmail.com> wrote:
>>
>> Hi Lucas,
>>
>> Thanks for bringing that up - as you say Nginx's default for
>> http2_max_requests is 1000, although it can be changed (and appears to
>> be done so for RPC applications). I'm not sure how many other server
>> implementations do this?
>>
>> Firstly, we can detect this through the use of the dstport parameter,
>> as a new TCP connection would use a different port - although it could
>> potentially lead to a temporary loss of information, as discussed below.
>>
>> Secondly, the effect of this would only be apparent each time the
>> request count is exceeded - so, say, every 1000 requests - when a
>> switch-over would occur. What happens then depends on the duration of
>> the data responses of interest compared with how often one wants to
>> perform parallel HEAD/OPTIONS requests for Transport-Info. For the case
>> where the frequency of parallel requests is about the same, I think it
>> shouldn't matter much: on most server systems these days the congestion
>> control parameters are cached in the kernel, so a subsequent connection
>> to the same destination would be preloaded with the cached metrics and
>> the Transport-Info header would contain these. In the case where there's
>> a series of long-running responses it might be an issue, as after the
>> switch-over point there would be two parallel TCP connections to the
>> same destination, but they would exist separately for a longer period,
>> so the metrics obtained via subsequent HEAD/OPTIONS requests could
>> differ, since the cwnd can be reduced for low-volume flows. These would
>> generally not be used since the dstport would not match, so there could
>> be a loss of information in this case.
>>
>> Cheers
>>
>> Piers
>>
>> On Fri, 22 Nov 2019 at 10:51, Lucas Pardue <lucaspardue.24.7@gmail.com> wrote:
>> >
>> > Hi Piers,
>> >
>> > On Fri, 22 Nov 2019, 17:58 Piers O'Hanlon, <p.ohanlon@gmail.com> wrote:
>> >>
>> >> Hi all,
>> >>
>> >> - “It only provides information per response which isn’t very often”
>> >> We mention in the draft that with H2+ one can send an arbitrary number of requests (using OPTIONS/HEAD) to obtain more measurement responses per unit time.
>> >
>> > I have an observation, though I'm not sure it belongs in the document.
>> > Implementations such as nginx have a soft maximum number of HTTP/2
>> > requests per connection before closing it (the default was 1000 last I
>> > checked). If you're trying to sample the transport info too frequently
>> > you may end up blowing the connection up. This seems unfortunate because
>> > the client use case is designed around making smarter transport-related
>> > decisions.
>> >
>> > Cheers
>> > Lucas
>> >
>> >> We would welcome any more feedback by email and/or Github issues
>> >>
>> >> Thanks,
>> >>
>> >> Piers O'Hanlon
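As an aside, a minimal client-side sketch of the HEAD-based sampling and dstport comparison discussed above might look like the following. It assumes the server exposes Transport-Info to scripts via Access-Control-Expose-Headers and that the header carries dstport and cwnd parameters; the key=value parsing is purely illustrative rather than the draft's actual syntax.

```typescript
// Hypothetical Transport-Info sampler - a sketch, not the draft's defined API.
interface TransportSample {
  dstport?: number;
  cwnd?: number;
  raw: string;
}

// Illustrative parse of "key=value" pairs separated by ';' or ','.
function parseTransportInfo(value: string): TransportSample {
  const sample: TransportSample = { raw: value };
  for (const part of value.split(/[;,]/)) {
    const [key, val] = part.split("=").map((s) => s.trim());
    if (key === "dstport") sample.dstport = Number(val);
    if (key === "cwnd") sample.cwnd = Number(val);
  }
  return sample;
}

let lastDstPort: number | undefined;

// Use HEAD so no response body is transferred per sample.
async function sampleTransportInfo(url: string): Promise<TransportSample | null> {
  const res = await fetch(url, { method: "HEAD" });
  const value = res.headers.get("transport-info");
  if (value === null) return null;

  const sample = parseTransportInfo(value);

  // A change in dstport suggests the server has moved to a new underlying
  // TCP connection (e.g. after nginx's http2_max_requests limit), so the
  // metrics should not be compared directly with earlier samples.
  if (lastDstPort !== undefined && sample.dstport !== lastDstPort) {
    console.log("New underlying connection detected; resetting baseline");
  }
  lastDstPort = sample.dstport;
  return sample;
}

// Example: sample roughly once per second.
setInterval(() => {
  sampleTransportInfo("https://example.com/resource").catch(console.error);
}, 1000);
```

The dstport check here is just one way a client might notice the switch-over case described above; how the client then treats the stale metrics is left open.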
Received on Monday, 25 November 2019 12:19:08 UTC