- From: Kazuho Oku <kazuhooku@gmail.com>
- Date: Tue, 20 Sep 2016 10:34:23 +0900
- To: Martin Thomson <martin.thomson@gmail.com>
- Cc: Tom Bergan <tombergan@chromium.org>, HTTP Working Group <ietf-http-wg@w3.org>
Hi,

2016-09-16 9:03 GMT+09:00 Martin Thomson <martin.thomson@gmail.com>:
> Hey Tom,
>
> This is a good bug. I think that we built the exclusive thing without
> considering that possibility. The problem is that there is no knowing
> what streams the sender of priority frame knew about when they send
> the PRIORITY frame.
>
> The same problem exists for PRIORITY sent by the server, but that's
> far less of a problem because we don't see anyone using that (or if
> they do, we don't agree on what that means).
>
> A simple "fix" here is to prohibit PRIORITY+exclusive from affecting
> streams that haven't been "mentioned" by the endpoint sending
> PRIORITY. That is, each endpoint maintains a watermark (for both odd
> and even stream numbers) that increases every time they receive a
> frame from their peer and any stream number below that is assumed to
> be known to the other side. That means tracking mentions though,
> which is a little cumbersome.

I would like to point out that having that kind of mechanism would also be beneficial for server-side performance.

RFC 7540 Section 5.3.4 suggests: "an endpoint SHOULD retain stream prioritization state for a period after streams become closed. The longer state is retained, the lower the chance that streams are assigned incorrect or default priority values."

This suggestion is vague and hard to implement. Ideally, a server needs to retain the prioritization state of closed streams for at least one RTT, so that it will not receive a PRIORITY frame from the client that is relative to the state of a closed stream it has already forgotten. However, it is hard to determine the correct RTT.

Therefore, a cautious server implementation would retain the prioritization state of the most-recently-used streams, up to MAX_CONCURRENT_STREAMS entries (a rough sketch of such a bounded cache follows below). By doing so, it eliminates the risk of receiving a PRIORITY frame relative to the state of a closed stream it no longer remembers, since a client would never have more than MAX_CONCURRENT_STREAMS streams open at once.

However, retaining the prioritization state of as many as MAX_CONCURRENT_STREAMS streams may become a performance bottleneck as you try to distribute bandwidth between the streams more precisely.

That said, there would be other ways than a watermark to solve the issue. For example, we could introduce a frame with which the client tells the server that it has seen a stream being opened, being closed, or having its priority changed.

For the push case, a client would send such a frame when it receives a PUSH_PROMISE. The server can then use the ordering between those frames and a PRIORITY frame to determine the client's view of the prioritization tree at the moment it sent the PRIORITY frame (also sketched below).

For the case described in this mail, a client would send such a frame when it sees a stream being closed. The server can use that frame to decide when to discard the prioritization state of closed streams.
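To make the bounded-retention idea concrete, here is a minimal sketch in Go of keeping closed-stream prioritization state in an LRU bounded by MAX_CONCURRENT_STREAMS. The package, type, and field names (priorityNode, closedLRU, and so on) are made up for illustration and are not taken from any existing implementation.

```go
package h2prio

import "container/list"

// priorityNode is a hypothetical record of a stream's place in the
// dependency tree of RFC 7540 Section 5.3: its parent and weight.
type priorityNode struct {
	streamID uint32
	parent   uint32
	weight   uint8
}

// closedLRU retains the prioritization state of the most recently closed
// streams, bounded by the MAX_CONCURRENT_STREAMS value the server
// advertised, so that a PRIORITY frame that depends on a recently closed
// stream can still be interpreted.
type closedLRU struct {
	max     int                      // our SETTINGS_MAX_CONCURRENT_STREAMS
	order   *list.List               // front = most recently touched
	entries map[uint32]*list.Element // streamID -> element holding *priorityNode
}

func newClosedLRU(maxConcurrentStreams int) *closedLRU {
	return &closedLRU{
		max:     maxConcurrentStreams,
		order:   list.New(),
		entries: make(map[uint32]*list.Element),
	}
}

// retain is called when a stream closes; once more than max closed streams
// are remembered, the least recently touched entry is evicted.
func (c *closedLRU) retain(n *priorityNode) {
	if e, ok := c.entries[n.streamID]; ok {
		e.Value = n
		c.order.MoveToFront(e)
		return
	}
	c.entries[n.streamID] = c.order.PushFront(n)
	if c.order.Len() > c.max {
		oldest := c.order.Back()
		delete(c.entries, oldest.Value.(*priorityNode).streamID)
		c.order.Remove(oldest)
	}
}

// lookup is called when a PRIORITY frame refers to a stream that is no
// longer open; it reports whether the state is still retained.
func (c *closedLRU) lookup(streamID uint32) (*priorityNode, bool) {
	e, ok := c.entries[streamID]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(e)
	return e.Value.(*priorityNode), true
}
```

Whether an LRU keyed by stream ID is the right structure is an implementation detail; the point is only that the bound can be MAX_CONCURRENT_STREAMS rather than a guessed RTT.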
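And here is a rough sketch of how a server might use such acknowledgements when applying an exclusive PRIORITY. The "seen" frame and all names below are hypothetical (RFC 7540 defines no such frame), and the same knownToClient hook could equally be driven by the watermark Martin describes above; the idea is simply that only streams the client is known to have seen get reparented, which avoids the race in Tom's example.

```go
package h2prio

// tree is a hypothetical server-side view of the RFC 7540 dependency tree,
// plus a record of which pushed streams the client has acknowledged seeing.
type tree struct {
	parent       map[uint32]uint32   // stream ID -> parent stream ID (0 = root)
	children     map[uint32][]uint32 // stream ID -> child stream IDs
	seenByClient map[uint32]bool     // pushed streams the client has acknowledged
}

func newTree() *tree {
	return &tree{
		parent:       make(map[uint32]uint32),
		children:     make(map[uint32][]uint32),
		seenByClient: make(map[uint32]bool),
	}
}

// onSeenStream handles the hypothetical acknowledgement frame.
func (t *tree) onSeenStream(id uint32) { t.seenByClient[id] = true }

// knownToClient reports whether the client can already have this stream in
// its tree; client-initiated (odd-numbered) streams always qualify.
func (t *tree) knownToClient(id uint32) bool {
	return id%2 == 1 || t.seenByClient[id]
}

// onExclusivePriority applies "stream id depends exclusively on parent".
// RFC 7540 Section 5.3.1 would move all of parent's children under id; here,
// pushed streams the client has not yet acknowledged stay in place, because
// the client cannot have intended to reprioritize streams it had not seen.
func (t *tree) onExclusivePriority(id, parent uint32) {
	// Detach id from its previous parent, if it had one.
	if old, ok := t.parent[id]; ok {
		t.children[old] = removeID(t.children[old], id)
	}
	var kept []uint32
	for _, child := range t.children[parent] {
		if child == id {
			continue
		}
		if t.knownToClient(child) {
			// The client had seen this stream, so it intended it to move.
			t.parent[child] = id
			t.children[id] = append(t.children[id], child)
		} else {
			// A pushed stream the client has not acknowledged stays put.
			kept = append(kept, child)
		}
	}
	t.children[parent] = append(kept, id)
	t.parent[id] = parent
}

func removeID(ids []uint32, id uint32) []uint32 {
	out := ids[:0]
	for _, v := range ids {
		if v != id {
			out = append(out, v)
		}
	}
	return out
}
```

Because frames on an HTTP/2 connection are delivered in order, any acknowledgement that arrives before a PRIORITY frame describes something the client had already seen when it sent that PRIORITY frame.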
> --Martin
>
> On 15 September 2016 at 16:31, Tom Bergan <tombergan@chromium.org> wrote:
>> I believe the following example has a race:
>>
>> 1. Server receives request A.
>> 2. Server sends PUSH_PROMISE frames for B, C, D.
>> 3. Client sends PRIORITY to make B the exclusive parent of C
>> 4. Server receives that PRIORITY frame.
>>
>> After step 2, the server's local priority tree is A -> {B,C,D}, due to the
>> default priority rules in RFC 7540 Section 5.3.5. At step 4, the server
>> cannot know if the client has received the PUSH_PROMISE for D yet. This
>> makes it ambiguous whether the client intended the tree to be A -> B ->
>> {C,D} or A -> {B->C, D}. I believe this race can happen any time a
>> PUSH_PROMISE and exclusive PRIORITY can pass each other on the wire. I think
>> the race is impossible for non-exclusive PRIORITYs.
>>
>> This is the simplest example of the race, but it's admittedly strange that
>> the client would send that specific PRIORITY frame. Here's slightly more
>> complex, but less strange example:
>>
>> 1. Client requests A then B, where B's HEADERS frame makes A the exclusive
>> parent of B
>> 2. Server receives request A
>> 3. Server sends PUSH_PROMISE frames for C, D.
>> 4. Server receives request B
>> 5. Client receives PUSH_PROMISE frames for C, D
>>
>> At each step, the priority trees are updated like this:
>>
>> 1. Client's tree is A -> B
>> 2. Server's tree is A
>> 3. Server's tree is A -> {C,D}
>> 4. Server's tree is A -> B -> {C,D}
>> 5. Client's tree is A -> {B,C,D}
>>
>> If steps 3 and 4 are swapped, the client and server will finish with the
>> same priority tree. As-is, they finish with different priority trees.
>>
>> Does that sound right? If so, is this worth mentioning in some kind of
>> errata or other note? Apologies if this was brought up previously -- I
>> searched through the mailing list archives and did not see a mention.
>>
>> -Tom

--
Kazuho Oku
Received on Tuesday, 20 September 2016 01:34:56 UTC