- From: Patrick McManus <pmcmanus@mozilla.com>
- Date: Mon, 12 Aug 2013 20:28:00 -0400
- To: James M Snell <jasnell@gmail.com>
- Cc: Roberto Peon <grmocg@gmail.com>, Martin Thomson <martin.thomson@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
- Message-ID: <CAOdDvNo_ZRHw8wk1O18C7P++Zu8QOu_Pv+nfr3DFOs6O2KPPbQ@mail.gmail.com>
headers that don't fit in the base frame, especially with compression, are
an outside edge case. It's an important edge case to have an answer for, so
that we can maintain HTTP/1 gateways, and such cases do indeed exist, but
from a performance standpoint it's definitely an uninteresting edge case.
It doesn't make sense to me to create an exception in order to optimize
that.

If we thought it was more than an edge case, then we would need to rethink
the multiplexing rules for them, rather than carving out exceptions for
large frames, because massive headers will interfere with prioritization.
But in the end it's not worth it, because the CONTINUATION frame is really
just there for correctness.

-P

On Mon, Aug 12, 2013 at 8:17 PM, James M Snell <jasnell@gmail.com> wrote:
> Hmm... I'm not so sure that "having smaller sizes means our
> subroutines run more often" is really all that great of a
> justification for a protocol design. Also, I'm not sure I follow what
> you're saying with regard to DATA frames, given that those don't have
> anything to do with CONTINUATION (unless I'm missing something).
>
> Right now, if I have 49,149 octets worth of header data, I'm required
> to write out three frames, with 24 bytes of overhead; with this change
> I would write out two frames with 16 bytes of overhead. Sure, it's
> only an 8-byte difference, but I still doubt "squashing bugs" is really
> a significant consideration here.
>
> On Mon, Aug 12, 2013 at 5:08 PM, Roberto Peon <grmocg@gmail.com> wrote:
> > The secondary benefit of having a limit small enough to deal with
> > serialization delays for DATA frames is that the continuation stuff
> > will be regularly enough exercised for HEADERS that we'll have
> > confidence that it works.
> > This is a significant benefit -- getting bugs out of something like
> > this is not a small thing in terms of importance. The overhead here
> > is pretty insignificant...
> > -=R
> >
> >
> > On Mon, Aug 12, 2013 at 4:57 PM, James M Snell <jasnell@gmail.com> wrote:
> >>
> >> On Mon, Aug 12, 2013 at 4:32 PM, Martin Thomson
> >> <martin.thomson@gmail.com> wrote:
> >> > On 13 August 2013 00:11, James M Snell <jasnell@gmail.com> wrote:
> >> >> If the END_HEADERS flag is not
> >> >> set, we ought to allow frame sizes up to the maximum allowed (65,535)
> >> >> to eliminate this additional overhead.
> >> >
> >> > Interesting observation. Would you make the same allowance for
> >> > HEADERS and PUSH_PROMISE too? Those are actually more likely to need
> >> > this.
> >> >
> >>
> >> No, I would keep HEADERS and PUSH_PROMISE as is. I considered that,
> >> but we really don't want to encourage the use of big headers, as you
> >> say. A HEADERS or PUSH_PROMISE for HTTP ought to be limited to 16k and
> >> ought to require CONTINUATION frames for anything above that limit,
> >> but the CONTINUATION frames ought to be allowed to extend to 65k to
> >> avoid the additional encoding overhead... this balances the concerns
> >> and ought not to cause any problems (the code we have for parsing
> >> continuation frames would actually get to omit an if statement...
> >> ;-)...)
> >>
> >> - James
> >>
> >> > That said, I'm not certain that I want this; it seems like another
> >> > set of if() statements that could be avoided. After all, we
> >> > determined that the framing overhead was tolerable. And it's not like
> >> > we want to *encourage* the use of big headers.
> >> >
> >
> >
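[Editorial note: a minimal sketch of the framing arithmetic in the quoted exchange, assuming the draft-era values of an 8-octet frame header and a 16,383-octet per-frame payload limit, with 65,535 octets as the proposed CONTINUATION payload limit. The helper names are illustrative and not taken from any draft or implementation.]

```python
# Sketch of the overhead comparison for a large header block, under the
# assumed draft-era HTTP/2 constants described above.

FRAME_HEADER_OCTETS = 8        # assumed per-frame header size in the 2013 drafts
MAX_FRAME_PAYLOAD = 16_383     # assumed 14-bit per-frame payload limit
MAX_CONTINUATION_PAYLOAD = 65_535  # proposed larger limit for CONTINUATION frames

def overhead_current(header_block_len: int) -> tuple[int, int]:
    """Frames and total header overhead when every frame is capped at 16,383 octets."""
    frames = -(-header_block_len // MAX_FRAME_PAYLOAD)  # ceiling division
    return frames, frames * FRAME_HEADER_OCTETS

def overhead_proposed(header_block_len: int) -> tuple[int, int]:
    """Frames and total header overhead when the leading HEADERS frame stays at
    16,383 octets but each CONTINUATION frame may carry up to 65,535 octets."""
    if header_block_len <= MAX_FRAME_PAYLOAD:
        return 1, FRAME_HEADER_OCTETS
    remaining = header_block_len - MAX_FRAME_PAYLOAD
    continuations = -(-remaining // MAX_CONTINUATION_PAYLOAD)
    frames = 1 + continuations
    return frames, frames * FRAME_HEADER_OCTETS

print(overhead_current(49_149))   # (3, 24): three frames, 24 octets of overhead
print(overhead_proposed(49_149))  # (2, 16): two frames, 16 octets of overhead
```

This reproduces the 24-byte versus 16-byte figures from the message above: the proposal saves one frame header per extra CONTINUATION frame avoided, which is why the thread treats the saving as marginal.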
Received on Tuesday, 13 August 2013 00:28:27 UTC