
Re: Large Frame Proposal

From: Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com>
Date: Tue, 8 Jul 2014 21:45:06 +0900
Message-ID: <CAPyZ6=KyHzxj+w0xsMGB3zbLzdiAbH3ZE=keeMwawgEsPyXu6g@mail.gmail.com>
To: Michael Sweet <msweet@apple.com>
Cc: Mark Nottingham <mnot@mnot.net>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Tue, Jul 8, 2014 at 9:15 PM, Michael Sweet <msweet@apple.com> wrote:

> Mark,
> On Jul 8, 2014, at 12:10 AM, Mark Nottingham <mnot@mnot.net> wrote:
> > Michael,
> >
> >
> > On 8 Jul 2014, at 12:06 pm, Michael Sweet <msweet@apple.com> wrote:
> >
> >> Mark,
> >>
> >> On Jul 7, 2014, at 8:58 PM, Mark Nottingham <mnot@mnot.net> wrote:
> >>>> ...
> >>>> Or at least one issue is that the fixed max frame size cannot be
> tuned for any reason
> >>>
> >>> That's a design decision that was made a long time ago. To reconsider
> it, I need more than vague concerns that amount to "I don't like it"; I
> need concrete problems that it causes.
> >>
> >> For printing 16k frames/chunks cause a 2x performance drop.  And for
> video streaming you'll likely see similar issues (and stuttering) due to
> the volume of data involved. (see my other post on the subject)
> >
> > Out of curiosity - are your use cases *actually* using HTTP/2, or are
> you assuming some mapping of IPP to HTTP/2?
> I'm applying our experience with HTTP/1.1 chunk sizes to HTTP/2; as I
> mentioned in my original (more detailed) response yesterday, there are no
> HTTP/2-based IPP printers (yet), but that is one of the projects I am
> working on right now... Since IPP is just another POST-based RPC scheme
> (using the IPP binary message encoding vs. XML or JSON), the only real
> change here is to use HTTP/2 in place of HTTP/1.1 as the transport.
> The original message was in response to claims that smaller frame sizes
> will not adversely affect performance; I provided experience with
> running code showing that we saw issues in HTTP/1.1 with smaller chunks
> and that a modest increase in chunk size provided a dramatic performance
> improvement for all printers.  There is no reason to expect a different
> experience in HTTP/2, as the HTTP/2 frame overhead (8 bytes currently) is
> the same as the HTTP/1.1 chunking overhead ("3fff\r\n" + "\r\n" = 8 bytes),
> with similar CPU processing requirements.
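The framing-overhead figures quoted above can be checked with a little arithmetic. A minimal sketch (the function name is mine; the 8-byte HTTP/2 frame header matches the draft under discussion here, while the final RFC 7540 uses 9 bytes):

```python
# Illustrative overhead comparison between HTTP/1.1 chunked encoding and
# HTTP/2 DATA framing, using the figures quoted above. The 8-byte HTTP/2
# frame header is the draft-era value discussed in this thread.

def overhead_pct(payload_size: int, header_bytes: int) -> float:
    """Framing overhead as a percentage of payload bytes."""
    return 100.0 * header_bytes / payload_size

# HTTP/1.1 chunk: "3fff\r\n" + trailing "\r\n" = 8 bytes per 16K-1 chunk.
http11 = overhead_pct(0x3FFF, 8)
# HTTP/2 DATA frame: 8-byte header per 16K-1 payload.
h2_16k = overhead_pct(0x3FFF, 8)
# A 64K-1 payload amortizes the same header over 4x the data.
h2_64k = overhead_pct(0xFFFF, 8)

print(f"HTTP/1.1 16K chunk overhead: {http11:.3f}%")
print(f"HTTP/2   16K frame overhead: {h2_16k:.3f}%")
print(f"HTTP/2   64K frame overhead: {h2_64k:.3f}%")
```

The byte overhead per frame is well under 0.1% either way, which is why the CPU cost of processing each frame boundary, rather than the header bytes themselves, is the more interesting variable.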
I did some tests comparing 16383- vs 65535-byte DATA payload performance in
the nghttp2 implementation on my desktop PC.
The result is that the 64K version performed better by 1%, which I think is
marginal.  Maybe embedded devices like printers are far more sensitive.  For
server-side software and desktop clients, payload size is not so much of a
concern, no?
Has anyone actually measured these settings on their HTTP/2
proxy implementations?
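For anyone wanting to try a comparable measurement, here is a rough sketch of the methodology (this is an illustration of the idea, not the actual nghttp2 test; the function and buffer sizes are mine):

```python
# Hypothetical micro-benchmark sketching a 16383- vs 65535-byte DATA
# payload comparison: slice a fixed buffer into frame-sized payloads and
# time the per-frame bookkeeping. A real test would send the frames over
# a connection; this isolates only the framing-loop cost.
import time

def frame_payload(data: bytes, max_frame: int) -> int:
    """Split data into DATA-frame-sized payloads; return the frame count."""
    frames = 0
    view = memoryview(data)  # zero-copy slicing
    for off in range(0, len(data), max_frame):
        _payload = view[off:off + max_frame]  # one DATA frame's payload
        frames += 1
    return frames

data = bytes(64 * 1024 * 1024)  # 64 MiB of payload

for size in (16383, 65535):
    t0 = time.perf_counter()
    n = frame_payload(data, size)
    dt = time.perf_counter() - t0
    print(f"max_frame={size}: {n} frames in {dt * 1000:.1f} ms")
```

The 16383-byte run produces four times as many frames, so any fixed per-frame cost (header parsing, flow-control accounting, syscalls) is multiplied accordingly; whether that matters depends on how large that fixed cost is on the device in question.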

Best regards,
Tatsuhiro Tsujikawa


> _________________________________________________________
> Michael Sweet, Senior Printing System Engineer, PWG Chair
Received on Tuesday, 8 July 2014 12:45:54 UTC
