From: Roberto Peon <grmocg@gmail.com>
Date: Fri, 18 Apr 2014 00:33:33 -0700
Message-ID: <CAP+FsNerQsY_3UfjaO0=3Jyuu7Ut-H7Hk13ML8SiqnNdwETPmg@mail.gmail.com>
To: "Roy T. Fielding" <fielding@gbiv.com>
Cc: Jeff Pinner <jpinner@twitter.com>, K.Morgan@iaea.org, HTTP Working Group <ietf-http-wg@w3.org>, C.Brunhuber@iaea.org
True, but sometimes we've run into hardware that just can't cope with
certain chunking, and we nonetheless need to support it.
We ran into more of this quite recently as a matter of fact :(

More interesting to me, in any case, is that END_SEGMENT allows for
effective layering.

On Thu, Apr 17, 2014 at 10:35 PM, Roy T. Fielding <fielding@gbiv.com> wrote:

> On Apr 17, 2014, at 9:25 AM, Jeff Pinner wrote:
> > Consider this use case in HTTP/1.1. An API provides a stream of data
> using HTTP/1.1 chunked encoding. Each chunk contains a single record, and
> the length of this record is indicated by the chunk length. In HTTP/1.1 the
> length of these chunks was unconstrained, but when translating into HTTP/2,
> these chunks must be segmented into multiple data frames to fit within the
> frame size limit.
> >
> > If the chunk delineation was meaningful, then END_SEGMENT allows this
> meaning to be preserved.
> FWIW, that won't work in HTTP/1.1 (intermediaries and network libraries
> will consume and coalesce chunks before any application gets to see them).
> ....Roy
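
[The segmentation Jeff describes above can be sketched roughly as follows. This is a toy illustration, not a real HTTP/2 implementation: the END_SEGMENT flag value (0x2) is taken from the draft-era HTTP/2 DATA frame definition and is an assumption here, as is the function name; the 16384-byte default matches the HTTP/2 default maximum frame size. The idea is simply that one HTTP/1.1 chunk (one application record) is split into as many DATA frames as needed, with the flag set only on the final frame so the record boundary survives the translation.]

```python
END_SEGMENT = 0x2  # draft HTTP/2 DATA frame flag (assumed value; removed in later drafts)

def segment_chunk(record: bytes, max_frame_size: int = 16384):
    """Split one HTTP/1.1 chunk (a single record) into HTTP/2 DATA
    frame payloads of at most max_frame_size bytes.

    Returns a list of (payload, flags) pairs; only the last frame
    carries END_SEGMENT, preserving the original chunk boundary.
    """
    frames = []
    for off in range(0, len(record), max_frame_size):
        payload = record[off:off + max_frame_size]
        # This is the last frame iff it reaches the end of the record.
        last = off + max_frame_size >= len(record)
        frames.append((payload, END_SEGMENT if last else 0))
    return frames
```

[A 40,000-byte record would thus become three DATA frames (16384 + 16384 + 7232 bytes), with END_SEGMENT set only on the third. Roy's point below is that this only helps if chunk boundaries were meaningful end to end in the first place, which HTTP/1.1 intermediaries do not guarantee.]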
Received on Friday, 18 April 2014 07:34:09 UTC
