
Re: HTTP/2 Upgrade with content?

From: Greg Wilkins <gregw@intalio.com>
Date: Fri, 13 Mar 2015 12:04:55 +1100
Message-ID: <CAH_y2NFV=Z7hqbtWTdiePRwUnhhRjiP8R_Ua7kmpZEkwXtxgEA@mail.gmail.com>
To: Mike Bishop <Michael.Bishop@microsoft.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>

Mike,

the requirement of 3.2 is different from a normal HTTP/1.1 request.
Normally there is no requirement for a server to buffer the entire request
body before commencing handling of a normal HTTP/1.1 request.  Typically
the application is called and a streaming API is used to provide the content
to the request handler.  Note that this is not a question of blocking versus
not blocking - modern HTTP/1.1 servers are perfectly capable of consuming
content via asynchronous IO.

Thus from the server's point of view, the memory commitment required to
service a single connection is the buffer size that it uses to read the
request content.  Now applications might then aggregate those buffers and
attempt to hold the entire content in memory, but that is an application
responsibility and the server cannot do much about it.
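
To make the point concrete, here is a minimal sketch (not any real server's
API - the names and buffer size are illustrative) of the streaming model
described above, where the server's per-connection memory commitment is one
read buffer, however large the entity is:

```python
# Hypothetical sketch: the server feeds the body to the application one
# fixed-size buffer at a time, so memory use per connection is bounded by
# BUFFER_SIZE regardless of the entity's total size.
BUFFER_SIZE = 8192  # illustrative per-connection read buffer

def handle_body(read_chunk, on_content):
    """Stream the request body to the handler; never hold it whole."""
    total = 0
    while True:
        chunk = read_chunk(BUFFER_SIZE)
        if not chunk:          # end of body
            break
        total += len(chunk)
        on_content(chunk)      # the application sees a stream, not a blob
    return total
```

If the application chooses to aggregate the chunks it receives, that memory
cost is its own - which is exactly the division of responsibility described
above.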

This requirement is different.  It allows an arbitrarily sized body
to be sent in the HTTP/1.1 request that must be held by the server
so that it can be fed to the HTTP/2 handling of the request, which takes
place after the upgrade and after the 101 has been sent.
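
A sketch of that commitment, with the obvious defensive cap (the limit and
function names here are illustrative assumptions, not anything from the
spec): the server must accumulate the whole entity before it can replay it
to the HTTP/2 layer, and all it can do is give up past some self-imposed
bound:

```python
# Hypothetical sketch: buffer the entire h2c upgrade-request body so it
# can be handed to the HTTP/2 layer after the 101.  The cap below is a
# server-chosen limit, not anything mandated by section 3.2.
MAX_UPGRADE_BODY = 16 * 1024  # illustrative server-chosen limit

def buffer_upgrade_body(read_chunk, limit=MAX_UPGRADE_BODY):
    """Accumulate the whole entity, or give up once it exceeds the limit."""
    parts, size = [], 0
    while True:
        chunk = read_chunk(8192)
        if not chunk:
            return b"".join(parts)  # complete body: safe to upgrade
        size += len(chunk)
        if size > limit:
            return None             # too large: fall back to plain HTTP/1.1
        parts.append(chunk)
```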

I guess technically the new HTTP/2 connection could work in a mode where it
receives content as HTTP/1.1 but sends the response as HTTP/2.... but that
is a) stupidly complex for a mechanism that browsers say they will not
implement anyway; and b) just asking for a deadlock if the response is large
and requires flow control frames to be received that cannot be sent until
the entire body is consumed.

But your point about upgrades failing for all sorts of reasons is a valid
one.  So I agree that 413 is not the right response and that simply
ignoring the upgrade on requests that have too-large bodies (or perhaps any
body at all, for simplicity) is probably the right thing to do.
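
That conclusion reduces to a simple server-side check, sketched below (the
function name and the size limit are my own illustrative assumptions; the
header names are real).  If the check fails, the server just does not send
the 101 and services the request as ordinary HTTP/1.1:

```python
# Hypothetical sketch: decide whether to honour "Upgrade: h2c".  When the
# request carries a body the server cannot cheaply buffer, silently ignore
# the upgrade token - the client must cope with ignored Upgrades anyway.
# Assumes header names have already been lower-cased by the parser.
MAX_UPGRADE_BODY = 16 * 1024  # illustrative server-chosen limit

def should_honor_h2c_upgrade(headers):
    """Return True only for h2c upgrades whose body size is bounded and small."""
    tokens = [t.strip().lower() for t in headers.get("upgrade", "").split(",")]
    if "h2c" not in tokens:
        return False
    if "chunked" in headers.get("transfer-encoding", "").lower():
        return False  # unknown length: cannot bound the buffer
    length = int(headers.get("content-length", "0") or "0")
    return length <= MAX_UPGRADE_BODY
```

A body-less GET upgrades; a large POST is simply served as HTTP/1.1, with
no confusing 413.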

cheers

On 13 March 2015 at 10:47, Mike Bishop <Michael.Bishop@microsoft.com> wrote:

>  A server is blocked on the connection, just as it would be in HTTP/1.1.
> The situation for the server is no worse than if the client weren’t
> offering Upgrade.  Servers will enforce the same limits on large bodies
> they’re not willing to handle.  Issuing a 413, telling the client that the
> reason their request isn’t being serviced is the size of the body,
> is confusing if the actual way to make it be serviced is to omit the
> Upgrade header.  That’s a bizarre use of 413.
>
>
>
> My inclination would be for servers to ignore the Upgrade header if they
> don’t want to be blocked.  Clients will always need to handle Upgrade
> headers being ignored, since they can legitimately be stripped by
> intermediaries.  Clients will never understand all the reasons why some
> Upgrades work and some don’t – they have to handle both cases cleanly.
>
>
>
> *From:* Greg Wilkins [mailto:gregw@intalio.com]
> *Sent:* Thursday, March 12, 2015 4:10 PM
> *To:* HTTP Working Group
> *Subject:* HTTP/2 Upgrade with content?
>
>
>
>
>
> Section 3.2 describes the upgrade to HTTP/2 and it allows support for
> upgrade requests with bodies:
>
>    Requests that contain an entity body MUST be sent in their entirety
>
>    before the client can send HTTP/2 frames. This means that a large
>
>    request entity can block the use of the connection until it is
>
>    completely sent.
>
>  Servers will need to protect themselves from DoS attacks via such
> requests, as buffering arbitrarily large content in its entirety is a
> commitment that servers cannot generally give.
>
> Thus servers will have to limit the size of the entities they are prepared
> to hold in this situation (and the size of a single normal request's
> buffers is probably the memory commitment they are prepared to make for
> any given connection).
>
>
>
> My question is, what should a server do if it receives an otherwise valid
> upgrade request that it could handle, but with content that exceeds this
> memory limit?  Should it respond with a 413 REQUEST_ENTITY_TOO_LARGE, or
> should it just ignore the upgrade and let the request be handled via
> HTTP/1.1 (which can stream the content into the request handler, making it
> somebody else's problem to limit memory usage)?
>
> My problem with ignoring the upgrade is that it is an arbitrary limit and
> it will be hard for clients to tell why some upgrades work and others do
> not.
>
> Alternatively, my problem with 413 is that some servers might wish to avoid
> the whole upgrade-with-content path and thus send a 413 for any upgrade
> with content, which may break some clients that could otherwise proceed
> with HTTP/1.1.
>
> thoughts?
>
>
>
> PS. in hindsight, I would rather that we had not allowed upgrades with
> content and instead told clients to upgrade with an OPTIONS request prior
> to any PUT/POST request.... gallop... gallop... gallop.... SLAM!
>
>
>
> --
>
> Greg Wilkins <gregw@intalio.com>  @  Webtide - *an Intalio subsidiary*
> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
> scales
> http://www.webtide.com  advice and support for jetty and cometd.
>



-- 
Greg Wilkins <gregw@intalio.com>  @  Webtide - *an Intalio subsidiary*
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Friday, 13 March 2015 01:05:25 UTC
