
Re: Multi-GET, extreme compression?

From: Phillip Hallam-Baker <hallam@gmail.com>
Date: Sun, 17 Feb 2013 21:41:49 -0500
Message-ID: <CAMm+LwgksRiyMJz-rXrbZiuTzEYKSWNqHdBwW5mtWwUY+annLg@mail.gmail.com>
To: Helge Hess <helge.hess@opengroupware.org>
Cc: William Chan (陈智昌) <willchan@chromium.org>, James M Snell <jasnell@gmail.com>, Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Cyrus Daboo <cyrus@daboo.name>
On Sun, Feb 17, 2013 at 8:38 PM, Helge Hess <helge.hess@opengroupware.org> wrote:

> On Feb 17, 2013, at 5:30 PM, William Chan (陈智昌) <willchan@chromium.org>
> wrote:
> > I'm having difficulty grokking this proposal. Can you describe more
> clearly how this would work with the web platform? Specifically, what kind
> of markup in a HTML document should cause a browser to use a MGET for a
> resource set as you describe it.
> ? e.g. <img>, <script>, CSS links etc.


> > Also, how does this work for HTTP/1.X? Since we'll be living in a
> transitional world for a while, I'd like to understand how this allows for
> HTTP/1.X semantics backwards compatibility.
> An old server would return a 405 when the BATCH comes in, then the client
> needs to switch to performing the operations individually.

An old server would never understand the HTTP/2 stream format required to
deliver the multi-responses.

So the only case where HTTP/1.1 compatibility would be an issue is an
HTTP/2-to-HTTP/1.1 gateway, and there the gateway would have to either
refuse the MGET or break it down into separate requests.
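To make the second option concrete, here is a minimal sketch of the fan-out a gateway would perform. Note that MGET itself, and the idea that its payload is a list of resource paths, are assumptions taken from this thread's proposal, not an existing HTTP method:

```python
# Hypothetical sketch: an HTTP/2-to-HTTP/1.1 gateway cannot forward a
# single MGET upstream, so it expands the request into one HTTP/1.1 GET
# per resource. "MGET" and its path-list payload are assumptions from
# the proposal being discussed, not part of any published spec.

def fan_out_mget(host: str, paths: list[str]) -> list[str]:
    """Translate one hypothetical MGET covering `paths` into the
    equivalent individual HTTP/1.1 GET requests (as wire-format text)."""
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            f"\r\n"
        )
    return requests

# Example: one MGET for a page and its subresources becomes three GETs.
reqs = fan_out_mget("example.com", ["/", "/style.css", "/app.js"])
```

The same expansion is what a client would do itself after receiving a 405 from an old server, so the fallback logic lives in exactly one place on either side of the gateway.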

Website: http://hallambaker.com/
Received on Monday, 18 February 2013 02:42:17 UTC
