W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2013

Re: Multi-GET, extreme compression?

From: James M Snell <jasnell@gmail.com>
Date: Sun, 17 Feb 2013 18:32:31 -0800
Message-ID: <CABP7Rbd6NYnR=-JE21SFZ3mrZXopuZ1h3jh9Lp=nJdBAW1rF9g@mail.gmail.com>
To: ChanWilliam(陈智昌) <willchan@chromium.org>
Cc: Roberto Peon <grmocg@gmail.com>, Phillip Hallam-Baker <hallam@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Helge Hess <helge.hess@opengroupware.org>, Cyrus Daboo <cyrus@daboo.name>
On Feb 17, 2013 5:52 PM, "William Chan (陈智昌)" <willchan@chromium.org> wrote:
> On Sun, Feb 17, 2013 at 5:38 PM, Helge Hess <helge.hess@opengroupware.org> wrote:
>> On Feb 17, 2013, at 5:30 PM, William Chan (陈智昌) <willchan@chromium.org> wrote:
>>> I'm having difficulty grokking this proposal. Can you describe more
>>> clearly how this would work with the web platform? Specifically, what kind
>>> of markup in an HTML document should cause a browser to use an MGET for a
>>> resource set as you describe it?
>> ? e.g. <img>, <script>, CSS links etc.
> I'm confused. We issue individual GETs for the individual resource URLs.
> How do we know to combine those individual resources into this magical
> /resource/set path?

If they are in individual img, script, and stylesheet elements, you would do a
single MGET with each distinct URL listed. I presume that a new element
representing a single resource set would be needed for the other case.
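To make that first case concrete, here is a minimal sketch of what "a single MGET with each distinct URL listed" might look like. The wire format is purely hypothetical; the thread never fixed one, and the method name, framing, and `example.com` host are all assumptions for illustration.

```python
# Hypothetical sketch only: neither the MGET method nor its framing was ever
# standardized. This just illustrates "one MGET listing each distinct URL".

def build_mget(urls):
    """Build a hypothetical MGET request listing each distinct URL once,
    preserving first-seen order (the order a browser's parser would
    discover them in the document)."""
    seen = []
    for u in urls:
        if u not in seen:          # deduplicate repeated references
            seen.append(u)
    lines = ["MGET / HTTP/2", "Host: example.com", ""]
    lines.extend(seen)             # one URL per line in the (invented) body
    return "\r\n".join(lines)

# URLs as a parser might discover them from <img>, <script>, CSS links, etc.
discovered = ["/style.css", "/app.js", "/logo.png", "/style.css"]
print(build_mget(discovered))
```

The dedup-and-preserve-order step matters because a page can reference the same resource several times, but the set should name each URL once.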

> Furthermore, as I previously linked to in the very first reply to the
> thread, when we discussed MGET previously, I highlighted how the browser
> incrementally parses the document and sends GETs for resources as it
> discovers them. Since my browser does not have a crystal ball telling it
> that more resources are coming and when they are coming, the browser simply
> issues GETs as soon as it can (subject to some other constraints like DNS
> and available TCP sockets and some weak attempts at resource scheduling).

The approach for this would be to list the grouped resources as early in
the page as possible. It is definitely not without its problems.

>>> Also, how does this work for HTTP/1.X? Since we'll be living in a
>>> transitional world for a while, I'd like to understand how this allows for
>>> HTTP/1.X semantics backwards compatibility.
>> An old server would return a 405 when the BATCH comes in, then the
>> client needs to switch to performing the operations individually.

FWIW, that's not what I had in mind at all.

If the MGET specifies separate URLs, the transforming proxy simply splits
those out into distinct GET requests. Otherwise it translates it into a
single GET and hopes for the best. The server will either do the right
thing or return a 404.

MGET would not necessarily be backwards compatible, and a client ought to
use it only if the origin server supports it.
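The downgrade behavior described above could be sketched roughly as follows. Everything here is hypothetical: the function name, the way listed URLs reach the proxy, and the request representation are all invented for illustration, since the thread never specified them.

```python
# Hypothetical sketch of the HTTP/2 => HTTP/1.x gateway behavior described
# above. The (method, target) tuple representation is invented.

def downgrade(method, target, listed_urls):
    """Translate a (hypothetical) MGET into HTTP/1.x requests.

    If the MGET carries separate URLs, split them into distinct GETs;
    otherwise forward a single GET for the request target and let the
    origin either do the right thing or return a 404.
    """
    if method != "MGET":
        return [(method, target)]           # pass non-MGET traffic through
    if listed_urls:
        return [("GET", u) for u in listed_urls]
    return [("GET", target)]                # hope for the best

# An MGET with explicit URLs becomes three independent GET requests:
print(downgrade("MGET", "/resource/set", ["/style.css", "/app.js", "/logo.png"]))
```

Note this only covers the proxy direction James describes (splitting), not the reverse mapping William asks about below, which is where the proposal gets harder.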

- James

> So, you handwaved over how the client would magically transform URL1 +
> URL2 + URL3 into magical example.com/resource/set. Assuming that's
> possible, how do you do the reverse transformation when an HTTP/2=>HTTP/1.X
> gateway needs to translate HTTP/2 MGET requests for this /resource/set into
> the individual GETs for the original URLs? And even if this is possible,
> how reasonable is it to pay this roundtrip on receiving the 405? We've
> fought really hard to eliminate roundtrips.
>> hh
Received on Monday, 18 February 2013 02:32:59 UTC
