
Re: Multi-GET, extreme compression?

From: Helge Hess <helge.hess@opengroupware.org>
Date: Sun, 17 Feb 2013 18:10:21 -0800
Cc: James M Snell <jasnell@gmail.com>, Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Phillip Hallam-Baker <hallam@gmail.com>, Cyrus Daboo <cyrus@daboo.name>
Message-Id: <2DA0834D-C0B7-4A26-B6AD-B5789D0CFA3B@opengroupware.org>
To: William Chan (陈智昌) <willchan@chromium.org>
On Feb 17, 2013, at 5:52 PM, William Chan (陈智昌) <willchan@chromium.org> wrote:
> I'm confused. We issue individual GETs for the individual resource URLs. How do we know to combine those individual resources into this magical /resource/set path?

Well, I personally care less about HTML here than about services, but I do think you could use the facility for this too. The browser would need to do some clever batching and latency management, but that's not really an HTTP issue but an HTML one, and it comes up in any protocol.
Fixing HTML would be a different thing, but sure, you could introduce resource-set tags which would directly map to batched requests.

Presumably you receive your HTML in a streamed fashion as packets arrive, and presumably parsing a packet is far faster than any network traffic. In fact, many resources will be referenced in the first few packets, i.e. the head (scripts, CSS). For CSS it's even more condensed: within one resource you probably get a few URLs within a very short time.

> Furthermore, as I previously linked to in the very first reply to the thread, when we discussed MGET previously, I highlighted how the browser incrementally parses the document and sends GETs for resources as it discovers them.

Yes, you might want to wait n (3?) milliseconds before sending out additional requests and batch what you get within that timeframe. You don't really send out requests in real time while parsing, do you? ;-)
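The wait-then-batch idea can be sketched as a pure function. This is a minimal sketch; the window length and the (timestamp, url) event representation are illustrative assumptions, not anything specified in the thread:

```python
def batch_by_window(events, window_ms=3):
    """Group (timestamp_ms, url) discovery events into batches.

    A batch opens when the first URL after a flush is discovered and
    collects every URL seen within window_ms of that opening; anything
    later starts a new batch. Events are assumed time-ordered.
    """
    batches = []
    current, start = [], None
    for ts, url in events:
        if start is None or ts - start > window_ms:
            if current:
                batches.append(current)  # flush the previous window
            current, start = [url], ts
        else:
            current.append(url)
    if current:
        batches.append(current)  # flush whatever remains at end of parse
    return batches

# URLs found while parsing the head arrive nearly at once; a late image
# discovered further down the document lands in its own batch.
events = [(0, "/a.css"), (1, "/b.js"), (2, "/c.png"), (10, "/d.png")]
print(batch_by_window(events))
```

With a 3 ms window the first three URLs would go out as one batched request and the straggler as a second one, instead of four individual requests.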

> > Also, how does this work for HTTP/1.X? Since we'll be living in a transitional world for awhile, I'd like to understand how this allows for HTTP/1.X semantics backwards compatibility.
> An old server would return a 405 when the BATCH comes in, then the client needs to switch to performing the operations individually.
> So, you handwaved over how the client would magically transform URL1 + URL2 + URL3 into magical example.com/resource/set. Assuming that's possible, how do you do the reverse transformation, when a HTTP/2=>HTTP/1.X gateway needs to translate HTTP/2 MGET requests for this /resource/set into the individual GETs for the original URLs.

I can't follow you here. A BATCH of 5 GETs would be exactly the same as 5 individual GETs, just with less HTTP overhead and better compression. It's trivial to convert between the two in both directions.
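A sketch of the two directions a gateway would need, assuming the BATCH body is simply one request-URI per line (the exact wire format is not pinned down anywhere in this thread, so treat it as a stand-in):

```python
def split_batch(body):
    """HTTP/2=>HTTP/1.X direction: expand a hypothetical BATCH body
    (one request-URI per line) into the equivalent individual GETs."""
    return [("GET", uri) for uri in body.splitlines() if uri]

def join_batch(requests):
    """The reverse direction: fold individual GETs back into one
    BATCH body, preserving request order."""
    return "\n".join(uri for _method, uri in requests)

body = "/style.css\n/app.js\n/logo.png"
print(split_batch(body))
print(join_batch(split_batch(body)) == body)  # lossless round trip
```

The point being made is exactly this round-trip property: no URL rewriting or magical /resource/set path is involved, so the gateway never has to guess.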

> And even if this is possible, how reasonable is it to pay this roundtrip on receiving the 405? We've fought really hard to eliminate roundtrips.

Maybe I'm missing something, but I thought the goal was to reduce 10...N requests to 1 in the best case. That 10 requests become 11 in the legacy case seems fine to me; plus, a browser could remember on which sites it has seen a 405 and avoid the hit in the future.
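The remember-the-405 idea might look like this on the client side. This is a hypothetical sketch: `send` stands in for whatever transport the client uses, and BATCH itself is the proposed, not a standardized, method:

```python
# Origins known (from a prior 405) not to support the hypothetical BATCH.
no_batch_support = set()

def fetch_all(origin, paths, send):
    """Fetch paths from origin, batching when possible.

    send(method, origin, target) -> status code is an assumed transport
    hook. An old server answers BATCH with 405; we remember that and
    pay the extra round trip only once per origin.
    """
    if origin not in no_batch_support:
        if send("BATCH", origin, paths) != 405:
            return "batched"
        no_batch_support.add(origin)
    for p in paths:  # legacy fallback: individual GETs
        send("GET", origin, p)
    return "individual"

calls = []
def fake_send(method, origin, target):
    calls.append(method)
    return 405 if method == "BATCH" else 200  # simulate an old server

fetch_all("old.example", ["/a", "/b"], fake_send)
print(calls)               # first visit pays the BATCH probe
calls.clear()
fetch_all("old.example", ["/c"], fake_send)
print(calls)               # 405 remembered, no probe this time
```

So against a legacy server the first page load costs one extra request, and subsequent loads cost nothing extra at all.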

Received on Monday, 18 February 2013 02:10:49 UTC
