Re: Multi-GET, extreme compression?

Helge et al,

If you'd like to pursue this, I'd suggest that the folks who are interested get together and make a more complete proposal, e.g., as an Internet-Draft. Developing an approach like this on-list isn't going to use your time or others' well.

Thanks,


On 18/02/2013, at 1:10 PM, Helge Hess <helge.hess@opengroupware.org> wrote:

> On Feb 17, 2013, at 5:52 PM, William Chan (陈智昌) <willchan@chromium.org> wrote:
>> I'm confused. We issue individual GETs for the individual resource URLs. How do we know to combine those individual resources into this magical /resource/set path?
> 
> Well, I personally don't care too much about HTML here but about services; still, I do think you can use the facility for this too. The browser would need to do some clever batching and latency management, but that's not really an HTTP issue, it's an HTML one, and it would be an issue in any protocol.
> Fixing HTML would be a different thing, but sure, you could introduce resource-set tags that would map directly to batched requests.
> 
> Presumably you receive your HTML in a streamed fashion as packets arrive, and presumably parsing a packet is far faster than any network traffic. In fact, many resources will be in the first few packets, i.e., the head (scripts, CSS). For CSS it's even more condensed within one resource: you probably get a few URLs within a very short time.
> 
>> Furthermore, as I linked to in the very first reply to this thread, when we discussed MGET previously, I highlighted how the browser incrementally parses the document and sends GETs for resources as it discovers them.
> 
> Yes, you might want to wait n (3?) milliseconds before sending out additional requests and batch whatever you get within that timeframe. You don't really send out requests in real time while parsing, do you? ;-)
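> 
> Purely as an illustration — nothing here comes from any spec, and the names and the 3 ms window are made up — that kind of client-side batching could look roughly like this in Go:
> 
>   // Sketch: collect resource URLs discovered while parsing for a short
>   // window, then hand them off together as one hypothetical BATCH.
>   package main
> 
>   import (
>       "fmt"
>       "time"
>   )
> 
>   type batcher struct {
>       urls  chan string
>       delay time.Duration
>       send  func([]string) // would issue one BATCH instead of N GETs
>   }
> 
>   func (b *batcher) run() {
>       var pending []string
>       var window <-chan time.Time
>       for {
>           select {
>           case u, ok := <-b.urls:
>               if !ok { // parser is done; flush whatever is left
>                   if len(pending) > 0 {
>                       b.send(pending)
>                   }
>                   return
>               }
>               pending = append(pending, u)
>               if window == nil {
>                   window = time.After(b.delay) // open the ~3 ms window
>               }
>           case <-window:
>               b.send(pending)
>               pending, window = nil, nil
>           }
>       }
>   }
> 
>   func main() {
>       b := &batcher{
>           urls:  make(chan string),
>           delay: 3 * time.Millisecond,
>           send:  func(batch []string) { fmt.Println("one BATCH for:", batch) },
>       }
>       done := make(chan struct{})
>       go func() { b.run(); close(done) }()
>       // URLs "discovered" while parsing the first packets of the document.
>       for _, u := range []string{"/main.css", "/app.js", "/logo.png"} {
>           b.urls <- u
>       }
>       close(b.urls)
>       <-done
>   }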
> 
>>> Also, how does this work for HTTP/1.X? Since we'll be living in a transitional world for awhile, I'd like to understand how this allows for HTTP/1.X semantics backwards compatibility.
>> 
>> An old server would return a 405 when the BATCH comes in; the client then needs to fall back to performing the operations individually.
>> 
>> So, you handwaved over how the client would magically transform URL1 + URL2 + URL3 into the magical example.com/resource/set. Assuming that's possible, how do you do the reverse transformation when an HTTP/2=>HTTP/1.X gateway needs to translate HTTP/2 MGET requests for this /resource/set into the individual GETs for the original URLs?
> 
> I can't follow you here. A BATCH of 5 GETs would be exactly the same as 5 individual GETs, with less HTTP overhead and better compression. It's trivial to convert between the two in both directions.
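> 
> For illustration only — the BATCH method and the one-URL-per-line body are invented for this sketch, not taken from any draft — an HTTP/2=>HTTP/1.X gateway could unpack such a request into plain GETs along these lines:
> 
>   // Sketch: fan a hypothetical BATCH request out into individual GETs.
>   package main
> 
>   import (
>       "bufio"
>       "fmt"
>       "log"
>       "net/http"
>   )
> 
>   func handleBatch(w http.ResponseWriter, r *http.Request) {
>       if r.Method != "BATCH" { // method name is made up for this sketch
>           http.Error(w, "expected BATCH", http.StatusMethodNotAllowed)
>           return
>       }
>       scanner := bufio.NewScanner(r.Body) // assume one URL per line
>       for scanner.Scan() {
>           path := scanner.Text()
>           if path == "" {
>               continue
>           }
>           // One ordinary HTTP/1.1 GET per listed URL toward the origin.
>           resp, err := http.Get("http://origin.internal" + path)
>           if err != nil {
>               fmt.Fprintf(w, "%s: error: %v\n", path, err)
>               continue
>           }
>           fmt.Fprintf(w, "%s: %s\n", path, resp.Status)
>           resp.Body.Close()
>       }
>   }
> 
>   func main() {
>       http.HandleFunc("/", handleBatch)
>       log.Fatal(http.ListenAndServe(":8080", nil))
>   }
> 
> Going the other way (bundling individual GETs into one BATCH) is the same loop run in reverse, which is why the conversion is cheap for a gateway.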
> 
>> And even if this is possible, how reasonable is it to pay this roundtrip on receiving the 405? We've fought really hard to eliminate roundtrips.
> 
> Maybe I'm missing something, but I thought the goal was to reduce 10...N requests to 1 in the best case. That 10 requests become 11 in the legacy case seems fine to me; plus, a browser could remember on which sites it has seen a 405 and avoid the hit in the future.
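> 
> Again just a sketch — the BATCH method and its body framing are invented here — the fallback plus the per-site 405 memory could look roughly like this:
> 
>   // Sketch: try one BATCH; on 405 remember the origin and fall back to GETs.
>   package main
> 
>   import (
>       "net/http"
>       "strings"
>       "sync"
>   )
> 
>   var noBatch sync.Map // origin -> true once a 405 has been seen
> 
>   func fetchAll(origin string, paths []string) error {
>       if _, legacy := noBatch.Load(origin); !legacy {
>           body := strings.NewReader(strings.Join(paths, "\n"))
>           req, err := http.NewRequest("BATCH", origin+"/", body)
>           if err != nil {
>               return err
>           }
>           resp, err := http.DefaultClient.Do(req)
>           if err == nil {
>               defer resp.Body.Close()
>               if resp.StatusCode != http.StatusMethodNotAllowed {
>                   // A real client would read the batched responses here.
>                   return nil
>               }
>               noBatch.Store(origin, true) // remember the 405 for next time
>           }
>       }
>       // Legacy path: N individual GETs (the 10-become-11 case, once per site).
>       for _, p := range paths {
>           resp, err := http.Get(origin + p)
>           if err != nil {
>               return err
>           }
>           resp.Body.Close()
>       }
>       return nil
>   }
> 
>   func main() {
>       _ = fetchAll("http://example.com", []string{"/a.css", "/b.js"})
>   }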
> 
> hh
> 
> 

--
Mark Nottingham   http://www.mnot.net/

Received on Monday, 18 February 2013 02:24:08 UTC