
Re: Multi-GET, extreme compression?

From: Dr Robert Mattson <robert@mattson.com.au>
Date: Mon, 18 Feb 2013 11:47:32 +1100
Message-Id: <238FD677-096D-4971-B1CC-D75045E4DF49@mattson.com.au>
Cc: Helge Heß <helge.hess@opengroupware.org>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>, Phillip Hallam-Baker <hallam@gmail.com>, Cyrus Daboo <cyrus@daboo.name>
To: James M Snell <jasnell@gmail.com>
Hey Guys,

What do you think of the request compression and response multiplexing scheme advocated by HTTP-MPLEX?

Rob

Sent on a mobile device, typos expected.

On 18/02/2013, at 10:07 AM, James M Snell <jasnell@gmail.com> wrote:

> An mget that leverages server push in http/2 and individual cacheable response streams would be very interesting and could address at least some of the prioritization issues.
> 
> On Feb 17, 2013 12:17 PM, "Helge Heß" <helge.hess@opengroupware.org> wrote:
>> On Feb 17, 2013, at 11:18 AM, Cyrus Daboo <cyrus@daboo.name> wrote:
>> > We added a multiget REPORT to CalDAV (RFC4791) and CardDAV (RFC6352) which is used by clients when sync'ing a lot of resources (e.g., initial account setup). The one major criticism has been lack of cacheability of the individual resources included in the multiget.
>> 
>> The other major criticisms being:
>> a) content needs to be XML encoded
>> b) only allows for GETs, not for other operations
>> 
>> I'd also like to see a generic, HTTP-level BATCH request. Please let's not do 'just' an MGET.
>> 
>> hh
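
For readers who haven't used the REPORT Cyrus describes, a CalDAV calendar-multiget request (RFC 4791, section 7.9) looks roughly like this; the host and calendar paths are invented for illustration:

```xml
REPORT /calendars/users/alice/work/ HTTP/1.1
Host: cal.example.com
Depth: 1
Content-Type: application/xml; charset="utf-8"

<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-multiget xmlns:D="DAV:"
                     xmlns:C="urn:ietf:params:xml:ns:caldav">
  <D:prop>
    <D:getetag/>
    <C:calendar-data/>
  </D:prop>
  <D:href>/calendars/users/alice/work/event1.ics</D:href>
  <D:href>/calendars/users/alice/work/event2.ics</D:href>
</C:calendar-multiget>
```

The server answers with a single 207 Multi-Status response carrying every requested resource inside one XML body, which is exactly why the individual events cannot be cached by intermediaries and why the content must be XML-encoded, the two criticisms raised above.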
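
No generic BATCH exists in HTTP itself; purely as a hypothetical sketch of what Helge is asking for (the BATCH method name and boundary are invented here, loosely modeled on multipart batching as used elsewhere, with each part an `application/http` message per the HTTP message media type), it might look like:

```
BATCH /contacts/ HTTP/1.1
Host: www.example.com
Content-Type: multipart/mixed; boundary=batch_1

--batch_1
Content-Type: application/http

GET /contacts/1.vcf HTTP/1.1

--batch_1
Content-Type: application/http

DELETE /contacts/2.vcf HTTP/1.1

--batch_1--
```

Unlike an MGET, a container like this can carry arbitrary methods (GET, PUT, DELETE, ...) in one round trip, though it inherits the same cacheability problem: the composite response is opaque to caches unless the parts are somehow exposed individually, which is where James's server-push suggestion comes in.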
Received on Monday, 18 February 2013 00:48:02 GMT
