
Re: Multi-GET, extreme compression?

From: Roberto Peon <grmocg@gmail.com>
Date: Sun, 17 Feb 2013 15:19:47 -0800
Message-ID: <CAP+FsNeWDBTBYJ0P-URbO5avbUno5etKid10RM+dRwDWAUys2w@mail.gmail.com>
To: James M Snell <jasnell@gmail.com>
Cc: Helge Heß <helge.hess@opengroupware.org>, HTTP Working Group <ietf-http-wg@w3.org>, Phillip Hallam-Baker <hallam@gmail.com>, Cyrus Daboo <cyrus@daboo.name>
MGET (or whatever batch request) implies you know all of what you're
requesting when you're requesting it, which is rarely the case.
As a result, my guess is that this won't solve the prioritization issue for
the browser.
-=R


On Sun, Feb 17, 2013 at 3:07 PM, James M Snell <jasnell@gmail.com> wrote:

> An mget that leverages server push in http/2 and individual cacheable
> response streams would be very interesting and could address at least some
> of the prioritization issues.
> On Feb 17, 2013 12:17 PM, "Helge Heß" <helge.hess@opengroupware.org>
> wrote:
>
>> On Feb 17, 2013, at 11:18 AM, Cyrus Daboo <cyrus@daboo.name> wrote:
>> > We added a multiget REPORT to CalDAV (RFC 4791) and CardDAV (RFC 6352),
>> which is used by clients when syncing a lot of resources (e.g., initial
>> account setup). The one major criticism has been the lack of cacheability
>> of the individual resources included in the multiget.
>>
>> The other major criticisms are:
>> a) the content needs to be XML-encoded
>> b) it only allows GETs, not other operations
>>
>> I'd also like to see a generic, HTTP-level BATCH request. Please let's not
>> do 'just' an MGET.
>>
>> hh
>>
>>
>>
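A calendar-multiget REPORT of the kind Cyrus describes batches several resource URLs into one XML request body. A rough sketch (collection and resource paths are illustrative; see RFC 4791 for the normative definition):

```xml
REPORT /calendars/users/alice/home/ HTTP/1.1
Host: cal.example.com
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>
<C:calendar-multiget xmlns:D="DAV:"
                     xmlns:C="urn:ietf:params:xml:ns:caldav">
  <D:prop>
    <D:getetag/>
    <C:calendar-data/>
  </D:prop>
  <D:href>/calendars/users/alice/home/event1.ics</D:href>
  <D:href>/calendars/users/alice/home/event2.ics</D:href>
</C:calendar-multiget>
```

The server answers with a single 207 Multi-Status response enumerating each href, which is precisely why the individual resources are not independently cacheable, the criticism raised above.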
Received on Sunday, 17 February 2013 23:20:14 GMT