W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2013

Re: Multi-GET, extreme compression?

From: William Chan (陈智昌) <willchan@chromium.org>
Date: Sun, 17 Feb 2013 17:30:32 -0800
Message-ID: <CAA4WUYioRAOEbjU32yEaJuWDAySiZF=OfKXcF-8esqTP0uqwtQ@mail.gmail.com>
To: James M Snell <jasnell@gmail.com>
Cc: Roberto Peon <grmocg@gmail.com>, Helge Heß <helge.hess@opengroupware.org>, HTTP Working Group <ietf-http-wg@w3.org>, Phillip Hallam-Baker <hallam@gmail.com>, Cyrus Daboo <cyrus@daboo.name>
I'm having difficulty grokking this proposal. Can you describe more
clearly how this would work with the web platform? Specifically, what
kind of markup in an HTML document should cause a browser to issue an
MGET for a resource set as you describe it?
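
For instance, is it something like this (purely hypothetical markup,
just to make the question concrete)?

  <link rel="resource-set" href="/assets/">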

Also, how does this work for HTTP/1.X? Since we'll be living in a
transitional world for a while, I'd like to understand how this allows
for backwards compatibility with HTTP/1.X semantics.
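
To make the compatibility question concrete, I'd assume a client that
can't use MGET has to fall back to one GET per resource. A minimal
sketch of that fallback decision (all names here are hypothetical, not
part of the proposal):

```python
# Hypothetical sketch: how a client might plan its requests depending
# on whether the server supports a batch MGET (HTTP/2) or only plain
# GETs (HTTP/1.X). All names and URLs are illustrative.
def plan_requests(urls, server_supports_mget):
    if server_supports_mget:
        # One batch request naming every resource at once.
        return [("MGET", list(urls))]
    # HTTP/1.X fallback: one ordinary GET per resource.
    return [("GET", [u]) for u in urls]

print(plan_requests(["/a.js", "/b.js"], server_supports_mget=False))
# -> [('GET', ['/a.js']), ('GET', ['/b.js'])]
```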


On Sun, Feb 17, 2013 at 4:58 PM, James M Snell <jasnell@gmail.com> wrote:

> A requester does not necessarily need to know everything they are
> getting in advance.
>
> That is, assume that a server defines a set of N resources and assigns
> that set a singular url that represents the entire collection. When I
> do..
>
>   MGET /resource/set HTTP/2.0
>
> The server responds by opening N server-push response streams back to
> the client, each associated with the original MGET. Each would have
> its own Content-Location and Cache-Control headers, allowing
> intermediate caches to still do the right thing. The client does not
> necessarily know in advance everything it is getting from the server,
> but knows it needs to be prepared to handle multiple items.
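>
> To make that concrete, the exchange might look something like this
> (the framing below is purely illustrative, not a concrete wire
> format):
>
>   C: MGET /resource/set HTTP/2.0
>
>   S: [push stream 1] 200 OK
>      Content-Location: /resource/set/a
>      Cache-Control: max-age=3600
>   S: [push stream 2] 200 OK
>      Content-Location: /resource/set/b
>      Cache-Control: max-age=3600
>
> ...one push stream per member of the set, each individually cacheable.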
>
> Alternatively, the MGET could include multiple individual URLs, in
> which case it still behaves the same way. The only difference is that
> the client has a better understanding of exactly what it wants to
> retrieve.
>
> example:
>
> MGET /assets/*.js HTTP/2.0   --> Get all the JavaScript files
> MGET /assets/*.png HTTP/2.0  --> Get all the PNG images
>
> If a cache is able to keep track of exactly which resources were
> pushed in response to the MGET, it can continue to do the right
> thing.
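>
> For example (the file names here are made up), a cache could remember:
>
>   MGET /assets/*.js  -->  pushed: /assets/app.js, /assets/vendor.js
>
> and store each pushed response under its own Content-Location, so that
> a later individual GET for /assets/app.js is a cache hit even though
> it was never requested on its own.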
>
> Since the resources are sent via server push, the server is
> responsible for determining the order and priority in which resources
> are sent.
>
> Yes, this is largely theoretical and easily abused, for sure. But it
> ought to at least be worth investigating.
>
> - James
>
> On Sun, Feb 17, 2013 at 3:19 PM, Roberto Peon <grmocg@gmail.com> wrote:
> > MGET (or whatever batch request) implies you know all of what you're
> > requesting when you're requesting it, which is rarely the case.
> > As a result, my guess is that this won't solve the prioritization
> > issue for the browser.
> > -=R
> >
> >
> > On Sun, Feb 17, 2013 at 3:07 PM, James M Snell <jasnell@gmail.com> wrote:
> >>
> >> An MGET that leverages server push in HTTP/2 and individual
> >> cacheable response streams would be very interesting and could
> >> address at least some of the prioritization issues.
> >>
> >> On Feb 17, 2013 12:17 PM, "Helge Heß" <helge.hess@opengroupware.org> wrote:
> >>>
> >>> On Feb 17, 2013, at 11:18 AM, Cyrus Daboo <cyrus@daboo.name> wrote:
> >>> > We added a multiget REPORT to CalDAV (RFC 4791) and CardDAV
> >>> > (RFC 6352), which is used by clients when syncing a lot of
> >>> > resources (e.g., initial account setup). The one major criticism
> >>> > has been the lack of cacheability of the individual resources
> >>> > included in the multiget.
> >>>
> >>> The other major criticisms being:
> >>> a) content needs to be XML encoded
> >>> b) only allows for GETs, not for other operations
> >>>
> >>> I'd also like to see a generic, HTTP-level BATCH request. Please
> >>> let's not do 'just' an MGET.
> >>>
> >>> hh
> >>>
> >>>
> >
>
>
Received on Monday, 18 February 2013 01:31:00 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 18 February 2013 01:31:03 GMT