Re: Prefer-Push, a HTTP extension.

On 11/23/18 1:12 AM, Mark Nottingham wrote:
> Hi Evert,
> 
> Just some personal thoughts. 
> 
> Although folks are generally not as optimistic about Push as they used to be, I think there's still some appetite for considering it for non-browsing use cases; as such, this is potentially interesting.
> 
> However, it's not clear at all how it'd work to me. Push is effectively hop-by-hop; intermediaries need to make a decision to send it, and that decision needs to be informed. How will a generic intermediary (like a CDN) know what link relations like "item" and "author" are, and how to apply them to a particular connection? 
> 
> It's likely that a server (whether an intermediary or an origin) is going to have many, many potential representations of these types to send, so it'll need to have knowledge of the format to be able to get those links out and push the appropriate responses. That doesn't seem generic -- unless you require all of the potentially pushable links to appear in Link headers (which doesn't seem like it would scale very well).

I potentially see this being used in two ways:

* Through Link headers
* By sending links in response bodies, such as HTML, HAL and Atom.

The draft, as currently written, is agnostic about where the links appear.

The origin server will generally know which resources to send based on
these links, and link relationships appearing in Prefer-Push would only
apply to links whose context URI is the request URI.

In the case of Link headers, intermediaries could cache which links
were available and push optimistically. I agree, though, that if links
are in the body, it's unlikely proxies will parse and push those. I
think that's OK.

I'm not entirely sure about the potential scale concern. Is the
concern mainly about the appearance of many Link headers? Intuitively
that makes sense to me. However, links appearing in response bodies
usually already exist there, so we're not really sending any
additional bytes.

> 
> Pushing *all* of the resources -- or even a significant subset -- is likely to cause performance problems of its own, as there will be contention with any other requests made for bandwidth. This is one of the big things we've learned about push in the browsing use case, and I suspect it applies here as well.

I think one of the use cases we need to test is, for example, a JSON
collection that contains "n" items. Each item is represented as a
resource and linked to the collection with an "item" link relationship.

In the past, each resource may have been 'embedded' in the response
body of the collection resource, but now we're pushing them instead.
We'll need to benchmark to find out at what value of "n" this starts
breaking down.

Frankly, I have no idea even what order of magnitude "n" is in. My
hope is that once we know, we can determine a few best practices for
this feature. We may still find that it's just not possible to get
reasonable performance out of this, but we intend to find out.
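To make the shape of that test concrete, here's a sketch of the kind of
collection I have in mind, assuming a HAL-style body (the structure is
illustrative; the draft is agnostic about the body format):

```python
import json

# A HAL-style collection: every member is linked with rel="item".
# (Illustrative; the draft doesn't mandate HAL or any other format.)
collection = json.loads("""
{
  "_links": {
    "self": {"href": "/articles"},
    "item": [
      {"href": "/articles/1"},
      {"href": "/articles/2"},
      {"href": "/articles/3"}
    ]
  }
}
""")

def item_links(doc):
    """Return the href of every "item" link in a HAL-style document."""
    items = doc.get("_links", {}).get("item", [])
    if isinstance(items, dict):  # HAL allows a lone object instead of a list
        items = [items]
    return [link["href"] for link in items]

# Each href would become one pushed response; "n" is len(item_links(...)).
```

With n = 3 this is trivial; the benchmark question is what happens as
that list grows into the hundreds or thousands.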

Anyway, I'm still curious to learn a bit more about the performance
problems with doing many pushes. Is the main issue having too many
parallel H2 streams? Could it be solved by limiting the number of
parallel pushes?

If this has to do with the number of streams, I imagine the issue
would also exist with a similar number of parallel GET requests. But
maybe there's something specific to Push.

Evert

> 
> Cheers,
> 
> 
>> On 19 Nov 2018, at 3:27 pm, Evert Pot <me@evertpot.com> wrote:
>>
>> Hi everyone,
>>
>> We're a group of people collaborating on a HTTP extension. We would like
>> to introduce a new header that a HTTP client can use to indicate what
>> resources they would like to have pushed, based on a link relationship.
>>
>> This might look something like the following request header:
>>
>> GET /articles
>> Prefer-Push: item, author
>>
>> or
>>
>> GET /articles
>> Prefer: push="item, author"
>>
>> We see this feature being especially useful for hypermedia-style APIs.
>> Many of these types of APIs have some feature to embed resources in
>> other resources in a way that is ignored by HTTP caches.
>>
>> The work-in-progress draft can be read here:
>>
>> <https://github.com/evert/push-please/>
>>
>> My questions:
>>
>> 1. Would this group be interested in adopting this draft and bringing
>>   through the standards process?
>> 2. We're having some discussions around which HTTP Header is more
>>   appropriate. I'm curious if anyone here has any thoughts on that. The
>>   main drawback is using "Prefer" is that it requires parsing a nested
>>   format, but it might be more semantically appropriate for HTTP.
>> 3. Our group is mostly divided on one issue: whether this header should
>>   allow a client to request pushes of arbitrary depth. The cost would
>>   be increased complexity (thus a higher barrier to entry). I'm curious
>>   if anyone here has any insights that would help us make this
>>   decision.
>>
>> Arbitrary-depth example with a custom format:
>>
>>  Prefer-Push: item(author, "https://example.org/custom-rel"), icon
>>
>> Example with S-expression syntax:
>>
>>  Prefer: push="(item(author \"https://example.org/custom-rel\") icon)"
>>
>> In each of the above cases the client request the server push:
>>
>> 1. The resource(s) behind the item link-relationship
>>   a. The resources(s) behind the author relationship (via the "item"
>>      link-relationship).
>>   b. The resource(s) behind the "https://example.org/custom-rel" (via
>>      the "item" link)
>> 2. The resource(s) behind the icon relationship
>>
>> Unfortunately structured-headers doesn't support data-structures of
>> arbitrary depth, so if we want arbitrary-depth pushes, we would need to
>> pick a different format. Very open to suggestions here too.
>>
>> We intend to have several working implementations of this. For those
>> interested in discussing, most of our discussion  is happening on a
>> slack instance (http://slack.httpapis.com channel: #push-please).
>>
>> Evert et al.
>>
> 
> --
> Mark Nottingham   https://www.mnot.net/
> 
> 

Received on Friday, 23 November 2018 22:11:50 UTC