Re: Prefer-Push, a HTTP extension.

On 19/11/2018 12:27, Evert Pot wrote:
> Hi everyone,
> 
> We're a group of people collaborating on a HTTP extension. We would like
> to introduce a new header that a HTTP client can use to indicate what
> resources they would like to have pushed, based on a link relationship.

...

> The work-in-progress draft can be read here:
> 
> <https://github.com/evert/push-please/>

I have always been a bit puzzled by how PUSH is supposed to be 
beneficial when the server doesn't know what the client has locally 
cached.  Nowadays versioned scripts, such as those from 
ajax.googleapis.com, are typically marked to be cached locally for 
one year [1].

In the case where "self" serves everything and all the assets have 
similar caching policies, after the first visit any PUSH data riding 
on dynamic HTML is going to be 99.99% wasted.
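
As a quick illustration (a sketch, not from the draft: the jQuery URL 
is just one versioned asset on that CDN, and the exact header value is 
whatever Google currently serves), a few lines of Go show the policy:

  package main

  import (
      "fmt"
      "net/http"
  )

  func main() {
      // Fetch one versioned library file and print its cache policy.
      resp, err := http.Get(
          "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js")
      if err != nil {
          panic(err)
      }
      defer resp.Body.Close()
      // Typically prints something like:
      //   cache-control: public, max-age=31536000, ...
      // i.e. cacheable for a year, so pushing it again is wasted bytes.
      fmt.Println("cache-control:", resp.Header.Get("Cache-Control"))
  }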

The draft doesn't seem to address:

  - why would this be beneficial compared to just sending n pipelined 
GETs on h2, if the client already understands that it wants n things? 
Either way the response data has to be serialized into individual 
streams with their own headers on a single h2 connection.  With HPACK 
and n GETs that differ only in the request URL, the header sets for 
each request are cheap (sketched below) and you don't have to worry 
about either magicking up a new format to carry the info or "market 
penetration" of implementations.

The draft says that with its method "it's possible for services to 
push subordinate resources as soon as possible", but it doesn't 
compare this to just issuing the n GETs from the start.  I think 
you'll find any advantage hard to measure.  At the least, the draft 
should fairly compare itself to the obvious existing way to do it.
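
To put a number on the HPACK point, here is a minimal sketch using 
Go's hpack encoder (golang.org/x/net/http2/hpack); the header values 
are invented for illustration.  After the first request populates the 
dynamic table, a second request differing only in :path costs only a 
few bytes:

  package main

  import (
      "bytes"
      "fmt"

      "golang.org/x/net/http2/hpack"
  )

  func encodeRequest(enc *hpack.Encoder, path string) {
      for _, f := range []hpack.HeaderField{
          {Name: ":method", Value: "GET"},
          {Name: ":scheme", Value: "https"},
          {Name: ":authority", Value: "example.com"},
          {Name: ":path", Value: path},
          {Name: "user-agent", Value: "example-client/1.0"},
          {Name: "accept", Value: "*/*"},
      } {
          enc.WriteField(f)
      }
  }

  func main() {
      var buf bytes.Buffer
      enc := hpack.NewEncoder(&buf)

      // First request: most fields become dynamic-table entries.
      encodeRequest(enc, "/app.js")
      first := buf.Len()

      // Second request: everything except :path is a one-byte
      // indexed reference to the static or dynamic table.
      encodeRequest(enc, "/style.css")
      second := buf.Len() - first

      fmt.Printf("first: %d bytes, second: %d bytes\n", first, second)
  }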

  - where does the client's contemporary knowledge about the 
relationships come from?  From the server, ultimately?  If so, this 
is a bold claim...

> It reduces the number of roundtrips. A client can make a single HTTP 
> request and get many responses.

h2 pipelining doesn't work like h1 pipelining.  You can spam the 
server with requests on new streams and most (all?) servers will 
start to process them in parallel while earlier streams are still 
being served.  The server cannot defer even reading the new stream 
openings off the network connection, because it must not delay 
hearing about tx credit (flow control window) updates or it will 
deadlock.  So there is a strong reason for servers not to delay new 
stream processing.
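
For comparison, the "n GETs" baseline needs nothing new from either 
side.  A minimal sketch in Go (example.com and the paths are 
placeholders): the default client multiplexes all of these onto one 
h2 connection, and the server is free to serve the streams in 
parallel:

  package main

  import (
      "fmt"
      "net/http"
      "sync"
  )

  func main() {
      // Go's default transport speaks h2 to https origins and puts
      // concurrent requests on separate streams of one connection.
      paths := []string{"/app.js", "/style.css", "/logo.png"}
      var wg sync.WaitGroup
      for _, p := range paths {
          wg.Add(1)
          go func(p string) {
              defer wg.Done()
              resp, err := http.Get("https://example.com" + p)
              if err != nil {
                  fmt.Println(p, err)
                  return
              }
              resp.Body.Close()
              fmt.Println(p, resp.Status)
          }(p)
      }
      wg.Wait()
  }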

-Andy

[1] "The CDN's files are served with CORS and Timing-Allow headers and 
allowed to be cached for 1 year."

https://developers.google.com/speed/libraries/

Received on Friday, 23 November 2018 23:38:01 UTC