RE: Concepts to improve Http2.0

Hi Wesley,

I had a look over your document.

Is the crux of your problem statement that you want to send out dynamically generated content as early as possible? Could your problem be solved by the use of chunked transfer encoding and Trailers [1]? In the HTTP/2 frame format, the simplest response would be a series of frames such as HEADERS, DATA, HEADERS (Trailers with the END_STREAM flag). This is explained in more detail in RFC 7540 section 8.1 [2].

In the examples included in your document there are multiple “Dependent Resources” that get pushed. Are these independent static resources that the dynamically generated content refers to?

As far as my understanding goes, the current protocol mechanisms should permit chunked transfer and push promises without needing to modify the stream life cycle. Pushed resources would sit in the client cache ready to be used by the dynamically generated content when it is received and parsed. In other words, you could achieve your proposed improved timing diagram with current mechanisms.




From: Wesley Oliver []
Sent: 27 July 2016 07:20
Subject: Concepts to improve Http2.0


I am not new to the concept of the IETF, however, I have yet to make an official submission.

I would like to put forth a concept that can further improve the performance of HTTP 2.0.
I have a couple of other concepts as well regarding content-expiry headers, which would affect HTTP 1.1.
Additionally, I would also like to look into concepts to prevent unnecessary push requests for content that is already cached by the browser. Given mobile bandwidth constraints, clients would obviously benefit from not having already-cached content pushed to them.

The full document on the concept can be found at the link below, and the abstract follows this email.

If you could please advise as to the path to follow.

Kind Regards,

Wesley Oliver
Http Response Stream - An optimistic approach to performance improvement, and the snowball effect of a response-body programming paradigm shift


Traditionally in HTTP 1.1 one is required to buffer an HTTP response on the server side, in case a change to the headers is made somewhere in the page-generation code partway through the response, because headers are not allowed to be changed after the message body has been transmitted. Changing these semantics by removing this constraint in HTTP 2.0 would open the door to a paradigm shift in HTTP response programming. The benefits: improved and optimal bandwidth utilization, reduced overall page-render resource latency, and potentially an increase in the number of server page requests that can be processed.


Allow multiple responses to be sent over the wire for the same request, whereby the last response transmitted over the wire forms the official response that is permanently rendered in the client browser.

This is an optimistic approach for the common case where the response will not change, eliminating the need to buffer the response. As soon as the network buffer has a full packet, or has been force-flushed, it can be transmitted over the wire, reducing the latency experienced by the client. Additionally, it allows for improved bandwidth utilization after the server has received the request, as the server can immediately start sending response packets, reducing the bandwidth potentially wasted during the time in which the response is being generated and buffered before transmission.

Web Site that I have developed:

Skype: wezley_oliver
MSN messenger:<>


This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated.
If you have received it in error, please delete it from your system.
Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately.
Please note that the BBC monitors e-mails sent or received.
Further communication will signify your consent to this.


Received on Wednesday, 27 July 2016 10:22:04 UTC