Re: Concepts to improve Http2.0

Hi,

I am going to redo the proposal and rework it slightly, taking a different
angle on what I would like to improve, which should make things clearer.
The idea is to change the focus of the document and then list the knock-on
effects.

I will also include material covering all the scenarios that people have
objected to, explaining why this approach may be better. These are all
around transactions and security; back-end systems integration; system
failures; responsibility; and UX.

Kind Regards,

Wesley Oliver

On Fri, Jul 29, 2016 at 5:19 PM, Matthew Kerwin <matthew@kerwin.net.au>
wrote:

> Hi, just a couple of points here:
>
> On 29 July 2016 at 21:49, Wesley Oliver <wesley.olis@gmail.com> wrote:
>
>> Sorry, I missed that interpretation, given the following and the fact
>> that the lifecycle state diagram didn't have that requirement in it.
>>
>>
> The diagram is of the lifecycle of a stream; the initial SETTINGS is part
> of the lifecycle of the connection.
>
>
>
>> <snip>
>>
>>
>> I can see why the intermediate proxies would have a problem and would
>> require a round trip.
>> However, intermediate proxies should be allowed to modify SETTINGS
>> frames as they pass through them, downgrading the values to what the
>> intermediary supports. That way there would be no need for a round-trip
>> confirmation, as the server would always know the highest supported
>> settings.
>>
> Settings are hop-by-hop, not end-to-end; what a browser advertises to a
> proxy in a SETTINGS frame has little to no bearing on what the proxy
> advertises to the server, and *vice versa* in the other direction.
>
> And I think that's still fair enough. If a proxy is willing to buffer an
> entire stream and rearrange everything so it looks kosher then it doesn't
> matter if the downstream peer wouldn't have accepted the replayed
> messages/overriding trailers/whatever.
>
> That said, I still think there's a smell here. I'm going to go out on a
> limb, drawing on my years as a PHP developer, to say that the primary use
> case for this proposal is to allow the application developer to catch an
> error while generating a response, and change the :status from 200 to 500
> (or similar). In the best case the browser gets the 200 response straight
> away and starts receiving response body chunks as they're generated, as
> happens now without server-side buffering. However if something goes wrong,
> the browser ... what? Receives an EOF on the response, then gets a "hang
> on, replace all that with a 500", so it dumps the partially-rendered
> document and starts displaying the incoming error document? Surely that's
> not good UX. It feels to me like, if your application might throw such an
> exception mid-response, you'd be best buffering it yourself. If it's a
> cacheable response, you can at least then put in appropriate
> Expires/ETags/etc. headers and let a cache optimise subsequent requests for
> you (or even manually cache it yourself serverside.)
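>
> To make that concrete, here is a minimal sketch of the kind of
> server-side buffering I mean (Python, with a WSGI-style start_response
> callable and a hypothetical generate_body generator standing in for
> whatever framework is actually in use): the body is produced in full
> before any status or headers are committed, so the :status never needs
> rewriting after the fact.
>
>     # Hedged sketch: buffer the whole body, then commit status + headers.
>     # generate_body / start_response are illustrative stand-ins, not a
>     # specific framework's API.
>     def handle_request(generate_body, start_response):
>         try:
>             body = b"".join(generate_body())   # buffer the entire response
>         except Exception:
>             start_response("500 Internal Server Error",
>                            [("Content-Type", "text/plain")])
>             return [b"Something went wrong."]
>         # Only now is the 200 committed; the client never sees a status
>         # that later has to be replaced.
>         start_response("200 OK", [("Content-Type", "text/html"),
>                                   ("ETag", '"example"'),
>                                   ("Cache-Control", "max-age=60")])
>         return [body]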
>
> Cheers
> --
>   Matthew Kerwin
>   http://matthew.kerwin.net.au/
>
> On 29 July 2016 at 21:49, Wesley Oliver <wesley.olis@gmail.com> wrote:
>
>> Hi,
>>
>> Sorry, I missed that interpretation, given the following and the fact
>> that the lifecycle state diagram didn't have that requirement in it.
>>
>> 5 <https://tools.ietf.org/html/rfc7540#section-5>.  Streams and Multiplexing
>>
>>
>>
>>    The order in which frames are sent on a stream is significant.
>>    Recipients process frames in the order they are received.  In
>>    particular, the order of HEADERS and DATA frames is semantically
>>    significant.
>>
>>
>> Sections:
>>
>> 6.5 <https://tools.ietf.org/html/rfc7540#section-6.5>.  SETTINGS
>>
>>
>>    A SETTINGS frame MUST be sent by both endpoints at the start of a
>>    connection and MAY be sent at any other time by either endpoint over
>>    the lifetime of the connection.  Implementations MUST support all of
>>    the parameters defined by this specification.
>>
>>
>>
>> So typically there would be no problem in just using the SETTINGS frame,
>> then, to communicate that this functionality is supported by the
>> receiving peer.
>>
>> I can see why the intermediate proxies would have a problem and would
>> require a round trip.
>> However, intermediate proxies should be allowed to modify SETTINGS
>> frames as they pass through them, downgrading the values to what the
>> intermediary supports. That way there would be no need for a round-trip
>> confirmation, as the server would always know the highest supported
>> settings.
>>
>> The client browser should support all previously downgraded settings
>> values.
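>>
>> As a rough sketch of the composition I have in mind (the parameter names
>> and limits below are purely illustrative, and this is proposed behaviour,
>> not how RFC 7540 defines SETTINGS today):
>>
>>     # Hypothetical intermediary behaviour: forward each advertised
>>     # setting, lowered to this hop's own limit, so the server only ever
>>     # sees values every hop on the path can support.
>>     def clamp_settings(received, own_limits):
>>         forwarded = {}
>>         for name, value in received.items():
>>             limit = own_limits.get(name)
>>             forwarded[name] = value if limit is None else min(value, limit)
>>         return forwarded
>>
>>     client = {"MAX_CONCURRENT_STREAMS": 1000, "HYPOTHETICAL_NEW_FEATURE": 1}
>>     proxy  = {"MAX_CONCURRENT_STREAMS": 100,  "HYPOTHETICAL_NEW_FEATURE": 0}
>>
>>     # The server receives the downgraded values, so no confirmation round
>>     # trip is needed to learn the path's effective capabilities.
>>     print(clamp_settings(client, proxy))
>>     # {'MAX_CONCURRENT_STREAMS': 100, 'HYPOTHETICAL_NEW_FEATURE': 0}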
>>
>> This potentially may not fit with all existing settings, meaning we may
>> need to categorize settings into classes according to their
>> behavior/side-effects, so that certain settings may be optimistically
>> overridden by intermediaries.
>>
>> I will look a little later into which settings would be affected by an
>> optimistic composition approach, covering those in section *6.5.2
>> Defined SETTINGS Parameters*.
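>>
>> As a very rough first cut of what such a classification might look like
>> (purely illustrative, not a reading of section 6.5.2):
>>
>>     # Hypothetical classification: which settings an intermediary could
>>     # safely lower in flight, versus those that change protocol behaviour
>>     # and would still need explicit negotiation.
>>     SAFE_TO_LOWER = {
>>         "SETTINGS_MAX_CONCURRENT_STREAMS",  # a smaller limit is always safe
>>         "SETTINGS_MAX_FRAME_SIZE",          # within the allowed range
>>         "SETTINGS_MAX_HEADER_LIST_SIZE",
>>     }
>>     NEEDS_NEGOTIATION = {
>>         "SETTINGS_ENABLE_PUSH",             # changes stream behaviour
>>         "SETTINGS_HEADER_TABLE_SIZE",       # affects HPACK state
>>     }
>>
>>     def may_override(name):
>>         """Could an intermediary optimistically rewrite this setting?"""
>>         return name in SAFE_TO_LOWER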
>>
>>
>> Kind Regards,
>>
>> Wesley Oliver
>>
>>
>>
>> On Fri, Jul 29, 2016 at 1:13 PM, Wesley Oliver <wesley.olis@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> As per the specification, I don't see any requirement that the SETTINGS
>>> frame has to be transmitted first.
>>>
>>>
>>> On Fri, Jul 29, 2016 at 10:58 AM, Cory Benfield <cory@lukasa.co.uk>
>>> wrote:
>>>
>>>>
>>>> On 29 Jul 2016, at 09:31, Wesley Oliver <wesley.olis@gmail.com> wrote:
>>>>
>>>> I see that the documentation says nothing about how the negotiation is
>>>> to happen.
>>>>
>>>>
>>>> In this case, a setting is necessary: a header field is not good
>>>> enough. This is because this functionality requires that all entities on
>>>> the connection (intermediaries too) understand the change this makes to the
>>>> H2 stream state machine. That works when transmitted on a SETTINGS frame
>>>> because each hop of the connection that is actually participating in the H2
>>>> connection needs to look at the SETTINGS frame and respond appropriately.
>>>> Header fields, however, may be passed through to the endpoint, which leads
>>>> to a situation where the client and server can both do this but the
>>>> intermediary cannot, and the intermediary mangles or otherwise terminates
>>>> the connection.
>>>>
>>>> Otherwise it would have to wait for the SETTINGS frame communication to
>>>> have proceeded first, which would introduce latency on the client side
>>>> and would result in the server having to block before it could respond,
>>>> clearly a degradation in performance.
>>>>
>>>>
>>>> The server needs to do this anyway. The start of a HTTP/2 connection
>>>> involves both parties sending SETTINGS frames. The server cannot receive
>>>> the first HEADERS frame without having previously received a SETTINGS from
>>>> the client that would be offering support for this functionality.
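>>>>
>>>> To make that ordering concrete, a toy sketch (not a real HTTP/2
>>>> implementation; SETTINGS_STATUS_REWRITE is a purely hypothetical
>>>> extension setting used only for illustration):
>>>>
>>>>     def server_receive(frames):
>>>>         # Walk incoming frames in order; the connection preface means
>>>>         # the client's SETTINGS always arrives before its first HEADERS.
>>>>         client_settings = None
>>>>         for frame in frames:
>>>>             if frame["type"] == "SETTINGS":
>>>>                 client_settings = frame.get("params", {})
>>>>             elif frame["type"] == "HEADERS":
>>>>                 if client_settings is None:
>>>>                     raise ConnectionError("PROTOCOL_ERROR: HEADERS before SETTINGS")
>>>>                 # The server already knows whether the capability was
>>>>                 # offered, with no extra round trip.
>>>>                 offered = client_settings.get("SETTINGS_STATUS_REWRITE", 0) == 1
>>>>                 print("request received; extension offered:", offered)
>>>>
>>>>     server_receive([
>>>>         {"type": "SETTINGS", "params": {"SETTINGS_STATUS_REWRITE": 1}},
>>>>         {"type": "HEADERS"},
>>>>     ])
>>>>     # -> request received; extension offered: True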
>>>>
>>>> Cory
>>>>
>>>>
>>>
>>>
>>> --
>>> --
>>> Web Site that I have developed:
>>> http://www.swimdynamics.co.za
>>>
>>>
>>> Skype: wezley_oliver
>>> MSN messenger: wesley.olis@gmail.com
>>>
>>
>>
>>
>> --
>> --
>> Web Site that I have developed:
>> http://www.swimdynamics.co.za
>>
>>
>> Skype: wezley_oliver
>> MSN messenger: wesley.olis@gmail.com
>>
>
>
>
> --
>   Matthew Kerwin
>   http://matthew.kerwin.net.au/
>



-- 
-- 
Web Site that I have developed:
http://www.swimdynamics.co.za


Skype: wezley_oliver
MSN messenger: wesley.olis@gmail.com

Received on Monday, 1 August 2016 06:09:58 UTC