Re: HTTP Partial POST Replay

On 29/06/19 6:02 am, Alan Frindell wrote:
> Hi, I submitted an individual draft describing our work on HTTP Partial
> POST Replay.  I initially presented this work at the HTTP Workshop in
> April. 
> 
> https://datatracker.ietf.org/doc/draft-frindell-httpbis-partial-post-replay/
> 
> The TL;DR is that when a webserver behind a cooperating intermediary
> wants to shut down but has received only part of a POST request, it can
> return that request in the response, and the intermediary will pick a
> different server to handle it.  This process is transparent to the client.
> 
> Any comments, questions or other feedback welcome!
> 


Overall impression is that this is an overly complex and
resource-expensive replacement for the very simple 307 status
mechanism. There is no harm in telling a client that it has to retry
its request.



* The stated use case of "server wants to shut down" does not sit well
with the fact that, to use this mechanism, the server has to store all
of the initial request data and re-deliver it back to the intermediary.

* That re-sending is where the bandwidth problem comes from. The
initial request takes N bytes to arrive from the client. If M of those
bytes are (a) delivered to each of the D servers tried and (b) received
back from the first D-1 of those servers, the total bandwidth consumed
is N + D*M + (D-1)*M (see the sketch just below).
  Whereas 307 consumes only (N + M), with the retry arriving as an
ordinary fresh request from the client.
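
To put rough numbers on that, a back-of-envelope calculation (Python;
the byte counts are invented purely for illustration, and response
traffic is ignored):

  # Back-of-envelope comparison; illustrative only.
  # N = bytes of the request as received from the client
  # M = bytes of the partial body forwarded before each shutdown
  # D = number of servers tried before one accepts the request

  def replay_bytes(N, M, D):
      # N in from the client, M out to each of D servers, and M
      # echoed back from each of the D-1 servers that shut down.
      return N + D * M + (D - 1) * M

  def retry_307_bytes(N, M):
      # N in from the client, M out to the one server that answered
      # 307; the retry is an ordinary fresh request.
      return N + M

  print(replay_bytes(N=10_000_000, M=8_000_000, D=3))  # 50000000
  print(retry_307_bytes(N=10_000_000, M=8_000_000))    # 18000000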


Also keep in mind that even a blind intermediary just pushing data to
a single server is handling twice the traffic that server does. That
is the minimum, best-case situation. With multiple servers and/or
clients the difference rapidly grows to orders of magnitude more
resource consumption (see the sketch below). That is the existing
situation, before this feature even starts to add more resource
consumption.
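
A rough sketch of that multiplier (Python; the even load-balancing and
the traffic figures are my assumptions, and responses are ignored):

  # Rough illustration: every request byte is received once from a
  # client and sent once to a server, so the intermediary relays 2x
  # the total request traffic while each server sees only its share.
  def traffic_units(clients, servers, bytes_per_client):
      total_in = clients * bytes_per_client   # clients -> intermediary
      total_out = total_in                    # intermediary -> servers
      per_server = total_in / servers         # evenly load-balanced
      return total_in + total_out, per_server

  intermediary, per_server = traffic_units(1000, 10, 1_000_000)
  print(intermediary / per_server)  # 20.0x one server's traffic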



* All this re-sending of data could delay the server's shutdown by an
unreasonable amount of time, turning what would be a few seconds into
minutes or even hours. Depending on the intermediary's load being
reasonable is not a good idea.

* Every millisecond of delay added by re-receiving and re-sending the
data makes it more likely that the client will terminate early. If
that happens, all the time, bandwidth, memory, and CPU cycles spent
are completely wasted.


Consider the case of a system undergoing a DoS attack at the
public-facing interface of the intermediary. Enacting this feature is
a huge resource expenditure for an already highly loaded intermediary.



* Section 2.1 says "The server MUST have prior knowledge"

Yet no mechanism is even hinted at for how a server may acquire such
knowledge. Defining a specific negotiation signal would be far better,
and would avoid a huge headache with implementations choosing
different signals and mechanisms for negotiating that knowledge.
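
For instance (purely illustrative; the draft defines nothing of the
sort, and the identifier is hypothetical), an HTTP/2-style SETTINGS
parameter sent by the intermediary would be an unambiguous signal:

  SETTINGS_ENABLE_PARTIAL_POST_REPLAY (identifier TBD) = 1

A server that never received such a setting would know it must not
attempt a replay.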


* The Echo- or Pseudo-Echo mechanism is very clunky. I find it
unlikely that any intermediary implementing this feature would be
unable to simply store the initial request headers for re-use as
needed (see the sketch below).
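
Something along these lines (Python; a sketch under my assumptions,
with pick_server and send_request as hypothetical stand-ins, not
anything taken from the draft):

  # Sketch only: the intermediary retains the original request head,
  # so the origin never needs to send Echo-* headers back -- only
  # the unprocessed body bytes.
  pending = {}  # request_id -> (method, target, headers)

  def on_request(request_id, method, target, headers):
      # Retain the request head for the lifetime of the request.
      pending[request_id] = (method, target, dict(headers))

  def on_partial_post_replay(request_id, echoed_body, pick_server):
      # Rebuild the request from stored state plus the echoed body
      # and hand it to a freshly chosen server.
      method, target, headers = pending.pop(request_id)
      pick_server().send_request(method, target, headers, echoed_body)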



AYJ
