Re: Existing implementations of the resumable upload draft

Hi Marius, WG,

Some time ago I wrote a server capable of resumable uploads [1], and although it is not an implementation of this particular draft, I learned some general lessons about resuming HTTP requests that I’d like to share. In particular, there’s a certain subset of HTTP requests that is much simpler to resume and far less complicated to implement.

To start, I think someone in the meeting asked about implementation techniques: I’m only aware of ~4 ways to implement the server logic of resumable uploads in distributed HTTP servers that are compatible with any HTTP request (in particular, POST):



1. Whole-request buffering: A gateway server, with attached network storage, listens for requests and diverts resumable requests into a temporary buffer. Once the upload is complete, the gateway pipes the buffer downstream to the application. (A rough sketch follows the pros and cons below.)

Pros:
• Compatible with existing application servers. CDNs could offer this as a feature to customers without any modification to the origin server.
• Easy to deploy even to multiple load-balanced gateways, so long as they share a connection to the storage, and there is some sort of mechanism for acquiring and breaking locks.

Cons:
• Large performance penalty to clients that support resumable uploads: The application server won't process the request until the complete request body has been received by a gateway.
• Difficult to track upload progress; validation errors cannot be discovered until the whole upload is sent off, and the client may not receive any feedback from the server for some time after.
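
To make this concrete, here’s a rough Go sketch of what such a buffering gateway could look like. This is only my own illustration, not anything from the draft: the Upload-Token and Upload-Complete header names, the shared-storage path, and the origin address are placeholders, and the cross-gateway locking mentioned above is omitted.

    package main

    import (
        "io"
        "net/http"
        "os"
        "path/filepath"
    )

    const origin = "http://origin.internal" // hypothetical application server

    func bufferAndForward(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Upload-Token") // placeholder resumption identifier
        buf := filepath.Join("/mnt/shared", filepath.Base(token))

        // Append this (possibly partial) request body to the shared buffer.
        f, err := os.OpenFile(buf, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        io.Copy(f, r.Body)
        f.Close()

        // Not the final segment yet: acknowledge and wait for a resumption.
        if r.Header.Get("Upload-Complete") != "?1" { // placeholder completion signal
            w.WriteHeader(http.StatusAccepted)
            return
        }

        // Complete: only now is the whole body replayed downstream, which is
        // where the latency penalty above comes from.
        whole, err := os.Open(buf)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer whole.Close()
        req, _ := http.NewRequest(r.Method, origin+r.URL.RequestURI(), whole)
        req.Header = r.Header.Clone()
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        for k, v := range resp.Header {
            w.Header()[k] = v
        }
        w.WriteHeader(resp.StatusCode)
        io.Copy(w, resp.Body)
    }

    func main() {
        http.HandleFunc("/", bufferAndForward)
        http.ListenAndServe(":8080", nil)
    }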



2. Sticky concatenation: A gateway server selects a single origin server (if there’s a cluster) to receive the request. In the event of an interruption, the gateway holds the downstream request open and waits for a resumption from upstream. When the gateway receives a continuation, it connects the resumed payload to the original request. (Note that any node downstream of the load balancer can join the requests, including the origin server, as long as client sessions are sticky, but it’s simplest to bundle these features in the same node. A rough sketch follows the pros and cons below.)

Pros:
• Requires no special support from origin servers.
• Generic, likely the most efficient method for general deployment.

Cons:
• If the resuming request is directed to a different gateway in a cluster, then that gateway must forward the request to the original gateway, which can forward it along the same connection to the original origin. Or, the origin must support a protocol that permits the source of the connection to be handed off to another node (Multipath TCP?).
• High susceptibility to DDoS attacks; an attacker merely has to start a large number of resumable requests and deliberately interrupt them in order to exhaust open request threads.
• The most brittle of the four in the face of an unreliable link between the gateway and the origin server.
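
Here’s a similarly rough Go sketch of a concatenating gateway, again with placeholder header names. Relaying the origin’s final response back to the client and tracking exact byte offsets across resumptions are both left out, which is exactly where much of the real complexity (and brittleness) lives.

    package main

    import (
        "io"
        "net/http"
        "sync"
    )

    const origin = "http://origin.internal" // hypothetical application server

    var pipes sync.Map // upload token -> *io.PipeWriter

    func concatenate(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Upload-Token") // placeholder resumption identifier

        v, loaded := pipes.Load(token)
        if !loaded {
            pr, pw := io.Pipe()
            v = pw
            pipes.Store(token, pw)
            // One long-lived downstream request per upload; it blocks until the
            // pipe is closed, which is why deliberately abandoned uploads can
            // exhaust open requests (the DDoS concern above).
            go func() {
                req, _ := http.NewRequest(r.Method, origin+r.URL.RequestURI(), pr)
                req.Header = r.Header.Clone()
                if resp, err := http.DefaultClient.Do(req); err == nil {
                    resp.Body.Close()
                }
            }()
        }
        pw := v.(*io.PipeWriter)

        // Append this segment; if the client drops mid-copy, the pipe simply
        // stays open and waits for the next resumption with the same token.
        io.Copy(pw, r.Body)

        if r.Header.Get("Upload-Complete") == "?1" { // placeholder completion signal
            pw.Close()
            pipes.Delete(token)
        }
        w.WriteHeader(http.StatusAccepted)
    }

    func main() {
        http.HandleFunc("/", concatenate)
        http.ListenAndServe(":8080", nil)
    }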



3. Hybrid/edge processing: The gateway processes some logic itself, writing the request body as a blob into the database. Upon completion, the gateway forwards an empty notification to the application server, informing it that new data is available in the database that requires further indexing and processing. The application server could perform initial verification of the payload right away and return a status code, then carry out any additional indexing or tasks in the background. (A rough sketch follows the pros and cons below.)

Pros:
• An optimization of whole-request buffering, though it still imposes a noticeable latency penalty on clients simply for supporting resumable uploads.

Cons:
• The exact protocol for notifying the origin server and configuring the edge gateway is unspecified.
• Of the four options here, the highest potential for obscure bugs. It requires many logic branches that will only be executed when a client requests a resumable upload, and these branches are typically authority- or origin-specific (rather than generic code on a gateway).
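
For illustration, here’s a rough Go sketch of the gateway half of this approach. The notification URL, the blobs table, the header names, and the Postgres-flavored SQL are all inventions of mine; the point is only to show where the unspecified gateway-to-origin protocol has to sit.

    package gateway

    import (
        "database/sql"
        "io"
        "net/http"
        "net/url"
    )

    const origin = "http://origin.internal" // hypothetical application server

    // edgeUpload appends each segment to a blob shared with the origin, then
    // sends the "empty notification" once the upload is complete.
    func edgeUpload(db *sql.DB) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            token := r.Header.Get("Upload-Token") // placeholder resumption identifier

            data, err := io.ReadAll(r.Body)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            _, err = db.Exec(
                `INSERT INTO blobs (id, body) VALUES ($1, $2)
                 ON CONFLICT (id) DO UPDATE SET body = blobs.body || EXCLUDED.body`,
                token, data)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }

            if r.Header.Get("Upload-Complete") != "?1" { // placeholder completion signal
                w.WriteHeader(http.StatusAccepted)
                return
            }

            // The origin reads the blob from the database, verifies it, answers,
            // and schedules any further indexing in the background.
            resp, err := http.Post(origin+"/internal/blob-ready?id="+url.QueryEscape(token), "", nil)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadGateway)
                return
            }
            defer resp.Body.Close()
            w.WriteHeader(resp.StatusCode) // relay the origin's verdict
        }
    }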



4. Database-side state machine: The application itself is built to have no (opaque) internal state; instead, any application state is stored in the database, or held by the client (e.g. built into the resume-request URL), or is otherwise recoverable by some mechanism (serialization/synchronization of process memory, distributed transactions, or other dark magic). (A rough sketch follows the pros and cons below.)

Pros:
• Requires no special handling by any intermediary, and can be deployed behind CDNs that have no knowledge of resumable requests.

Cons:
• Handling is tightly integrated with the application/origin server logic, and must be re-implemented for every different resource type.
• Continuously flushing application state to a central database is extremely difficult to debug and test, and very prone to inadvertent errors. Effectively deploying this solution may demand a comprehensive application server framework or a specialized programming language.
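
As an illustration of how small the per-segment handler can get when every bit of state lives in the database, here’s a rough Go sketch; the uploads schema, the Upload-Offset header, and the Postgres-flavored SQL are my own placeholders, and it assumes the upload id is carried in the resume URL.

    package app

    import (
        "database/sql"
        "io"
        "net/http"
        "strconv"
    )

    // appendSegment is the whole "state machine": nothing about the upload lives
    // in process memory, so any replica behind any load balancer can accept the
    // next segment.
    func appendSegment(db *sql.DB, w http.ResponseWriter, r *http.Request) {
        uploadID := r.PathValue("id") // id embedded in the resume URL (Go 1.22+ routing)

        data, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }

        // One statement appends the bytes and advances the committed offset, so a
        // crash mid-request leaves behind a consistent row to resume from. A real
        // implementation would first compare the client's claimed offset against
        // the committed one to reject duplicate or out-of-order segments.
        var committed int64
        err = db.QueryRow(
            `UPDATE uploads
                SET body = body || $1, committed = committed + $2
              WHERE id = $3
              RETURNING committed`,
            data, len(data), uploadID).Scan(&committed)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Report how far the server has durably gotten.
        w.Header().Set("Upload-Offset", strconv.FormatInt(committed, 10))
        w.WriteHeader(http.StatusNoContent)
    }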

—

My experiment above deployed (2), where multiple requests are concatenated together and streamed to the application as a single request, as if the original client had never been interrupted.

However, I found that all of these solutions have significant drawbacks, either from a user-experience standpoint (significant latency when a resumable upload is requested) or from an implementation-complexity standpoint (the necessity of numerous seldom-used logic and error paths, with great potential for latent bugs).

Many of these drawbacks exist because resumable uploads try to support every type of request, including POST. However, for a large share of the use cases, there is a much simpler solution: resource-level resumption. This is where the client reads the server’s state directly and determines how to recover from the incomplete state transition. This is an enhancement to the state-transition guarantees that HTTP already specifies; for example, GET and PUT (and QUERY?) are idempotent, which indicates (or is supposed to indicate) that a failed request may be retried by a client, transparently to the user. Among the techniques for describing to clients how to recover from an incomplete state transition, there’s a particularly effective one for merely uploading a large file:

Segmented resource upload: When the client merely needs to upload a large file, it may use a combination of PUT and/or PATCH requests to synchronize the server’s representation with its own. First, if desired, the user-agent allocates a resource with a POST request, and the origin server replies with a location to PUT the remainder of the upload to (perhaps a content-addressable URL); then the payload body can be fired off as a series of PATCH or PUT requests to this target URL. If the connection is interrupted, the client can make a HEAD request to probe for the state of the resource and make additional requests as necessary. Or the client can simply re-try the upload, perhaps uploading smaller segments this time around. (Since PUT is idempotent, there are few incorrect ways to converge the server state back to the client’s expectation. A client sketch follows the pros and cons below.)

Pros:
• Parallel upload is possible (multiple clients can upload different parts of the same resource at the same time).
• Generally stateless—resume-upload and new-upload requests follow the same logic branches.
• Cacheable—caches may invalidate the target resource as necessary, or potentially even update the cache in-place once it sees a 200 response to a PATCH.
• Somewhat easier to test—there are no resumption-specific error branches on the server to be covered, and the client executes logic depending on the server’s state, not on any particulars of how the network failed or its own state.
• Does not require 1xx status code support (however, user-agents may still benefit from 1xx messages indicating how much data the origin server has committed to durable storage, so that the user-agent has the option to de-allocate that storage on its end—a necessary feature for indefinitely long live streams, for example.)
• No gateway/CDN support is specifically required.

Cons:
• No support for POST. However, many requests are easily adapted to use PUT.
• Origin server must support multiple methods to transition state (in this case, PUT and PATCH must each result in the same origin server final state).
• The durable storage, rather than a gateway, must track the status of partly uploaded resources.
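
To show how little machinery the client needs, here’s a rough Go sketch of a segmented upload client. The PATCH body framing below (a Content-Range line, a blank line, then the bytes) is only a stand-in for whatever the patch media type actually specifies, and the POST allocation step is assumed to have already produced the target URL.

    package client

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // uploadSegmented PATCHes a large file to target in fixed-size segments.
    func uploadSegmented(target string, f *os.File, size int64) error {
        const segment int64 = 8 << 20 // 8 MiB per PATCH

        for off := int64(0); off < size; {
            n := segment
            if size-off < n {
                n = size - off
            }

            var body bytes.Buffer
            fmt.Fprintf(&body, "Content-Range: bytes %d-%d/%d\r\n\r\n", off, off+n-1, size)
            if _, err := io.Copy(&body, io.NewSectionReader(f, off, n)); err != nil {
                return err
            }

            req, err := http.NewRequest(http.MethodPatch, target, &body)
            if err != nil {
                return err
            }
            req.Header.Set("Content-Type", "application/byteranges")

            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                // Interrupted: a HEAD request on target would reveal how much the
                // server already holds; since re-writing the same bytes converges
                // to the same state, retrying this segment is also safe. (A real
                // client would bound retries and back off.)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode >= 300 {
                return fmt.Errorf("segment %d-%d rejected: %s", off, off+n-1, resp.Status)
            }
            off += n
        }
        return nil
    }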

—

When I was originally building my experiment, this feature (the ability to upload a resource in segments) was the reason I split off the “byteranges” patch media type from the concept of resumable uploads in general. I figured the majority of requests could simply use PATCH to complete an upload; then, for the remainder, it is a useful building block for generic resumable uploads: if a client can upload a file, then it can also treat a request as a resource unto itself, to be uploaded and synchronized by the same mechanism.

I understand that a different mechanism may be better for describing the offset and completion of the upload (within this resumable uploads protocol) in the grand scheme of things; however, I hope this illuminates why I proposed specifying “application/byteranges” as the PATCH media type to use. I’d also like to point out that, for a certain subset of requests that already have state-transition guarantees (especially PUT requests), there may be an even simpler protocol to consider deploying.

Thanks,

Austin Wright

[1] <https://github.com/awwright/http-progress>


> On Jul 25, 2023, at 15:28, Marius Kleidl <marius@transloadit.com> wrote:
> 
> Dear working group,
> 
> last month Jonathan Flat and Guoye Zhang already shared their implementations for the resumable upload draft on this mailing list. I just compiled them, along with other existing implementations, into one repository at https://github.com/tus/draft-example.
> 
> It includes a list of servers and clients, instructions on how to use them, sample code, and a table showing the interoperability between these projects. In total, there are clients for iOS, browsers and the command line. Servers are written in Swift, Go and .NET.
> 
> I hope that this helps to test and validate our draft, as well as to ensure that we are all on the same page when working on the draft. Feel free to let me know if you have any feedback or other implementations!
> 
> Best regards
> Marius

Received on Thursday, 27 July 2023 01:50:48 UTC