[whatwg/fetch] Add request Content-Length to PerformanceResourceTiming entries (Issue #1777)

### What problem are you trying to solve?

We would like the ability to display an upload throughput indicator in our UI (e.g. 13.1 Mbps).  We are using the [@azure/storage-blob](https://www.npmjs.com/package/@azure/storage-blob) SDK to upload files directly to an Azure Storage Blob account.  Throughput calculations require three inputs: 1) start time, 2) duration, and 3) payload size in bytes.  The SDK internally splits files into chunks and then uploads each chunk with a separate `fetch` call.
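
Once those three inputs are available, the arithmetic itself is trivial; a minimal sketch (function and parameter names are illustrative, not from any SDK):

```ts
// Minimal throughput math from the three inputs above.
function throughputMbps(payloadBytes: number, durationMs: number): number {
  const bits = payloadBytes * 8;
  const seconds = durationMs / 1000;
  return bits / seconds / 1_000_000; // megabits per second
}

// e.g. a 16 MiB chunk uploaded in 10.2 s ≈ 13.1 Mbps
console.log(throughputMbps(16 * 1024 * 1024, 10_200).toFixed(1));
```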

The `PerformanceResourceTiming` API has multiple properties related to the **response** (e.g. `transferSize`, `encodedBodySize`, `decodedBodySize`) that can be used to measure download performance.  But there is no standard place that records the size of the **request** payload in order to measure uploads.

It seems clear (perhaps only to me) that request payload size is a key metric that really belongs at the standards level.  Third party libraries such as the Azure SDK I linked above certainly *should* try to expose this information.  But having this metric at the standards level enables developers to fill in the functionality gaps of third party libraries.  It also isn't a stretch to imagine how observability platforms would benefit from being able to record and visualize bottlenecks and issues for file uploads.

### What solutions exist today?

Current solutions for calculating throughput require measuring the request payload size when initiating the `fetch` call and then finding the corresponding `PerformanceResourceTiming` entry.  This can be difficult to achieve in practice because it requires directly measuring the size of the payload stream (or serialized JSON, or whatever) and then looking up the corresponding timing entry to get the start time and duration.  It's even more difficult if the `fetch` call happens deep inside a third party library.
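
A rough sketch of that workaround, assuming you control the `fetch` call yourself and the body is a `Blob` (stream bodies have no cheap size measurement):

```ts
// Sketch of today's workaround: measure the body size yourself, then
// look the timing entry up by URL after the fetch completes.
async function uploadWithTiming(url: string, body: Blob): Promise<void> {
  const payloadBytes = body.size; // free for a Blob; hard for streams
  await fetch(url, { method: "PUT", body });

  // Correlate by URL. Fragile: the same URL may be fetched more than
  // once, and the entry may not be buffered yet when fetch resolves;
  // a PerformanceObserver is more robust but even harder to correlate.
  const entries = performance.getEntriesByName(url) as PerformanceResourceTiming[];
  const entry = entries[entries.length - 1];
  if (entry) {
    console.log(`sent ${payloadBytes} bytes in ${entry.duration.toFixed(0)} ms`);
  }
}
```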

Some solutions that come to mind:

- Monkey patch the `fetch` API so that I can capture the payload size.  Then attempt to correlate that request to a `PerformanceResourceTiming` entry (rough sketch after this list).  ***yuck***
- Ditch the third party library I'm using for uploads, and manually implement file splitting.  ***yuck***
- Try to convince my coworkers and higher-ups that, in the year 2024, calculating upload throughput is just too hard.  This is the approach I'm going with for now.
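
For illustration, here is roughly what the monkey-patch option from the first bullet could look like.  The wrapper and the correlate-by-URL scheme are mine, and both break down for repeated URLs, streams, `FormData`, etc.:

```ts
// Record request body sizes keyed by URL, then match them against
// resource timing entries as they arrive. Fragile by design.
const pendingSizes = new Map<string, number>();
const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const url =
    typeof input === "string" ? input :
    input instanceof URL ? input.href :
    input.url;
  const body = init?.body;
  if (body instanceof Blob) {
    pendingSizes.set(url, body.size); // only Blob/File sizes are free
  }
  return originalFetch(input, init);
};

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceResourceTiming[]) {
    const size = pendingSizes.get(entry.name);
    if (size !== undefined) {
      pendingSizes.delete(entry.name);
      console.log(`${entry.name}: ${size} bytes in ${entry.duration.toFixed(0)} ms`);
    }
  }
}).observe({ type: "resource", buffered: true });
```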

### How would you solve it?

*Waves a magic wand:* Add a property to `PerformanceResourceTiming` entries called `requestContentLength` that is populated from the `Content-Length` header of the request.
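
Consuming it could then look something like the following.  `requestContentLength` is the hypothetical property proposed above, so today this code runs but never logs anything:

```ts
// Hypothetical usage of the proposed requestContentLength property.
new PerformanceObserver((list) => {
  const entries = list.getEntries() as Array<
    PerformanceResourceTiming & { requestContentLength?: number }
  >;
  for (const entry of entries) {
    if (entry.requestContentLength && entry.duration > 0) {
      const mbps =
        (entry.requestContentLength * 8) / (entry.duration / 1000) / 1_000_000;
      console.log(`${entry.name}: ${mbps.toFixed(1)} Mbps upload`);
    }
  }
}).observe({ type: "resource", buffered: true });
```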

*Waves Dumbledore's Elder Wand:* Add an API to track `fetch` progress updates in the browser [Performance APIs](https://developer.mozilla.org/en-US/docs/Web/API/Performance_API).  This might be a bit over-ambitious, but it sure would be nice to finally have a standard way to measure upload (and download) progress and throughput.
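
Purely as a thought experiment, that idea might be consumed like this; every name below is invented and no browser implements anything like it:

```ts
// Entirely speculative: a hypothetical "fetch-progress" entry type
// delivering periodic byte counts during an upload. The entry type and
// the bytesSent/totalBytes fields are all made up for illustration.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    const pct = (entry.bytesSent / entry.totalBytes) * 100;
    console.log(`${entry.name}: upload ${pct.toFixed(0)}%`);
  }
}).observe({ entryTypes: ["fetch-progress"] } as any);
```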

### Anything else?

This feature is about getting a key piece of information for calculating throughput.  The actual throughput calculation itself is also very difficult, but it is outside the scope of this request.  Library authors and application developers looking to calculate throughput will quickly see how deep the rabbit hole goes and shy away from feature requests for throughput indicators.

This feature is attempting to simplify (even if only a little) what is already a difficult task.

Some discussions I've found that are relevant:

- w3c/resource-timing#102
- #491
- [PerformanceResourceTiming API](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceResourceTiming)
- My feature request to the Azure team: Azure/azure-sdk-for-js#31122
