- From: Andrea Giammarchi <notifications@github.com>
- Date: Fri, 26 Jul 2024 01:03:19 -0700
- To: whatwg/encoding <encoding@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
I did some extra tests to verify whether buffer creation is the reason for the slowdown, and this confirms it:
**new buffer each time**
```js
"use strict"
let input = require("../input")
let encoder = new TextEncoder()
module.exports = () => {
  // size the output buffer for the worst case scenario
  const ui8Array = new Uint8Array(input.length * 4);
  return encoder.encodeInto(input, ui8Array).written;
}
```
This is still faster than `encode(input).byteLength`:
```
./benchmarks/textencoder.js: 3’329’442.3 ops/sec (±174’287.7, p=0.001, o=0/100)
```
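For reference, the `encode(input).byteLength` baseline presumably looks something like this (a minimal sketch, assuming the same `../input` module and harness; it isn't quoted in this comment):

```js
"use strict"
let input = require("../input")
let encoder = new TextEncoder()
module.exports = () => {
  // encode() allocates and fills a fresh Uint8Array on every call,
  // only to read back its byteLength
  return encoder.encode(input).byteLength;
}
```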
Now, if there is no new buffer creation at all:
```js
"use strict"
let input = require("../input")
let encoder = new TextEncoder()
// size the output buffer for the worst case scenario, once
const ui8Array = new Uint8Array(input.length * 4);
module.exports = () => {
  return encoder.encodeInto(input, ui8Array).written;
}
```
The result is better than the code-points loop:
```
./benchmarks/textencoder.js: 23’922’510.0 ops/sec (±547’321.7, p=0.001, o=4/100) severe outliers=3
```
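For comparison, the code-points loop is a pure JS byte counter along these lines (a sketch of such a loop, not the exact benchmark code):

```js
"use strict"
let input = require("../input")
module.exports = () => {
  // count UTF-8 bytes by walking code points; the string iterator
  // yields astral characters as single code points >= 0x10000
  let length = 0;
  for (const char of input) {
    const code = char.codePointAt(0);
    if (code < 0x80) length += 1;
    else if (code < 0x800) length += 2;
    else if (code < 0x10000) length += 3;
    else length += 4;
  }
  return length;
}
```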
I suppose a method that just counts the byte length would make it possible to get performance closer to Node.js' Buffer.
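The Node.js comparison I have in mind is `Buffer.byteLength`, which reports the encoded size without allocating or writing an output buffer (a sketch, assuming the same harness):

```js
"use strict"
let input = require("../input")
module.exports = () => {
  // Buffer.byteLength counts UTF-8 bytes without producing them
  return Buffer.byteLength(input, "utf8");
}
```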
--
Reply to this email directly or view it on GitHub:
https://github.com/whatwg/encoding/issues/333#issuecomment-2252193819
You are receiving this because you are subscribed to this thread.
Message ID: <whatwg/encoding/issues/333/2252193819@github.com>
Received on Friday, 26 July 2024 08:03:23 UTC