node.js - How to set initial memory size in node 22?
I have a large monolithic application. When I run it with --trace-gc, I see a lot of Scavenge ... allocation failure lines, and the memory size increases incrementally from ~20MB to ~300MB.
I know malloc()s are slow-ish in C, so if I can do fewer of those, my application should start up faster, right?
How do I tell V8 to start with a larger initial memory allocation (300MB instead of ~20MB)?
Looking at node --v8-options I found these:
--initial-heap-size
--initial-shared-heap-size
--min-semi-space-size
--semi-space-growth-factor
but changing any of these has no effect on the behavior observed with --trace-gc.
(node -v yields v22.13.0)
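For reference, I'm launching the app roughly like this (app.js stands in for the real entry point, and the flag values are just examples of what I tried):

node --trace-gc --min-semi-space-size=64 --semi-space-growth-factor=4 app.js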
2 Answers
(V8 developer here.)
How do I tell V8 to start with a larger initial memory allocation (300MB instead of ~20)?
As far as I know, there's no way to force preallocation of a large empty heap.
I know malloc()s are slow-ish in C, so if I can do fewer of those, my application should start up faster, right?
There are no malloc()s involved here, so their speed or lack thereof has no impact on your scenario. When V8 grows its heap, it uses mmap() (or the respective OS's equivalent) in fairly large chunks, so growing to a few hundred MB should be pretty fast.
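If you want to see this for yourself, here is a rough sketch (plain Node, no special flags; note that most of the measured time is object construction rather than the underlying memory mapping) that watches the heap grow in sizeable steps and times the whole thing:

const v8 = require('node:v8');

const keep = [];
let last = v8.getHeapStatistics().total_heap_size;
const t0 = process.hrtime.bigint();
for (let i = 0; i < 2_000_000; i++) {
  keep.push({ i, s: 'x'.repeat(16) }); // ordinary heap-allocated objects
  if (i % 100_000 === 0) {
    const now = v8.getHeapStatistics().total_heap_size;
    if (now !== last) {
      console.log(`heap grew to ${(now / 2 ** 20).toFixed(1)} MB`);
      last = now;
    }
  }
}
console.log(`total: ${Number(process.hrtime.bigint() - t0) / 1e6} ms`);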
when run with --trace-gc, I see a lot of Scavenge ... allocation failure
That is entirely normal and no reason to be concerned. The "young generation" fills up quickly (because it's small) and is quick to collect (because it's small) and usually that's a great way to efficiently deal with large numbers of short-lived objects.
In rare cases, it may be beneficial to step in with manual optimization attempts, but that's the exception, not the rule.
For example, if you do find that you have too many Scavenger runs in a performance-critical section of your application where every millisecond matters, you can try to allocate fewer short-lived objects in this section. Chances are that will not only reduce time spent on GCs but also time spent there in general.
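As a purely hypothetical illustration of "allocate fewer short-lived objects" (the function and variable names are made up):

// before: one throwaway wrapper object per sample keeps the Scavenger busy
function sumSquaresAllocating(samples) {
  let sum = 0;
  for (const s of samples) {
    const point = { value: s * s }; // short-lived object, dead after this iteration
    sum += point.value;
  }
  return sum;
}

// after: the intermediate stays a plain number, so nothing is allocated per iteration
function sumSquares(samples) {
  let sum = 0;
  for (const s of samples) {
    sum += s * s;
  }
  return sum;
}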
Also, if you find that your Scavenger runs take significantly longer than 0-2 milliseconds each, then you may inadvertently be creating an object structure that runs into the slow path(s) of the generational heap strategy. How to resolve that depends on the specifics; usually your best bet is once again to allocate fewer short-lived objects.
But, again: running into any of these cases is rare. I've seen it happen, so I can't claim that it never happens, but I do insist that it's not something that most applications ever need to worry about.
Also, a general reminder: the first step of improving performance is to use a profiler to find out where time is being spent, so that you can focus your efforts on an area where you might actually see a difference.
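For example (with app.js standing in for your entry point), Node has built-in profiling you can start from:

node --cpu-prof app.js
# writes a .cpuprofile file you can open in Chrome DevTools' Performance panel

node --prof app.js
node --prof-process isolate-*.log > processed.txt
# --prof writes a V8 tick log; --prof-process turns it into a readable summary
# (assumes a single isolate-*.log in the current directory)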
How do I tell V8 to start with a larger initial memory allocation (300MB instead of ~20)?
WebAssembly.Memory is one option. Making use of a resizable ArrayBuffer is another. The former only grows when grow() is called. The latter can be resized to grow, and then resized back to 0 when the given operation is complete.
Here's an example using a WebAssembly.Memory that only grows, to process real-time audio: https://github.com/guest271314/quictransport/blob/main/audioworklet-webassembly-memory-grow/quicTransportAudioWorkletMemoryGrow.js
const initial = (384 * 512 * 3) / 65536; // 3 seconds
const maximum = (384 * 512 * 60 * 60) / 65536; // 1 hour
let started = false;
let readOffset = 0;
let init = false;
const memory = new WebAssembly.Memory({
  initial,
  maximum,
  shared: true,
});
console.log(memory.buffer.byteLength, initial / 65536);
// ...
await readable.pipeTo(
  new WritableStream({
    start() {
      console.log('writable start');
    },
    async write(value, controller) {
      console.log(value, value.byteLength, memory.buffer.byteLength);
      if (readOffset + value.byteLength > memory.buffer.byteLength) {
        console.log('before grow', memory.buffer.byteLength);
        memory.grow(3);
        console.log('after grow', memory.buffer.byteLength);
      }
      const uint8_sab = new Uint8Array(memory.buffer);
      let i = 0;
      if (!init) {
        init = true;
        i = 44;
      }
      for (; i < value.buffer.byteLength; i++, readOffset++) {
        uint8_sab[readOffset] = value[i];
      }
      if (readOffset > 384 * 512 && !started) {
        started = true;
        aw.port.postMessage({
          started: true
        });
      }
    },
    close() {
      aw.port.postMessage({
        readOffset
      });
      console.log(readOffset, memory.buffer.byteLength);
    },
  },
  new ByteLengthQueuingStrategy({
    highWaterMark: 384 * 512 * 3
  })
  )
);
Here's an example that starts with a resizable ArrayBuffer of initial byte length 0, grows it as data arrives (up to a 1 MB maximum), then resizes it back to 0.
https://github.com/guest271314/NativeMessagingHosts/blob/main/nm_host.js#L48-L72
const buffer = new ArrayBuffer(0, { maxByteLength: 1024 ** 2 });
const view = new DataView(buffer);
// ...
async function* getMessage() {
  let messageLength = 0;
  let readOffset = 0;
  for await (let message of readable) {
    if (buffer.byteLength === 0 && messageLength === 0) {
      buffer.resize(4);
      for (let i = 0; i < 4; i++) {
        view.setUint8(i, message[i]);
      }
      messageLength = view.getUint32(0, true);
      message = message.subarray(4);
      buffer.resize(0);
    }
    buffer.resize(buffer.byteLength + message.length);
    for (let i = 0; i < message.length; i++, readOffset++) {
      view.setUint8(readOffset, message[i]);
    }
    if (buffer.byteLength === messageLength) {
      yield new Uint8Array(buffer);
      messageLength = 0;
      readOffset = 0;
      buffer.resize(0);
    }
  }
}
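A minimal, hypothetical consumer of that generator (assuming readable above yields Uint8Array chunks) might look like this; note that the backing buffer is resized to 0 after each yield, so copy the message if you need to keep it beyond the loop body:

(async () => {
  for await (const message of getMessage()) {
    // `message` is a length-tracking view over the resizable buffer;
    // slice() it if it must outlive this iteration
    console.log(new TextDecoder().decode(message));
  }
})();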