Which of these is more suitable for handling file read/write operations in a file-manager type of Node server application?
Is one faster than the other? Speed is very important because the app is supposed to be able to handle many user requests at the same time.
Asked Sep 2, 2020 at 16:12 by Alex. 4 Answers.
What makes streams unique is that, instead of a program reading a file into memory all at once the traditional way, streams read chunks of data piece by piece and process the content without keeping it all in memory.
This makes streams really powerful when working with large amounts of data: for example, a file can be larger than your free memory, making it impossible to read the whole file into memory in order to process it. That's where streams come to the rescue!
Using streams to process smaller chunks of data makes it possible to read much larger files.
Streams basically provide two major advantages compared to other data handling methods:
- Memory efficiency: you don’t need to load large amounts of data in memory before you are able to process it
- Time efficiency: it takes significantly less time to start processing data, since you can begin as soon as the first chunk is available rather than waiting until the entire payload has been transmitted (the sketch below illustrates both points)
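A minimal sketch of that difference, assuming a hypothetical big.log file: the buffered version must load the whole file before we can touch it, while the streamed version only ever holds one chunk (fs read streams default to 64 KiB chunks) in memory at a time.
const fs = require('fs');

// Buffered: the entire file must fit in memory before the callback runs.
fs.readFile('big.log', (err, buffer) => {
  if (err) throw err;
  console.log(`buffered read: ${buffer.length} bytes`);
});

// Streamed: data arrives chunk by chunk, so memory usage stays roughly
// constant no matter how large the file is.
let total = 0;
fs.createReadStream('big.log')
  .on('data', chunk => { total += chunk.length; })
  .on('end', () => console.log(`streamed read: ${total} bytes`))
  .on('error', err => console.error(err));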
Which of these is more suitable for handling file read/write operations in a file-manager type of Node server application?
Both are usable for a Node server application. However, both the request and the response in Node's HTTP implementation are stream-based, which means the stream-based approach is more flexible when it comes to dealing with large I/O operations.
Is one faster than the other? Speed is very important because the app is supposed to be able to handle many user requests at the same time.
There's strong evidence that streams are better in terms of both memory usage and time. I'll borrow some examples from Node.js Design Patterns - Second Edition: Master best practices to build modular and scalable server-side web applications, Chapter 5 - Coding with Streams.
Buffer-approach:
const fs = require('fs');
const zlib = require('zlib');
const file = process.argv[2];

// Read the whole file into memory, gzip the resulting buffer,
// then write the compressed buffer back to disk.
fs.readFile(file, (err, buffer) => {
  zlib.gzip(buffer, (err, buffer) => {
    fs.writeFile(file + '.gz', buffer, err => {
      console.log('File successfully compressed');
    });
  });
});
The result will be fine for normal files, but when trying to compress a file over 1 GB in size, we'll run into this error:
RangeError: File size is greater than possible Buffer: 0x3FFFFFFF bytes
With the same file, using the stream approach:
const fs = require('fs');
const zlib = require('zlib');
const file = process.argv[2];

// Pipe the data through: read stream -> gzip transform -> write stream.
fs.createReadStream(file)
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream(file + '.gz'))
  .on('finish', () => console.log('File successfully compressed'));
Imagine a Node process that has to handle 100 concurrent requests, each uploading a file of up to 100 MB. With the buffer approach, the process would hold all of those file buffers in memory at once and the server would quickly exhaust its memory.
For time efficiency, streams let the data be processed chunk by chunk, so processing can start as soon as the first chunk arrives, which definitely speeds things up.
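As a rough sketch of that upload scenario (the endpoint and destination path are made up for illustration), the incoming request body is itself a readable stream, so it can be piped straight to disk and each request only ever holds one chunk in memory at a time:
const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
  // Pipe the request body directly into a write stream instead of
  // buffering the whole upload in memory.
  const dest = fs.createWriteStream(path.join('/tmp', `upload-${Date.now()}`));
  req.pipe(dest);
  dest.on('finish', () => res.end('upload complete\n'));
  dest.on('error', () => { res.statusCode = 500; res.end('write failed\n'); });
}).listen(3000);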
Disclaimer: most of the examples are from Node.js Design Patterns - Second Edition, Chapter 5. I don't own any of this material, and it is reproduced here for educational purposes only.
The fs and fs.promises modules are equally suitable. The fs module provides its operations in both synchronous and asynchronous forms, while fs.promises is asynchronous only; for asynchronous work the two differ only in how they handle the completion of an operation. Where fs uses callbacks to do this, fs.promises obviously uses promises.
What it comes down to is programming style. For one, promises can help you avoid the classic callback hell.
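To illustrate the style difference (the file name here is hypothetical), the same read written with fs callbacks and with fs.promises plus async/await:
const fs = require('fs');
const fsp = require('fs').promises;

// Callback style: completion is signalled through the (err, data) callback.
fs.readFile('config.json', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log(data.length);
});

// Promise style: the same operation reads top to bottom with async/await.
async function readConfig() {
  const data = await fsp.readFile('config.json', 'utf8');
  console.log(data.length);
}
readConfig().catch(console.error);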
Sorry, but your question is a bit ambiguous: it is not clear whether the doubt is about the promise-based version of fs versus the standard callback-based version, or about fs.createReadStream/fs.createWriteStream versus fs.open.
Even though I think the question is about the second, let me spend a few words on the first.
There is absolutely no performance difference between the two; the promise version probably requires a bit more memory and CPU, but it is really negligible. From an engineering point of view, the promise version opens the door to the async/await syntax, which can help save development time (it doesn't save performance, but it saves something).
Once again, the differences between accessing files with the two approaches come down to factors other than performance: the file descriptor approach is closer to the OS implementation of file access, but your range of action is limited to fs.read and fs.write; the stream approach uses the Stream over-structure, which will certainly require some more memory and CPU (again negligible) but offers many powerful interfaces that save developer time. In other answers I read about the buffering problem, but that's not true: with the file descriptor approach we are not forced to read/write files all at once, we are free to chunk our I/O operations into multiple fs.read/fs.write calls.
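For example, here is a rough sketch of chunked reading through a file handle obtained from fs.promises.open, with no Stream machinery involved (the 64 KiB buffer size is an arbitrary choice):
const fsp = require('fs').promises;

// Read a file in fixed-size slices using only the file-descriptor API.
async function readInChunks(file) {
  const handle = await fsp.open(file, 'r');
  const buffer = Buffer.alloc(64 * 1024);
  try {
    let bytesRead;
    do {
      // position null reads from the current file position and advances it.
      ({ bytesRead } = await handle.read(buffer, 0, buffer.length, null));
      // process buffer.subarray(0, bytesRead) here, chunk by chunk
    } while (bytesRead > 0);
  } finally {
    await handle.close();
  }
}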
Having said that: if you only need some simple file access (a low number of read/write calls in your code), you probably don't need streams, while if you have to move data through several consumers/providers you will find many benefits in the stream interface.
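When several stages are chained together, a sketch like the one below, using stream.pipeline instead of bare .pipe calls, also propagates errors from every stage and tears down the other streams if one of them fails, which plain piping does not do for you:
const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

// Compress a file passed on the command line, with centralized error handling.
pipeline(
  fs.createReadStream(process.argv[2]),
  zlib.createGzip(),
  fs.createWriteStream(process.argv[2] + '.gz'),
  err => {
    if (err) console.error('compression failed:', err);
    else console.log('File successfully compressed');
  }
);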