I would like to estimate the amount of allocatable memory on a system. At first glance, the question is very platform-dependent and next to impossible to answer in general. The concrete use case hopefully makes a sensible answer possible.
Before diving in, I note the possibility of an X-Y problem: the real question is how to guess a sensible degree of parallelism for compiling code.
- Practically speaking, I'd like to know how much RAM can be allocated before something equivalent to Linux's out-of-memory killer kicks in. In a 32-bit environment, the RAM may be distributed across multiple processes to avoid running into address-space limits.
- The method should work on Linux, but ideally there should be a fallback that also works on GNU/Hurd. An approach that works on both in principle is reading `MemTotal` from `/proc/meminfo`. Other unixoid kernels would be nice to support, but are less important here.
- On Linux, we tend to partition machines into containers. The resources of a container can be limited using control groups. For instance, setting `memory.max` on the memory controller of a container's cgroup2 causes the OOM killer to be invoked sooner rather than later (a sketch of querying both limits follows this list).
- It is unclear how any of this would interact with memory ballooning in virtual machines.
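To make the above concrete, here is a minimal Python sketch of what I have in mind (the function names and the combination rule are my own choices, not an established API): it reads `MemTotal` from `/proc/meminfo` and, where available, caps it by the cgroup2 `memory.max` limit of the calling process.

```python
import os
import re

def meminfo_total():
    """Return MemTotal from /proc/meminfo in bytes, or None if unavailable.

    /proc/meminfo exists on Linux and on GNU/Hurd (via its procfs
    translator), so this serves as the portable baseline.
    """
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                m = re.match(r"MemTotal:\s+(\d+)\s+kB", line)
                if m:
                    return int(m.group(1)) * 1024
    except OSError:
        pass
    return None

def cgroup2_memory_max():
    """Return the cgroup2 memory.max limit for this process in bytes.

    Returns None when no limit applies ("max"), when the unified
    hierarchy is not mounted, or when the files are absent (e.g. Hurd).
    Note: an ancestor cgroup may impose a tighter limit; a fuller
    version would walk up the hierarchy and take the minimum.
    """
    try:
        with open("/proc/self/cgroup") as f:
            for line in f:
                # cgroup2 entries look like "0::/path/to/cgroup"
                if line.startswith("0::"):
                    path = line.split("::", 1)[1].strip()
                    limit_file = os.path.join(
                        "/sys/fs/cgroup", path.lstrip("/"), "memory.max")
                    with open(limit_file) as g:
                        value = g.read().strip()
                    return None if value == "max" else int(value)
    except OSError:
        pass
    return None

def allocatable_guess():
    """Combine both sources; the tighter limit wins."""
    candidates = [x for x in (meminfo_total(), cgroup2_memory_max())
                  if x is not None]
    return min(candidates) if candidates else None
```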
Obviously, this can only be a guess and reality will differ. However, coming up with a guess would allow for more efficient resource use and help avoid builds getting killed several hours in.
The baseline is `MemTotal` in `/proc/meminfo`. Can we do better?
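For completeness, this is roughly how I would turn such an estimate into a parallelism guess; the per-job figure of 1 GiB is a made-up placeholder, not a measurement, and `allocatable_guess` refers to the sketch above.

```python
import os

GIB = 1024 ** 3

def guess_jobs(mem_per_job=1 * GIB):
    """Guess a job count: never more jobs than CPUs, and never more
    than the estimated allocatable memory divided by the expected peak
    memory of one compile job (mem_per_job is a placeholder value)."""
    mem = allocatable_guess()  # from the sketch above
    cpus = os.cpu_count() or 1
    if mem is None:
        return cpus  # no memory information: fall back to CPU count
    return max(1, min(cpus, mem // mem_per_job))
```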