I would like to estimate the amount of allocatable memory on a system. At the outset, the question is very platform-dependent and next to impossible to answer in general. The concrete use case hopefully makes a sensible answer possible.

Before diving in, I note the possibility of an X-Y problem. The real question is guessing a sensible degree of parallelism for compiling code.
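For concreteness, here is a minimal sketch of how such an estimate could feed the job count; guess_jobs and the two-GiB-per-job figure are illustrative assumptions, not measured values:

```python
import os

# Assumed peak memory per compile job; highly project-dependent (hypothetical value).
BYTES_PER_JOB = 2 * 1024**3

def guess_jobs(allocatable_bytes):
    """Bound a make/ninja -j value by both CPU count and estimated RAM."""
    cpu_bound = os.cpu_count() or 1
    mem_bound = max(1, allocatable_bytes // BYTES_PER_JOB)
    return min(cpu_bound, mem_bound)
```

The open question is how to obtain allocatable_bytes in the first place.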

  • Practically speaking, I'd like to know how much RAM can be allocated before something equivalent to Linux's out-of-memory killer kicks in. In a 32-bit environment, the RAM may be spread across multiple processes to avoid running into address-space limits.
  • The method should work on Linux, but ideally there should be a fallback that also works on GNU/Hurd. An approach that works on both in principle is searching for MemTotal in /proc/meminfo (see the sketch after this list). Other Unix-like kernels would be nice to support, but are less important here.
  • On Linux, we tend to partition machines into containers, whose resources can be limited using control groups. For instance, setting memory.max on the memory controller of a container's cgroup2 causes the OOM killer to be invoked sooner rather than later; the sketch below also reads this limit.
  • It is unclear how this would apply to memory ballooning in virtual machines.
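To make the first two points above concrete, here is a minimal sketch that reads MemTotal from /proc/meminfo and clamps it with the cgroup2 memory.max of the current process. The function names are my own, and the cgroup handling assumes a pure cgroup2 hierarchy mounted at /sys/fs/cgroup:

```python
#!/usr/bin/env python3
"""Rough estimate of allocatable RAM: MemTotal clamped by a cgroup2 limit."""
import os
import re

def meminfo_total_bytes(path="/proc/meminfo"):
    """Return MemTotal in bytes, or None if the file or field is missing."""
    try:
        with open(path) as f:
            for line in f:
                m = re.match(r"MemTotal:\s+(\d+)\s*kB", line)
                if m:
                    return int(m.group(1)) * 1024
    except OSError:
        pass
    return None

def cgroup2_memory_max_bytes():
    """Return the memory.max limit of the calling process's cgroup,
    or None if it is unlimited ("max") or not available."""
    try:
        with open("/proc/self/cgroup") as f:
            for line in f:
                # On a pure cgroup2 hierarchy the single entry looks like "0::/path".
                if line.startswith("0::"):
                    cg = line.split("::", 1)[1].strip()
                    limit_path = os.path.join("/sys/fs/cgroup", cg.lstrip("/"), "memory.max")
                    with open(limit_path) as lf:
                        raw = lf.read().strip()
                    return None if raw == "max" else int(raw)
    except OSError:
        pass  # no cgroup2, or the memory controller is not enabled here
    return None

def estimate_allocatable_bytes():
    """Take the smaller of the system total and the cgroup limit, if any."""
    candidates = [b for b in (meminfo_total_bytes(), cgroup2_memory_max_bytes())
                  if b is not None]
    return min(candidates) if candidates else None

if __name__ == "__main__":
    print(estimate_allocatable_bytes())
```

Taking the minimum errs on the conservative side, but a tighter memory.max set on an ancestor cgroup would still be missed; walking the hierarchy upward is omitted here, so the result remains a guess.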

Obviously, this can only be a guess and reality will differ. Still, coming up with a guess would allow for more efficient resource consumption and avoid builds being killed several hours in.

The baseline is MemTotal in /proc/meminfo. Can we do better?
