vm.overcommit_memory = 2, vm.overcommit_ratio = 0
Do you know this experience: a program, in my case Subversion, has a bug and starts to eat memory. You cannot interact with your system any more; you can only watch the memory and swap fill up (if you have a display for that). Then it takes a while until the kernel kills the (hopefully right) program. Things start to move again once they have been paged back in from swap, and you can continue your work. Or the kernel somehow does not kill the right program, and you are screwed.
During regular work, though, your swap is hardly ever needed. Only after a while are a few megabytes of never-used RAM swapped out, to make space for using the RAM as a file cache.
I’d like the kernel not to give out more memory to processes than there is physical RAM, because that is plenty for normal work, and if more is requested, something is most likely wrong. But I still want the kernel to use the rest of the memory for caching files, and also to move some unused RAM pages to swap.
Unfortunately, there does not seem to be a setting that achieves this directly. But if your swap happens to be about the same size as your RAM, then these settings, when written to /etc/sysctl.d/vm.conf, will do the job:
vm.overcommit_memory = 2
vm.overcommit_ratio = 0
The first one makes sure that the kernel does not hand out more memory than you tell it to, and the second makes sure that it only hands out (swap size + 0 * RAM size), i.e. exactly the swap size, to processes.
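You can check the resulting limit against what the kernel has already handed out. A minimal sketch (under overcommit_memory = 2, the commit limit is swap size plus overcommit_ratio percent of RAM):

# Load the settings without rebooting:
sysctl -p /etc/sysctl.d/vm.conf

# CommitLimit is the total the kernel is willing to hand out,
# Committed_AS is what has been handed out so far:
grep -E 'CommitLimit|Committed_AS' /proc/meminfo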
Beware that things go wrong if you happen to have no swap any more for some reason, because then the kernel will hand out zero memory! Therefore, you need to make sure that these settings are applied after swap has been enabled. On a Debian machine, rename /etc/rcS.d/S30procps to /etc/rcS.d/S37procps, as sketched below. None of this would be necessary if you could also specify the ratio of swap memory to be counted: then I could set that to zero and the RAM ratio to 100.
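A small sketch, assuming a SysV-init Debian where an earlier rcS script enables swap:

# Make the procps script, which applies the sysctl settings,
# run after swap has been turned on:
mv /etc/rcS.d/S30procps /etc/rcS.d/S37procps

# Verify that swap is actually active:
swapon -s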
If anyone knows better ways to achieve this, I’m interested to hear them.
Update: For my qemu-based armel package builder this does not seem to be enough; I’m now running it with overcommit_ratio = 50.
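For reference, that corresponds to this variant of the file above (same assumption of swap being about the size of RAM):

vm.overcommit_memory = 2
vm.overcommit_ratio = 50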
Comments
Maybe a ulimit for the virtual RAM size is a better way to prevent programs from using up all memory...
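A sketch of that approach (ulimit -v takes kilobytes and only affects the shell and its children, not the whole system):

# Cap the virtual address space of this shell session at 2 GiB;
# a runaway allocation in a child then fails with ENOMEM instead
# of dragging the whole machine into swap:
ulimit -v 2097152
svn update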
Using the normal heuristics, I push this over my normal RAM size without free -m reporting less than 300M free (used in buffers), which means that there is much more RAM left yet.
vm.overcommit_memory = 2
vm.overcommit_ratio = 100
Note that I have heard it can still run out of RAM because of some overhead with memory pages.
2. Do you understand that this way all memory can never be used? There is always more committed than actually used, and by limiting the committed amount to the RAM size you make the usable amount much less than that.
I think it's better to disable overcommitting but keep swap, to allow using all of the RAM and at the same time avoid the OOM killer taking down a normal program when another one runs amok.
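To see the gap this comment is talking about, compare what processes have been promised with what the machine actually has (a quick check, not a benchmark):

# Committed_AS counts address space processes have been promised,
# usually far more than they actually touch; with the commit limit
# pinned at RAM size, that difference is RAM that can never be used:
grep -E 'MemTotal|MemFree|Committed_AS' /proc/meminfo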
Have something to say? You can post a comment by sending an e-mail to me at <mail@joachim-breitner.de>, and I will include it here.
It is probably a good idea, but the vm.overcommit_ratio parameter must be subjected to individual tuning. So a more conservative setting, with some heuristics/tuning on top to save the user that work, would be better; for me it's not worth it.