Those who know me are aware that my rating of Linux keeps dropping, and the main cause of the drop is its Memory Management.
Just as in linux-2.0, the latest version of the kernel has the bad habit of showing its "killing instinct".
For performance reasons, memory allocation in Linux is "overestimated". I won't go too deep here, but it seems that by default Linux OVERBOOKS memory the way airlines do with their planes. As long as the statistics work in our favor we can gain substantially from this; otherwise, once the available space runs out, the cute penguin becomes a bit harsher than an airline: it KILLS.
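To see the overbooking with your own eyes, here is a minimal C sketch of mine (assuming a 64-bit box with the default heuristic; exact results vary with kernel version and free memory):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t chunk = (size_t)1 << 30;               /* ask for 1 GB at a time */
    int gb = 0;
    /* keep asking without ever touching the pages: untouched memory stays
       free, so each single request can pass the kernel's heuristic check */
    while (gb < 1024 && malloc(chunk) != NULL)
        gb++;
    printf("the kernel promised me %d GB\n", gb); /* often more than RAM+swap */
    return 0;
}

On my understanding of the heuristic, the promises can pile up well beyond physical memory, exactly like an airline selling seat 301 on a 300-seat plane.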
This duty is carried out by the OOM-Killer: when the memory management algorithms can no longer do their job, the kernel turns into a hitman, picks more or less at random among the most inconvenient processes, and makes them an offer they can't refuse: SIGKILL.
This strategy may be affordable when the target of the KILL is an Apache child; with other kinds of applications (such as an RDBMS) it can become a real problem.
The episode that sincerely hurt me was when the OOM-Killer, probably triggered by a memory-hungry browser, SIGKILLed the screensaver and left my workstation at the disposal of the general audience. Coming back from the coffee machine to find your SSH terminals on critical servers sitting there in the foreground is not the kind of experience I would care to repeat.
This allocation strategy was not born with Linux; it was already used on commercial Unices: IBM AIX 2.3.xx ran into the same kind of problems, and Big Blue retraced its steps.
Basically, the Linux kernel developers not only reinvented the wheel, they reinvented the square wheel.
On the newsgroups you can find the most varied information, and many correctly suggest how to disable the OOM-Killer with the command: sysctl -w vm.oom-kill=0
Sadly, from that moment on the OOM-Killer is indeed switched off, but the overbooking is still there: in the best case the SIGKILL will simply reach the first process that touches its already allocated memory, instead of that process getting a NULL back from its malloc request.
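A hedged companion sketch to the one above: assuming the kernel has overpromised (say, with vm.overcommit_memory=1 and an allocation bigger than RAM plus swap), the sentence is served only when the pages are actually written; with vm.overcommit_memory=2 the very same program would get a civilized NULL instead.

#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t huge = (size_t)8 << 30;  /* hypothetical 8 GB, assumed > RAM+swap here */
    char *p = malloc(huge);
    if (p == NULL)
        return 1;                   /* the polite outcome: overcommit refused us  */
    memset(p, 1, huge);             /* touching the pages is what gets us killed  */
    free(p);
    return 0;
}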
What is not well known, or hard to dig up, are the different settings of the kernel variable vm.overcommit_memory, which selects among the available allocation strategies:
#define OVERCOMMIT_GUESS 0
#define OVERCOMMIT_ALWAYS 1
#define OVERCOMMIT_NEVER 2
The first option is a heuristic algorithm: following its statistics, Linux will try to manage the overbooking of memory as best it can, but if we are unlucky...
The second option turns your kernel into a door-to-door salesman that will tell you memory is always available: a million MB for everyone!
The third option allocates memory only as long as it is actually available and never overcommits.
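You can check which of the three strategies is currently active with cat /proc/sys/vm/overcommit_memory (or sysctl vm.overcommit_memory) and switch it on the fly with sysctl -w, as below.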
Summing up, sysctl -w vm.overcommit_memory=2 is IMHO the most conservative option and the best candidate for a DEFAULT setting.
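To make the choice survive a reboot, on most distributions the same setting can be written in /etc/sysctl.conf as the line vm.overcommit_memory = 2 and reloaded with sysctl -p.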
Hoping that application developers will learn to allocate only the memory they need and to handle errors correctly, I wish that at least the Linux developers will mend their ways.
He who lives on hope dies in despair.
PS
Option 2 should be used together with sysctl -w vm.overcommit_ratio=0 to make the penguin more civilized.
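For the record, in mode 2 the commit limit is computed as swap + RAM * overcommit_ratio / 100, so on a hypothetical box with 2 GB of RAM and 1 GB of swap, overcommit_ratio=0 leaves a limit of just 1 GB, the swap alone; CommitLimit and Committed_AS in /proc/meminfo show the numbers live.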
In 2.6.28:
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=0
?
thanks