There's no problem in normal operation yet, but someone who just uses all the defaults and doesn't care what happens to their system would simply be using iOS on a phone, and would not be on this forum.
That is exactly why you were asked to produce a specific example of the issue you are experiencing: there is no issue.
You seem to be making some assumptions about how things work and, when confronted with reality, you reject it as “done wrong”. Until you make an effort to climb out of the perspective you have forced upon yourself, you may never understand the situation.
Further, some OSes (e.g. Windows, Solaris) don't have this OOM-killer thing at all, and they work fine.
Because they expect programs to commit suicide. Instead of being killed, they are forced to kill themselves; chosen even more randomly than the OOM killer would choose, because instead of a likely culprit being eliminated and some processes being protected, any unlucky process that happens to ask for more memory at that moment faces that fate.
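To make the “suicide” concrete: on a system with strict commit accounting, the failure shows up as an ordinary allocation call returning an error, and the process itself has to decide to back out or die. A minimal sketch in C (the 8 GiB request is purely hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Hypothetical size; on a strict-accounting system this call returns
       NULL as soon as the commit limit is reached, regardless of whether
       this process is the real memory hog or just unlucky. */
    size_t n = (size_t)8 * 1024 * 1024 * 1024;
    char *p = malloc(n);

    if (p == NULL) {
        /* This is the "suicide": the process has to notice the failure
           and back out of the operation (or exit) on its own. */
        fprintf(stderr, "malloc(%zu) failed: out of commit\n", n);
        return 1;
    }

    memset(p, 0, n);   /* use the memory */
    free(p);
    return 0;
}
```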
I don't know which came first, but it seems that in the Linux world programs habitually commit insane amounts of memory, and the kernel tolerates and connives with them. Good cooperation, right?
You have some weird ideas about what overcommit is, what it does and why it is being used.
Judging by the tone and content, I would guess you must have read some articles by self-appointed prophets of overcommit catastrophe, who cover their lack of effort to understand the situation with unparalleled zeal and strong opinions. Isn't that true?
The kernel doesn't “tolerate” anything; there is no bad behavior to be tolerated. Overcommit is an intentional feature that allows efficient memory management: it works faster, it wastes less memory, and it improves system stability. That's the reason behind it.
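A minimal sketch of what that looks like in practice (Linux, plain C; the 4 GiB size and the assumption of the default vm.overcommit_memory = 0 policy are just for illustration): the mapping is granted immediately, but physical pages are only provided as they are first written.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Print VmSize (address space granted) and VmRSS (physical pages
   actually provided) from /proc/self/status. */
static void show_mem(const char *label)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    printf("--- %s ---\n", label);
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmSize", 6) == 0 || strncmp(line, "VmRSS", 5) == 0)
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    size_t len = (size_t)4 * 1024 * 1024 * 1024;  /* 4 GiB, hypothetical */

    /* With overcommit the kernel grants the address space without
       reserving physical pages; how large a request is accepted
       depends on /proc/sys/vm/overcommit_memory. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    show_mem("after mmap, nothing touched");

    /* Only now are physical pages provided, one page per first write. */
    for (size_t i = 0; i < 100; i++)
        p[i * 4096] = 1;
    show_mem("after touching 100 pages");

    munmap(p, len);
    return 0;
}
```

On a typical system VmSize jumps by the full mapping size right after the mmap, while VmRSS grows only by the handful of pages that were actually touched.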
This is one particular thing Windows does better: by not overcommitting, it forces developers to claim only what they actually need. And if OOM does happen, it simply refuses to give out more. Much better and easier.
Windows, just like Linux, does overcommit. If it can't provide a page, it may even show you a blue screen of death, murdering everything instead of a single process. Why have you never heard of that? I suspect there are two causes. First: while Linux allows some flexibility with the OOM killer and overcommit policies, Windows offers no configuration and therefore no relevant documentation to raise people's fears. “Out of sight, out of mind.” Second: due to how memory provision is performed in Windows (compared to Linux), different kinds of malfunction lead to that condition. In Windows it's typically removal or corruption of the medium containing the page file, which people intuitively consider an “abnormal situation”, so they are fine with their process failing in such a scenario. In Linux those situations are equally abnormal, and no one should expect processes to work reliably when they happen, but understanding why that is so requires more mental effort, and many people don't notice how bad those situations are.

Being able to explicitly issue MEM_COMMIT under Windows, which does give a reliable error status, may also create the false impression that accessing the page cannot fail. But that is not so: MEM_COMMIT doesn't cause a page to be provided, as one might expect; it only updates counters so that commit requests from other processes will fail. The actual provision is done exactly the same way as in Linux: on first page access.
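A minimal sketch of that last point (Windows, C, using VirtualAlloc and GetProcessMemoryInfo, so link against psapi; the 1 GiB size is arbitrary): the commit charge rises at MEM_COMMIT time, and that is where a failure would be reported, but the working set, i.e. the physical pages actually provided, grows only once the pages are touched.

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

/* Print commit charge (PagefileUsage) and physical pages in use
   (WorkingSetSize) for the current process. */
static void show_mem(const char *label)
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof pmc))
        printf("%-26s commit: %zu KiB, working set: %zu KiB\n",
               label, pmc.PagefileUsage / 1024, pmc.WorkingSetSize / 1024);
}

int main(void)
{
    SIZE_T len = (SIZE_T)1024 * 1024 * 1024;  /* 1 GiB, hypothetical size */

    show_mem("before VirtualAlloc");

    /* MEM_COMMIT updates the system commit charge (and can fail here if
       the commit limit is exhausted) but does not provide any pages. */
    char *p = VirtualAlloc(NULL, len, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p) {
        fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }
    show_mem("after MEM_COMMIT");

    /* Physical pages are provided only now, on first access,
       exactly as with overcommitted memory on Linux. */
    for (SIZE_T i = 0; i < len; i += 4096)
        p[i] = 1;
    show_mem("after touching the pages");

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```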