
Several questions about Linux and memory management


Hi guys.
One thing I really hate about Linux is the OOM killer and memory overcommit. It makes me feel unsettled, never knowing which process will get suddenly murdered in the next second. (an accurate joke here:
I tried to disable overcommit, but then programs won't start even when there's more than 1GB of free RAM. I guess all Linux programs just make use of this stupid design and commit as much as the developer can ever imagine.
So I came up with another idea: keep overcommit disabled, use ZRAM as swap, and set the overcommit ratio to 100. The commit limit will then be much larger, with actual space behind it in case programs use it, and no wasted HDD space at all.
Given the average ZRAM compression ratio of about 3:1, I should set the swap space limit to 3x the free RAM (or less) when idle.
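For reference, these are the knobs involved in that idea; a sketch only, with the values mirroring the post above rather than being a recommendation:

```shell
# Sketch of the overcommit knobs (values mirror the idea above, not a
# recommendation). With vm.overcommit_memory=2 the kernel enforces
#   CommitLimit = swap + (vm.overcommit_ratio / 100) * RAM
sudo sysctl vm.overcommit_memory=2    # strict accounting: refuse over-limit allocations
sudo sysctl vm.overcommit_ratio=100   # count 100% of RAM toward the limit (default is 50)
grep -E 'CommitLimit|Committed_AS' /proc/meminfo   # the limit vs. what is committed now
```

These settings do not persist across reboots unless written to /etc/sysctl.d/.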
But I still have some questions:
1. Will my method work? Any caveats?
2. Does anyone know the average ratio of used to committed memory for common workloads?
3. What's the standard/easy way to set up ZRAM and make it the only swap space?
I'm still rather new to Linux, hoping someone can help.
Thank you all.
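On question 3, here is a minimal sketch of one way to do it with util-linux's zramctl. The device name, size, and compression algorithm are assumptions (24G follows the 3:1 estimate above for an assumed 8 GiB of RAM):

```shell
# Sketch: set up a zram device as the only swap space.
# Assumes the zram kernel module and util-linux's zramctl are available.
sudo swapoff -a                                    # retire any existing swap first
sudo modprobe zram
sudo zramctl --find --size 24G --algorithm zstd    # prints the device it grabbed, e.g. /dev/zram0
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0
swapon --show                                      # verify zram0 is now the only swap
```

This does not survive a reboot on its own; to make it permanent, also remove any swap entries from /etc/fstab and use something like systemd's zram-generator, which many distros package for exactly this purpose.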

So you are rather new to Linux, but you already know it has a bad design flaw, and furthermore you have outsmarted all the designers of an OS that is widely used on desktops, servers and mobile? And you think you could fix it with a few simple changes, yet for some reason those changes aren't the default?

You are aware that the link you posted is from 2004? The OOM killer was indeed infamous, but are things still that bad?

I suggest we take a step back, and you start by describing what your actual problem is when running with defaults. Are you seeing out-of-memory errors? Are you seeing random apps killed?

Then we could work towards solving the actual problem, if it exists at all.

So, instead of letting the OOM killer eliminate the offending process as soon as possible, you are willing to bring your whole system to its knees by forcing a memory leak into swap. After which the OOM killer will kick in anyway and kill the process. What’s the point?

The mention of an HDD in the context of normal system operation suggests something even worse: a misunderstanding of what swap is, treating it as "free RAM". Which it is not, at least not since ancient dinosaurs were roaming around.

As Siwastaja asked: what’s the actual problem you are facing right now? Exactly.

Over-commit causes no problems in normal situations and allows efficient memory management. It does not introduce any new uncertainty; at most it shifts existing uncertainty into the future. In either case that uncertainty should not be a concern. In a healthy system memory exhaustion doesn't normally happen, except for a runaway leak, in which case killing the misbehaving process is exactly what you want.
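For anyone who wants to see the accounting for themselves: the kernel exports both the commit limit and what is currently committed in /proc/meminfo. A small sketch (Linux-only; the field names are standard, the parsing helper is mine):

```python
import os

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of integer kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if key and parts:
            info[key.strip()] = int(parts[0])  # first token is the value in kB
    return info

if __name__ == "__main__" and os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    # Committed_AS is what programs have reserved; CommitLimit is only
    # enforced when vm.overcommit_memory=2 (strict accounting).
    print("CommitLimit :", info.get("CommitLimit"), "kB")
    print("Committed_AS:", info.get("Committed_AS"), "kB")
```

On a typical desktop Committed_AS can comfortably exceed physical RAM while actual usage stays far below it, which is the whole point of overcommit.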

Thank you guys for replying.
There's no problem in normal operation yet, but someone who just uses all the defaults and doesn't care what happens to their system would simply use iOS on a phone, and wouldn't be on this forum.
Furthermore, some OSes (e.g. Windows, Solaris) don't have an OOM killer at all, and they work fine.

I have a hard time deciphering that last post.

Are you saying that, by surfing the web on Linux under default overcommit and swap settings, the system and/or important applications crash due to out-of-memory so regularly (or you are expecting to see that soon, just not yet) that users switch to another OS because of it?

Did I get that correctly?

All systems must deal with actual memory running out one way or another. It's always a difficult situation, and really the only good way of dealing with it is to have enough memory for the task at hand, and to fix buggy programs.

It is true Linux used a fairly "stupid" way of dealing with this, some 15 years ago. I haven't seen any discussion about it in maybe the last 5 years, so I assume they fixed it to behave better a long time ago. I.e., the OOM killer now has enough intelligence to have a good chance of killing the application causing the problem, not some random app.
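For reference, the heuristic is no longer a black box: the kernel exposes its per-process "badness" score through procfs, and you can bias it yourself. A sketch (the procfs paths are real; the PID is hypothetical):

```shell
# Sketch: the modern OOM killer picks its victim by a badness score,
# visible and tunable per process (the PID here is hypothetical).
pid=1234
cat /proc/$pid/oom_score                         # current badness as the kernel sees it
echo -500 | sudo tee /proc/$pid/oom_score_adj    # bias it; range is -1000 (never kill) to 1000
```

Daemons such as sshd commonly ship with a negative oom_score_adj precisely so a runaway user process gets killed before the thing you need to log in and fix it.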

I remember Windows also behaving stupidly, such as the whole system crashing in out-of-memory situations, but that was also a long time ago, and I think they have pretty much fixed it as well. It may be that at some point in history the behavior was better in Windows than in Linux, but I'm not sure. Currently, I think basic memory management works quite well in all major operating systems (Windows, Linux, macOS), because it's such a fundamental part, and while non-trivial, it's not rocket science either.

