I have some doubts
I am trying to compile a relatively big C++ project on a machine that only has 256 Mbyte of RAM (don't question why, it's a crazy idea, I know). So I have two swap partitions:
# swapon -s
Filename Type Size Used Priority
/dev/hda1 partition 249944 7648 -1
/dev/hdc1 partition 7543084 0 -2
250Mbyte on a device, and 7.5Gbyte on a second device.
So, 250M + 7.5G should be more than enough even if g++ needs 1 Gbyte of RAM to compile stuff, right?
Well ...
cc1plus: internal compiler error: Segmentation fault
g++ didn't think so, and it crashed complaining it did not have enough RAM :wtf:
I turned off the two swap partitions and re-enabled them in reverse order
# swapon -s
Filename Type Size Used Priority
/dev/hdc1 partition 7543084 0 -1
/dev/hda1 partition 249944 7648 -2
And this time it worked :D
yes, but ..... why? :-//
/dev/hdc1 partition 7543084 1155648 -1
I monitored "swapon -s | grep hdc1" and noticed this during the g++ execution.
The max swap usage was ~1 Gbyte.
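(As an aside, not part of the original post: instead of re-running swapon -s by hand, the same monitoring can be done with a small program that polls /proc/meminfo and remembers the peak swap usage while g++ runs in another terminal. This is only a rough sketch:)

// Rough sketch (illustration only, not from the original post): poll
// /proc/meminfo once per second and remember the peak swap usage.
// Plain C++98, so it also builds with an old compiler such as gcc 4.3.4.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <unistd.h>   // sleep()

// Read a "Key:   value kB" entry from /proc/meminfo; returns the value in kB,
// or -1 if the key is not found.
static long read_meminfo_kb(const std::string& key) {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        if (line.compare(0, key.size() + 1, key + ":") == 0) {
            std::istringstream iss(line.substr(key.size() + 1));
            long value = -1;
            iss >> value;
            return value;
        }
    }
    return -1;
}

int main() {
    long peak_used_kb = 0;
    for (;;) {
        const long total = read_meminfo_kb("SwapTotal");
        const long free_kb = read_meminfo_kb("SwapFree");
        if (total < 0 || free_kb < 0) break;   // no swap info available
        const long used = total - free_kb;
        if (used > peak_used_kb) peak_used_kb = used;
        std::cout << "swap used: " << used << " kB (peak so far: "
                  << peak_used_kb << " kB)" << std::endl;
        sleep(1);
    }
    return 0;
}

The SwapTotal/SwapFree values come from the same place swapon -s and free read them, so the numbers should match what swapon -s reports, summed over the devices.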
Without debugging the whole situation
What should I "debug"?
Kernel? It's an old 2.6.39 (I can't update it for several reasons), but on this platform it has an uptime of 3 years.
Gcc? It's v4.3.4, and it has proven to work: more than 1994 packages compiled with it.
G++? It's v4.3.4, and it has proven to work with this large project in the second configuration shown above (and it's able, for example, to recompile cmake).
Does it work with a valid configuration? -2 is not a legal value for swap priority.(1) The minimum is -1.
That's interesting since I simply did
swapoff /dev/hda1
swapoff /dev/hdc1
swapon /dev/hdc1
swapon /dev/hda1
I have no idea where the "-2" came from, but I am surprised the kernel was not using the second swap partition and triggered an "out of memory" condition, which caused g++ to crash.
Another option is that, since gcc is being run on a system that is no longer in its normal mode of operation, some internal bug in gcc that depends on a race condition is triggered.
What do you mean with "no longer in the normal mode of operation"?
What should I "debug"?
Kernel? It's an old 2.6.39 (I can't update it for several reasons), but on this platform it has an uptime of 3 years.
Gcc? It's v4.3.4, and it has proven to work: more than 1994 packages compiled with it.
G++? It's v4.3.4, and it has proven to work with this large project in the second configuration shown above (and it's able, for example, to recompile cmake).
Unfortunately all of the above, which makes it unfeasible. Unless you are extremely bored and have A LOT of free time. ;)
That's interesting since I simply did
swapoff /dev/hda1
swapoff /dev/hdc1
swapon /dev/hdc1
swapon /dev/hda1
Therefore the system probably assigned it, and it's not a problem (as stated in the note).
What do you mean with "no longer in the normal mode of operation"?
The system is already operating in an out-of-memory condition. Abusing swap to pretend there is more memory does not change that.
I can imagine one more option: what devices are the swap partitions located on? Aren't they by any chance some small flash media, like an SD card? Perhaps they are damaged and gcc received a mangled page. Such media are not suitable for heavy I/O.
# echo 2 > /proc/sys/vm/overcommit_memory
# swapoff /dev/hdc1
# ./dymem
trying to allocate 1.0 of memory .. success
trying to allocate 3.0 of memory .. success
trying to allocate 7.0 of memory .. success
trying to allocate 15.0 of memory .. success
trying to allocate 31.0 of memory .. success
trying to allocate 63.0 of memory .. success
trying to allocate 127.0 of memory .. success
trying to allocate 255.0 of memory .. success
trying to allocate 511.0 of memory .. success
trying to allocate 1023.0 of memory .. success
trying to allocate 1.1023k of memory .. success
trying to allocate 3.1023k of memory .. success
trying to allocate 7.1023k of memory .. success
trying to allocate 15.1023k of memory .. success
trying to allocate 31.1023k of memory .. success
trying to allocate 63.1023k of memory .. success
trying to allocate 127.1023k of memory .. success
trying to allocate 255.1023k of memory .. success
trying to allocate 511.1023k of memory .. success
trying to allocate 1023.1023k of memory .. success
trying to allocate 1.1048575M of memory .. success
trying to allocate 3.1048575M of memory .. success
trying to allocate 7.1048575M of memory .. success
trying to allocate 15.1048575M of memory .. failure
# swapon /dev/hdc1
# ./dymem
trying to allocate 1.0 of memory .. success
trying to allocate 3.0 of memory .. success
trying to allocate 7.0 of memory .. success
trying to allocate 15.0 of memory .. success
trying to allocate 31.0 of memory .. success
trying to allocate 63.0 of memory .. success
trying to allocate 127.0 of memory .. success
trying to allocate 255.0 of memory .. success
trying to allocate 511.0 of memory .. success
trying to allocate 1023.0 of memory .. success
trying to allocate 1.1023k of memory .. success
trying to allocate 3.1023k of memory .. success
trying to allocate 7.1023k of memory .. success
trying to allocate 15.1023k of memory .. success
trying to allocate 31.1023k of memory .. success
trying to allocate 63.1023k of memory .. success
trying to allocate 127.1023k of memory .. success
trying to allocate 255.1023k of memory .. success
trying to allocate 511.1023k of memory .. success
trying to allocate 1023.1023k of memory .. success
trying to allocate 1.1048575M of memory .. success
trying to allocate 3.1048575M of memory .. success
trying to allocate 7.1048575M of memory .. success
trying to allocate 15.1048575M of memory .. success
trying to allocate 31.1048575M of memory .. success
trying to allocate 63.1048575M of memory .. success
trying to allocate 127.1048575M of memory .. success
trying to allocate 255.1048575M of memory .. success
trying to allocate 511.1048575M of memory .. success
trying to allocate 1023.1048575M of memory .. success
trying to allocate 1.1073741823G of memory .. failure
Mem: 61932k total, 57060k used, 8564k free, 14084k buffers
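(The source of dymem is not shown in the thread, but a probe along these lines would behave the same way under strict overcommit, i.e. with 2 written to /proc/sys/vm/overcommit_memory: malloc() starts failing as soon as the commit limit, swap plus a configurable fraction of RAM, is exceeded. This is a hypothetical sketch; the sizes and output format are guesses.)

// Hypothetical sketch of a dymem-like probe (the real dymem source is not
// shown in the thread; names and output format are guesses). It allocates
// progressively larger blocks (2^n - 1 bytes), touches every page, and
// stops at the first failure.
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    for (unsigned n = 1; n < sizeof(size_t) * 8; ++n) {
        const size_t size = (static_cast<size_t>(1) << n) - 1;
        std::printf("trying to allocate %lu bytes of memory .. ",
                    static_cast<unsigned long>(size));
        char* block = static_cast<char*>(std::malloc(size));
        if (block == NULL) {
            std::printf("failure\n");
            return 1;
        }
        std::memset(block, 0, size);   // touch the pages so they are committed
        std::printf("success\n");
        std::free(block);
    }
    return 0;
}

With strict overcommit the allocation call itself fails; with the default heuristic overcommit, malloc() may succeed and the problem only surfaces later, when the pages are actually touched and the OOM killer steps in.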
What's the purpose of swap? Isn't it to give the system more memory than is physically available?
In the 80s and, if someone used Windows from the 9x line, in the 90s. ;)
Swap space acts as the backing store for pages that do not have one "naturally".
Modern operating systems decide which pages should be in RAM to maximize performance. A large portion of them is rarely accessed or not used at all. Keeping them in RAM is a waste of that precious resource, so the OS kicks such pages out of RAM(1) and uses that space to put something more useful there. But here comes the problem: while e.g. .text sections and resources embedded in a binary are only copies of data stored in some persistent memory (HDD, SSD, …) and are duplicated in RAM for speed, anonymous pages are not. Their content can't simply be erased. Providing swap lets the system create copies elsewhere if needed, and makes anonymous pages the same class of citizens as other pages: if they have a copy, they can also be evicted.
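(A small illustration of that distinction, mine rather than from the post: a file-backed mapping always has the file itself as its backing store, while an anonymous mapping has nowhere to go except swap.)

// Small illustration (not from the post): file-backed pages have a natural
// backing store (the file itself), anonymous pages do not, so only swap can
// hold their contents if the kernel wants to evict them from RAM.
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const size_t page = 4096;

    // File-backed mapping: the page content already lives on disk, so the
    // kernel can discard the in-RAM copy at any time and re-read it later.
    int fd = open("/bin/sh", O_RDONLY);
    void* file_backed = mmap(NULL, page, PROT_READ, MAP_PRIVATE, fd, 0);

    // Anonymous mapping: exists only in RAM. Without swap there is no place
    // to write it out, so it stays in RAM for the life of the process.
    void* anonymous = mmap(NULL, page, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (file_backed == MAP_FAILED || anonymous == MAP_FAILED) {
        std::perror("mmap");
        return 1;
    }
    std::memset(anonymous, 0x42, page);   // touch it: now it really exists

    std::printf("file-backed page at %p, anonymous page at %p\n",
                file_backed, anonymous);

    munmap(file_backed, page);
    munmap(anonymous, page);
    close(fd);
    return 0;
}

Until it is touched, the anonymous page doesn't even consume physical memory; once written, either RAM or swap has to hold it for as long as the mapping exists.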
Swapping does not only happen when anonymous pages use so much RAM that they "overflow into swap". No, even in a perfectly healthy system that has only a small portion of its RAM used for program data, you will still see swap usage grow. That's because there are other things that go into RAM, and usually they are much more useful than the initialization code of a process one started 2 weeks earlier. As an example, free -h from my machine:
total used free shared buff/cache available
Mem: 7.8Gi 1.6Gi 152Mi 124Mi 6.0Gi 5.8Gi
Swap: 3.8Gi 214Mi 3.6Gi
Swap is more important for a system with little RAM usage than for one whose RAM is exhausted.
If programs start actively using more memory than there is RAM(2), the system is no longer able to work normally. It will remain stable, but performance may drop by a few orders of magnitude, latency and latency variance grow considerably, some operations start to time out and, in the worst case, many processes may become completely unresponsive for a long time. If it's a one-time situation on your personal computer, or the nature of the workload allows it(3), you may try to wait the problems out. But if it's going to be a repetitive task, or a situation that occurs outside a personal context, be aware that you are outside the specs.
There is a bit more technical discussion of the topic in Chris Down's "In defence of swap" (https://chrisdown.name/2018/01/02/in-defence-of-swap.html). The article is primarily his input into the discussion on whether swap should even be used nowadays, but it also contains some technical background.
____
(1) Details are a bit more complex.
(2) In reality even less than the whole RAM.
(3) For example a few times I was stitching a large panorama: the locality of reference was very high, so no problems were observed.