Mystery process adding "options rotate" to resolv.conf file
peter-h:
Thanks 5U4GB. I've passed that back.
Fundamentally it must be difficult to make image backups of running systems, especially if the OS uses temporary files to carry interprocess messages. In Windows, whenever I make a Trueimage backup I reboot the PC afterwards, but that is a different issue to restoring such a backup. I currently have a problem elsewhere where a TI backup is not restoring to a running system so I will re-do that with a boot TI CD, to make sure the machine is not running.
FWIW, the work being done on this machine prior to the image restore was setting up listing permissions on a particular directory: I wanted it to be invisible in a directory listing that is currently visible at a specific URL. That directory contains just one file, which I wanted to keep hidden unless someone had the full path to it. I solved this easily by making it an AES-256 encrypted zip. Normally this is done with .htaccess, but IIRC nginx uses a different system.
That conf file needs to contain only 8.8.8.8, so that's been done and it has been made R/O. That seems to have solved it. Possibly some error log will appear in due course showing what has been trying to write to it :)
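For reference, in resolv.conf syntax "only 8.8.8.8" means a single nameserver line; the mystery process kept appending an "options rotate" line below it:

--- Code: ---nameserver 8.8.8.8
--- End code ---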
5U4GB:
As an example of systemd's messing with things, from some old notes:
--- Quote ---As of 18.04, systemd overrides DNS with systemd-resolved.service, which tries to plaster over the use of the standard resolver but often just breaks it, leading to DNS lookup failures, e.g. on apt-get update, with a typical error message 'unable to resolve host XXXX: Name or service not known'.
--- End quote ---
followed by something like a full A4 page of all the steps necessary to get it to stop breaking resolv.conf. One thing that comes out of those notes is to check whether resolv.conf is a symlink, in this case to ../run/resolvconf/resolv.conf, which means the problem is coming in from there.
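A minimal C sketch of that check, assuming the standard /etc/resolv.conf path (ls -l from a shell tells you the same thing):

--- Code: ---#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/etc/resolv.conf";
    struct stat st;

    /* lstat() examines the link itself, not whatever it points to. */
    if (lstat(path, &st) == -1) { perror("lstat"); return 1; }

    if (S_ISLNK(st.st_mode)) {
        char target[PATH_MAX];
        ssize_t len = readlink(path, target, sizeof target - 1);
        if (len == -1) { perror("readlink"); return 1; }
        target[len] = '\0';
        /* e.g. ../run/resolvconf/resolv.conf: a resolver manager owns it */
        printf("%s is a symlink to %s\n", path, target);
    } else {
        printf("%s is a regular file, not managed via a symlink\n", path);
    }
    return 0;
}
--- End code ---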
See also this link for another possible cause, although that one depends on the use of udhcpc, including the malware-like behaviour where even the nuclear option of using chattr +i was bypassed.
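For what it's worth, chattr +i just sets the immutable flag via an ioctl, so anything with CAP_LINUX_IMMUTABLE (in practice, root) can clear it the same way, which is presumably how it got bypassed. A minimal Linux-specific sketch of what chattr does under the hood:

--- Code: ---#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/resolv.conf", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    int flags;
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == -1) { perror("get flags"); return 1; }

    flags |= FS_IMMUTABLE_FL;   /* the bit that `chattr +i` sets */
    if (ioctl(fd, FS_IOC_SETFLAGS, &flags) == -1) { perror("set flags"); return 1; }

    close(fd);
    return 0;
}
--- End code ---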
Nominal Animal:
--- Quote from: DiTBho on January 12, 2025, 09:45:23 pm ---I have observed something similar in the past, in two occasions
--- End quote ---
Sure, but those kinds of bugs can really produce all sorts of weird effects and problems! ^-^
That is, the symptoms match only by random chance, not because they are correlated somehow.
--- Quote from: Siwastaja on January 13, 2025, 07:53:32 am ---I had this happen to me in Windows (XP, NTFS) over a decade ago: at some point an mp3 music file was corrupted such that it contained random excerpts of other mp3 files.
--- End quote ---
It really does happen, because of the different file system model in the Windows kernel. In the MS-DOS era, on FAT12/16/32 volumes, this was even more common. In both cases, it boils down to the file system model considering each allocated sector to be part of some file, whereas the Unix/POSIX/Linux inode model combines file data and metadata (but not the name) into a logical object that occupies a set of sectors.
In the inode model, when a file is extended, the kernel grabs some unused sectors, writes the new or modified data to them, and finally updates the inode to reflect the modified size and which sectors are allocated to the file. (Some filesystems also maintain a separate list or bitmap of unused sectors; this is usually updated last.) POSIX fdatasync() returns when the data written to the file has been sent to the storage device, and fsync() returns when both the data and the inode updates have been sent to the storage device. In essence, this ordering is integral to the inode model, and not just an implementation detail.
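A minimal sketch of what that means in practice for an append (the filename is hypothetical):

--- Code: ---#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* O_APPEND extends the file: new sectors get allocated as needed. */
    int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) { perror("open"); return 1; }

    const char msg[] = "event: something happened\n";
    if (write(fd, msg, sizeof msg - 1) == -1) { perror("write"); return 1; }

    /* fdatasync(): waits until the appended data, plus any metadata needed
       to read it back (e.g. the new size), has reached the device; other
       inode fields such as timestamps may still be pending. */
    if (fdatasync(fd) == -1) { perror("fdatasync"); return 1; }

    /* fsync(): waits until the data and all inode updates have reached
       the device. */
    if (fsync(fd) == -1) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}
--- End code ---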
The end result is that if the kernel crashes or power is lost in the middle of a filesystem metadata update, NTFS and FAT will generate unnamed files from the sectors that were allocated and written but not yet reflected in the directory entry or MFT record, whereas on inode-based filesystems such data is simply lost: it was essentially written to unallocated sectors.
I am not a filesystem specialist, and do not know exactly how much journaling helps (quantitatively, I mean). Anecdotally/qualitatively, it seems to significantly reduce the time window for the above to happen, making it much rarer, but it cannot completely eliminate it.
Now, the temptation is to argue about whether it would be better to make the data recoverable instead of just discarding it. The fact is, most filesystem research in the last fifty years has used the inode model as its basis, because it has proven more efficient and more robust; the time windows during which crashes lose data are much shorter than in simplistic filesystems like NTFS and FAT/exFAT. (I'm not referring to Linux-originated ones like ext2/3/4 and btrfs, but to XFS and ZFS, and to research papers on file systems at ACM and presentations at USENIX.) To simplify: where FAT and NTFS were designed essentially for single-user workstation use, inode-model filesystems were designed for large multi-user concurrent systems where robustness is a key requirement.
All this also means that writing or regularly appending or modifying important data files in a reliable manner differs between Windows and POSIXy OSes. :(
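On the POSIX side, the usual reliable-update idiom is to write a temporary file, sync it, and atomically rename() it over the old one; a minimal sketch, with hypothetical paths and abbreviated error handling:

--- Code: ---#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Replace 'path' with 'data' so that readers always see either the
   complete old contents or the complete new contents, never a mixture. */
static int replace_file(const char *dir, const char *tmp,
                        const char *path, const char *data, size_t len)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) == -1) {
        close(fd); unlink(tmp); return -1;
    }
    close(fd);

    /* rename() is atomic within a filesystem. */
    if (rename(tmp, path) == -1) { unlink(tmp); return -1; }

    /* Sync the directory so the rename itself survives a power loss. */
    int dfd = open(dir, O_RDONLY | O_DIRECTORY);
    if (dfd == -1) return -1;
    int rc = fsync(dfd);
    close(dfd);
    return rc;
}

int main(void) {
    const char content[] = "nameserver 8.8.8.8\n";
    if (replace_file(".", "resolv.conf.tmp", "resolv.conf",
                     content, sizeof content - 1) == -1) {
        perror("replace_file");
        return 1;
    }
    return 0;
}
--- End code ---

Windows has no direct equivalent of the directory fsync; a broadly similar effect needs different APIs (e.g. ReplaceFile() plus FlushFileBuffers()), which is part of why such code ends up OS-specific.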
Similar to how serial port/terminal-like devices are handled completely differently (POSIX uses termios), these differences make writing portable applications, services and libraries less appealing, because portability requires compromises affecting efficiency and robustness. (In particular, I've found all cross-platform serial libraries' POSIX implementations pretty much utter crap. For data files, most developers don't even check their low-level calls for errors anyway ("I'll add them later", or "if that error happens, the user has bigger issues to worry about", and similar inanities), so they mostly just throw data at the storage and hope it sticks, on all OSes.)
My solution to this is to ignore Windows compatibility and limit myself to POSIXy systems. It annoys me that others see this as dissing Microsoft, when it really is just about not being willing to make the compromises required for compatibility between the two completely different approaches. I'm not even saying the Windows approach is any worse, just that it is incompatible with what I like to use, unless compromises I do not want are made! (Although I am saying NTFS is simplistic and not very good compared to ext4-on-LVM, XFS, or ZFS, for the basic reasons outlined above.)
Apologies for the rant-y post. :-[
peter-h:
It was /etc/dhcp/dhclient-exit-hooks doing that .conf file "editing".
We think it started when I restored a backup, but it is not clear why resolv.conf was empty after that. And that silly script just kept adding "options rotate".
Postal2:
--- Quote from: Nominal Animal on January 13, 2025, 09:25:06 am ---... I am not a filesystem specialist, and do not know exactly how ....
--- End quote ---
I don't understand it either, but I believe that the system will protect its own files from failures; at least on Windows that is so. It then turns out that the system does not consider the file resolv.conf to be its own: the owner of the file is someone else. And we come back to the human factor again.