I do not back up system files, because my configuration backups include the list of packages currently installed.
Presumably that applies only to things installed via the repo's package manager?
Yes.
What about Windows apps installed into Wine?
Those are under the user home directory, in $HOME/.wine/.
Or user-installed stuff (say, esp-idf)? The package manager wouldn't know about those, so they would have to be reinstalled manually, wouldn't they?
Yes. I do not install such things (say, Arduino and Teensyduino, me being a Teensy user) in the system, but keep them under my own user home directory. Their binaries are launched by symlinks or scripts in $HOME/bin/, which is the first directory in my PATH. Same for stuff I pull from git. In that sense, they are "user files", just like stuff installed under Wine.
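For example, a minimal sketch of that arrangement (the tool paths and version numbers below are purely illustrative, not what I actually have installed):

    # In ~/.profile (or similar), make $HOME/bin the first PATH entry:
    export PATH="$HOME/bin:$PATH"

    # A tool kept under the home directory is then exposed by a symlink ...
    ln -s "$HOME/apps/arduino-1.8.19/arduino" "$HOME/bin/arduino"

    # ... or by a small wrapper script such as $HOME/bin/idf, containing:
    #     #!/bin/bash
    #     . "$HOME/esp/esp-idf/export.sh" >/dev/null
    #     exec idf.py "$@"
    chmod +x "$HOME/bin/idf"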
I fixed that with a bit of customisation, and it was fine. Until it wasn't. Turns out that some update to the OS (not Nextcloud - I am too clever to let that update without notice) vaped or changed something so that the cron update no longer worked. Indeed, the thing no longer exists.
Hell, that should not happen. System updates should not overwrite customized configuration files without asking, and package managers should complain loudly if trying to remove a package whose configuration directory has user-customized files.
The likeliest explanation is accidentally uninstalling anacron (and losing /etc/anacrontab), but otherwise it sounds like a serious bug.
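If you want to check whether that is what happened, something like this should tell you (on a Debian derivative; the "nextcloud" grep is just a guess at what your cron job was called):

    dpkg -s anacron                          # is the package still installed?
    ls -l /etc/anacrontab                    # does the system-wide table still exist?
    ls /etc/cron.daily/ /etc/cron.weekly/    # are the expected job scripts still there?
    sudo grep -ri nextcloud /etc/cron* /var/spool/cron/ 2>/dev/null   # any leftover entries?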
On servers with nginx or Apache, I do use a completely custom configuration directory for the service, instead of the /etc/apache2/ or /etc/nginx/ default in Debian derivatives. This is to ensure that system updates do not affect the service configuration at all. It is an exception, though.
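A rough sketch of one way to do that with nginx on a systemd-based Debian derivative (the /srv/nginx/ path is only an example, and not necessarily exactly how I have it set up); the same idea works for Apache with its -f option:

    # Create a systemd drop-in that points nginx at the custom configuration:
    sudo systemctl edit nginx
    # ... and in the editor that opens, add:
    #     [Service]
    #     ExecStart=
    #     ExecStart=/usr/sbin/nginx -c /srv/nginx/nginx.conf -g 'daemon on; master_process on;'
    # (ExecStartPre and ExecReload may need the same -c override.)
    sudo systemctl restart nginx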
These sort of things happen and, as you say, periodically one has to figure stuff out and take the opportunity to update (or block from updating!), etc.
In my experience, very rarely; but our workflows and the tools we use are completely different, which I think explains the difference.
(Using "disposable" virtual machines for experiments is one way I keep my main working environment "clean", and it reduces the likelihood of such events significantly.)
So that's where I am coming from. I want to know I can recover as quickly and perfectly as possible from an actual disaster, and then as a lower priority I can think of icing like how to undelete a fat-finger moment, or compare this version to last week's, etc.
It sounds like LVM snapshots would work best for you, with both / (root) and /home on the same logical volume.
An LVM snapshot is an instant snapshot of an LVM volume. It can be mounted as a separate device, without affecting the volume it is a snapshot of, as long as the LVM Volume Group has sufficient unused disk space for it. (It is copy-on-write.) You can have as many of them as you like, you can mount them and explore and compare them, and so on.
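For example (the vg0/root names and the size are illustrative only):

    # Take an instant copy-on-write snapshot of the root logical volume,
    # reserving 10 GiB of the volume group's free space for changed blocks:
    sudo lvcreate --snapshot --size 10G --name root_snap vg0/root

    # Mount it elsewhere and explore or compare it against the live system:
    sudo mkdir -p /mnt/root_snap
    sudo mount /dev/vg0/root_snap /mnt/root_snap    # XFS needs -o nouuid
    diff -r /etc /mnt/root_snap/etc | less

    # Unmount and discard it when done:
    sudo umount /mnt/root_snap
    sudo lvremove vg0/root_snap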
You can revert the volume to any previous snapshot with just a single command.
LVM2 snapshots can also be mounted read-write, so you can make a snapshot, restore the contents of that snapshot from offline storage, and then revert the volume to it.
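A sketch of that restore workflow, again with illustrative names and paths:

    # 1. Take a snapshot of the volume to be restored (snapshots are writable):
    sudo lvcreate --snapshot --size 20G --name root_restore vg0/root

    # 2. Mount it and restore the backup into it, e.g. from a NAS:
    sudo mkdir -p /mnt/restore
    sudo mount /dev/vg0/root_restore /mnt/restore
    sudo rsync -aHAX --delete backuphost:/backups/root/ /mnt/restore/
    sudo umount /mnt/restore

    # 3. Merge the snapshot back into the origin volume; this is the
    #    single-command revert.  For an in-use volume such as /, the
    #    merge is deferred until its next activation (i.e. a reboot):
    sudo lvconvert --merge vg0/root_restore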
Unfortunately, I don't know of any GUI backup tools that do that.
It also only works if one tells the installer to use LVM at system installation time, and gives the LVM volume group much more disk space than the logical volumes actually need (so that there is always free space available for the snapshots).
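You can see how much slack a volume group has at any time with, for example:

    sudo vgs -o vg_name,vg_size,vg_free       # unallocated space usable for snapshots
    sudo lvs -o lv_name,lv_size,data_percent  # how full existing snapshots are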
However, based on your description, I suspect Timeshift and Back in Time might work for you. If you have a NAS running Linux that you store the backups to, you might wish to try out Bacula.
(I do not use them myself, and have not installed them. I've only seen them mentioned and read their descriptions.)