
Veeam on Linux


PlainName:
Need to sort out Linux backups. This is a Windows house, so I'm not au fait with the Linux situation, other than that whenever I ask about it, no one has a solution...

Veeam catches my eye. Looks reasonable (except... 3 GB of RAM needed to do a bare-metal restore? Blimey), but having browsed as much documentation as I can find and cope with, I can't find an answer to my incrementals question: can it cope with a changed backup set?

Specifically, suppose you run off a full and then an incremental. That's a coherent backup set and should restore without a problem. Now change the destination media and run off a full and incremental. Still coherent and restorable. But change back to the first set and run off an incremental: is that restorable?

There is a change-tracker process which determines what needs to be backed up since the last backup job, but it's not clear whether that information is tied to the backup set or just to the job regardless of destination. In the first case, switching media wouldn't be a problem; in the second, it surely would.

Anyone know the answer? It matters because I never back up to my last backup media (since a problem would trash not just the current backup but the last known good one as well), so incrementals (or, preferably, diffs) must track changes against the set and not against local job history.

PKTKS:

There is no single serve-all answer.

On *NIX there are literally dozens of ways to back up.

They vary widely: replication, compression, incrementals...

Historically, TAR is the tool that preserves just about everything,
so expect TAR to show up several times.

Modern storage alternatives pose the second relevant question.
Depending on those two answers, type_of_backup and storage,
*some* solutions are far easier than others.

Expect to do real *NIX stuff like piping, chaining several filters
and command options, as usually happens in unattended jobs
with proper scheduling (and, in large pools, likely cgroups).
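
For example, a minimal sketch of the snapshot-file mechanism in GNU tar (the paths and set names here are invented for illustration). Keeping one snapshot file per backup set is what ties incrementals to that set rather than to whatever job ran last:

    #!/bin/sh
    # Illustrative paths; adjust to your own layout.
    SET=setA                       # which backup set this run belongs to
    SNAP=/var/backups/$SET.snar    # per-set snapshot (change-tracking) file
    DEST=/mnt/$SET                 # per-set destination media

    # First run with a fresh snapshot file produces a level-0 (full) backup.
    tar --create --listed-incremental="$SNAP" \
        --file="$DEST/full.tar" /home

    # Later runs against the same snapshot file produce incrementals
    # relative to the previous run of *this set*.
    tar --create --listed-incremental="$SNAP" \
        --file="$DEST/incr-$(date +%F).tar" /home

To restore, extract the full archive and then each incremental in order (the GNU tar manual suggests --listed-incremental=/dev/null on extraction). One known trick if you want differentials rather than incrementals: save a copy of the snapshot file right after the full, and put that copy back before each subsequent run, so every archive captures everything changed since the full.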

Paul

PlainName:
I'm not after a 'serve all' solution, just a simple and robust one for my problem: restore to bare metal. It would be nice if the backup could be browsed, but that's not essential.

Veeam looks like it might do that, but it is seriously broken if you can only use a single run of incrementals. Actually, I'm surprised it won't do diffs. Does any Linux backup strategy do diffs, or is that omission just a Veeam thing?

PKTKS:
You have not posted enough about your type_of_backup.

Binary only?  User stuff?  FULL file system?  Server or Workstation?

Compressed?
Does it require time tags?

Or just a vanilla shadow of some branch?

*NIX can mount and image and put tons of different stuff into a single branch...
That can range from different filesystem types to different DEVICE TYPE subsystems.

Without that kind of rationale, chances are you will have issues from choosing the wrong tool,
when a combination of scripting and some built-in tool is actually enough.

*NIX already has almost everything you need ready, as long as you can bear to use it.

For casual use I would just suggest rsync.
More specialized stuff may require "imaging" or TAR, to be safe;
even more specialized stuff may require combining imaging with TAGs,
and you can control tags from several different apps.
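
A rough sketch of the rsync route (the destination layout here is invented); --link-dest gives cheap point-in-time snapshots by hard-linking files that haven't changed since the previous run:

    #!/bin/sh
    # Illustrative paths; adjust to your own layout.
    SRC=/home/                     # trailing slash: copy the contents of /home
    DST=/mnt/backup
    TODAY=$DST/$(date +%F)

    # -a archive, -A ACLs, -X xattrs; unchanged files are hard-linked
    # against the previous snapshot instead of being copied again.
    # (On the first run "latest" won't exist; rsync warns and does a full copy.)
    rsync -aAX --delete --link-dest="$DST/latest" "$SRC" "$TODAY"

    # Repoint "latest" at the snapshot just made.
    ln -sfn "$TODAY" "$DST/latest"

Each dated directory is then a complete, browsable tree, while disk usage only grows by what actually changed.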

Paul

bd139:
I suggest you run a fucking mile from Veeam. It's a shit show of pain. I spent half of last year getting people to remove it from things.

My preferred solution is separating configuration and data explicitly. Keep configuration in GitHub and deploy with Ansible. Data goes to one of the following destinations:

1. Amazon S3. Enable bucket versioning and lifecycle rules to push data into cheaper storage as it ages (see the sketch after this list). Various things out there can push it up depending on your requirements and data volume. Beware: it costs nothing beyond storage to put data up there, but getting it back out is expensive.

2. rdiff-backup, either into a local volume or over SSH to another node. It works as reverse incrementals: the "last backup" is effectively a full one, and previous backups are kept as incremental history (rather than the traditional way round of a full followed by incrementals). The "state" is held by the destination rather than the source, so you can cycle targets fine and it will resolve the differences between source and destination state. The last successful backup is always a 100% consistent directory tree as well, so you can recover it with "cp" alone (sketch after this list).
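
A rough sketch of the S3 route with the AWS CLI (the bucket name "my-backups" and the source path are invented):

    # One-time: turn on versioning so overwritten or deleted
    # objects are retained as previous versions.
    aws s3api put-bucket-versioning --bucket my-backups \
        --versioning-configuration Status=Enabled

    # Recurring: push the data up; only changed files are transferred.
    aws s3 sync /srv/data s3://my-backups/data/

Lifecycle rules (set with put-bucket-lifecycle-configuration) can then transition old versions to colder, cheaper storage classes as they age.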
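And a minimal rdiff-backup sketch using the classic command forms (host and paths invented), showing the back up / inspect / restore cycle described above:

    # Back up /home over SSH; the destination holds a full mirror
    # of the latest state plus reverse diffs for the history.
    rdiff-backup /home backuphost::/srv/backups/home

    # List the increments available on the destination.
    rdiff-backup --list-increments backuphost::/srv/backups/home

    # Restore the tree as it stood ten days ago.
    rdiff-backup --restore-as-of 10D \
        backuphost::/srv/backups/home /tmp/home-10-days-ago

Because the latest mirror is a plain directory tree, the most recent backup can also be recovered with nothing but cp, as noted above.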

Both scale to hundreds of gigabytes and millions of files. I’ve used both in production for over 5 years each on absolutely critical data.

Sounds like rdiff-backup is what you want.
