I bought a 2TB portable drive. It looks like it is 2 drives stacked in one package. Anyone confirm this? Are they in some RAID config?
Amazon Glacier says, "Yes."
As far as spinning drives go, I've had 4 times as many externals fail as internals. I no longer trust them at all. I personally still have two in use, but they will be phased out this year. I now use standard internal drives and put them in external enclosures myself.
I just had an external drive die suddenly. I am now copying it back from a backup drive. :phew:
Can recommend Macrium Reflect (https://www.macrium.com/) because it's reliable, quick and simple to operate. Saved my bacon several times. For image restore it can either boot from a thumbdrive or from the boot partition on your system drive; the latter is useful if (for example) your Windows partition is corrupt and you just want to quickly restore it. I keep full backups on separate systems: a NAS, a discrete RAID disk array box, and copies spread around the other 4 PCs on the network. Extra backup drives installed in every workstation for this purpose are cheap insurance against one or other machine vapourising, but it does load the network down when the data transfer happens (in the small hours when not being used, mostly). For real-time protection I use SuperFlexible File Synchroniser (new version re-named: https://www.syncovery.com/), which is set to protect specific directories/files daily or hourly - things like email folders and working directories for project programs.

I'd second that - I have it set to do a nightly image backup of the main drive to a second drive and to a NAS. Nice thing is you can mount an image file to pull individual files out.
Another utility is Versionbackup (no longer sold), which keeps zips of changed files.
NAS 2x2TB raid 0 configured, backed up weekly.
Yes, I seem to have worse luck with external drives. Don't know why though. I don't tend to move them or subject them to shock. I can only imagine it is hotter in those external enclosures.

Google does very extensive analysis of the regular hard drives they use in data centers and found temperature does not influence drive life, unless it's quite excessive.
I intend to refill the enclosure of the failed drive with a new drive.
Takes experience. You have to lose the contents of a hard disk to understand the importance of backups, and you have to have a failed restore due to a faulty backup to understand the importance of backing up properly!

Yes, testing your backups is as important as making them. That's not theoretical either; I've seen an example of a narrow escape just last month.
I'm using BackBlaze now for HD backups.

Looks like Backblaze have really good prices. I do worry about how long a company can last when it has data storage costs 1/4 of its competitors'. At that price, they would probably have to have disks last at least two years before they even have a chance of making money. Don't know how they do it, especially if they need to have redundancy. As long as they keep up the prices though, they are a fabulous deal.
I've got about 200GB of data and I'm using the following:
1. Time Machine offline on a 512GB SSD for lazy local backups.
2. rdiff-backup hourly to an HP MicroServer with FreeBSD + ZFS mirror on two 1 TiB disks.
3. rsync that to an EBS volume on an AWS instance nightly.
4. Everything major (pictures/documents/irrecoverable stuff) is synced to iCloud as well and is on my phone handset.
I've just set the whole stack above up this week. All automated. AWS + EBS is quite expensive but cheaper than the other options. You don't get screwed hard until you actually have to do a restore - it costs money to pull 200GB out of AWS.

Is your system able to deal with creeping corruption? All backups seem fairly immediate. That can be an issue when something has been encrypting your files on the down low, or corrupt memory causes random and intermittent corruption of files.
rdiff-backup is "permanently incremental", i.e. you can roll back to any point in time. It checksums everything. If anything is corrupted or you get cryptolocker, then it will diff the corruption as a file increment, so you can go backwards in time until the file is fixed. Same with Time Machine.

I assume you have a way of preventing malware from eating up all your incrementals and spreading throughout your network? Do you have something that's disconnected from the rest of the network somehow?
Corruption is handled via multiple target media as well.
We have been using it for production backups for 15 years now, and ZFS for about 5 years. It's amazingly solid. The trick is to buy good disks and use good operating systems.
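(For illustration only: the point-in-time roll-back described above boils down to a few rdiff-backup commands. The hostnames and paths below are made up, so treat this as a sketch rather than the poster's actual setup.)

  # hourly push of the home directory to the backup box
  rdiff-backup /home/me backupbox::/backups/home

  # list the increments, i.e. the points in time you can roll back to
  rdiff-backup --list-increments backupbox::/backups/home

  # pull back a single file as it was 10 days ago, before it was corrupted or encrypted
  rdiff-backup --restore-as-of 10D backupbox::/backups/home/projects/notes.txt /tmp/notes-10-days-ago.txt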
The good thing about using a ZFS or BTRFS NAS is that snapshots are instant to create and become read-only. The snapshots are part of the filesystem - not files on a drive. I have had a cryptovirus start to infect a NAS box at a company I sometimes help, and in just an hour it had managed to encrypt about 20GB of files on the NAS. The snapshots from the previous day were all fine, but since then I have disabled all SMB Windows network file sharing to the NAS boxes. Cryptoviruses do know how to find and attack network shares. They also target any attached USB drives, so a backup USB drive permanently plugged into a PC is pretty useless. It is much more valuable to attack and encrypt files on a server than on a workstation.
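(Again just a sketch, using standard OpenZFS commands with a made-up pool/dataset name, to show why those read-only snapshots are such a good cryptovirus defence.)

  # take a dated snapshot of the share - instant, read-only, part of the filesystem
  zfs snapshot tank/share@nightly-$(date +%Y-%m-%d)

  # see what snapshots exist
  zfs list -t snapshot -r tank/share

  # copy a clean file out of an older snapshot after the live share has been encrypted
  cp /tank/share/.zfs/snapshot/nightly-2019-01-01/docs/report.odt /tank/share/docs/report.odt

  # or roll the whole dataset back to that snapshot (discards everything written after it)
  zfs rollback -r tank/share@nightly-2019-01-01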
Filesystem Size Used Avail Use% Mounted on
/dev/md0 5.4T 3.5T 1.7T 69% /volumes/raid1
/dev/md1 6.3T 5.4T 571G 91% /volumes/raid2
/dev/md3 7.2T 1.8T 5.1T 26% /volumes/raid3
This is the problem with RAID - particularly as the disks get bigger. RAID 1 (mirroring) is the only RAID I have ever chosen to use. I could be talked into using RAID 10. To me, if you want to use RAID 5/6, you may as well be looking at multiple RAID systems set up to run like a SAN array to give hardware redundancy. Otherwise RAID 5/6 needs a good backup, and you have to be able to accept the downtime while you are rebuilding the repaired or replaced RAID drive from the backup.
I lost a 16TB RAID6 to creeping corruption. In my instance I had a dicey SATA controller that gave read errors under heavy load on 2 ports, but never enough to even register. Where it did the damage was stripe read-modify-write cycles on parity where the chunk size was 1M (so 8M stripes). Over probably 7 months it slowly corrupted 2 drives on the array until it caused noticeable damage. By then I'd lost a considerable amount of (mostly replaceable) data and corrupted a whole archive of years of digital photos that were not backed up.
ZFS would probably have mitigated that, but it was strictly Solaris only at the time. BTRFS hadn't even been thought of.
Along with effective and tested backups, I now run periodic md5 checks over the array and keep an eye on the array scrub mismatch_cnt (which is how I noticed the issue in the first place).
The day ZFS hits the mainline kernel I might switch.
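(A sketch of what that periodic checking can look like on a Linux md array - the md5 manifest idea plus the scrub mismatch counter. The array name and paths below are examples only, not the poster's actual layout.)

  # build a checksum manifest of the static archive once...
  find /volumes/raid1/photos -type f -exec md5sum {} + > /root/photos.md5

  # ...then re-verify it periodically; silent corruption shows up as a mismatch
  md5sum -c --quiet /root/photos.md5 || echo "WARNING: checksum mismatch on the photo archive"

  # kick off an md scrub and check the mismatch counter afterwards
  echo check > /sys/block/md0/md/sync_action
  cat /sys/block/md0/md/mismatch_cnt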
It's a good practice to back up the NAS ... on a simple drive, but back it up.
BTRFS is probably not as RAM hungry as ZFS and is pretty solid.
I would suspect they are counting on everyone backing up the same OS files, apps, porn, commercial music and movies etc and only having to store one copy of them.
Apologies, I always get the RAID 0 and 1 numbers the wrong way around - it is 1. :palm:
Whhhhyyyyyy.
It's easy enough to remember - AID 0 is not RAID, and describes exactly how much help you'll get when (not if) it fails. :)
Yes, testing your backups is as important as making them. That's not theoretical either, I've seen an example of a narrow escape just last month.
Could not agree more. Backing up and not testing that
A. they can be restored, and
B. what is restored isn't digital gibberish,
is a complete waste of time.
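(One way to automate that kind of check - restore a sample into a scratch directory and compare it against the live data. The paths here are placeholders, and the restore step depends entirely on whatever backup tool is in use.)

  # 1. restore a sample of last night's backup into a scratch area
  mkdir -p /tmp/restore-test
  #    ... run your backup tool's restore into /tmp/restore-test here ...

  # 2. compare the restored copy against the originals, byte for byte
  diff -r /tmp/restore-test /home/me/documents \
    && echo "sample restore matches the originals" \
    || echo "restore differs or failed - the backup is suspect"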
I've seen it a few times where corporates haven't had good backups for, in one case, years.
I worked for a company once. Well, I say worked for, but I accidentally landed the chief technical monkey position because I needed to eat. Turned out they had been blindly cycling tapes for about 2 years. When I reviewed the steaming turd I was landed with, I noticed that the external VS160 DLT drive wasn't even connected to the server. The SCSI cable was down the back of the rack. I think someone had moved stuff around and left it like that. Still, the tapes ejected and got inserted, so they had the illusion of a backup.
They actually didn't notice they weren't running and didn't check for errors. Someone just went into the cupboard in the office, took the tape out and put another one in once a day. What happened between two consecutive events wasn't their problem.

Related to this, I once suffered a major data loss while working at a large company (close to 100k employees at the time), where we used Unix machines at our desktops (HP-UX based) and all file storage was, of course, over the network on servers. One day a filesystem got corrupted, affecting the home directory of myself and about 40 colleagues. The RAID didn't help since it was not a disk failure but a filesystem issue, so they had to go to the tapes to restore. You can see where this is going. Well, it turns out the daily and weekly incremental backups weren't working correctly (nothing was copied), so they needed to fall back to the most recent complete backup... from about 7 months earlier. Luckily the code repository was separate, but the affected users lost 7 months' worth of data in their home directories, including e-mails, documentation for active projects, personal research notes, unsubmitted code, etc. Though not quite as bad as simply losing everything, getting "rolled back" by several months is painful. The subsequent investigation revealed a great many other filesystems also running completely without any recent backup.
That's what you get when you treat or pay people like drones. You get people that do their job and their job exactly.
I've seen something similar. Someone must have gotten fed up with all the errors the backup software threw and checked or unchecked the right boxes to make them go away. The problem was that the backup jobs were reported to have run completely, but nowhere was it mentioned that half the data was glossed over due to errors, which were no longer visible due to the checkboxes. So you get a "job successfully completed" message in the end, as all the required steps had been kicked off, and you could blissfully drink your morning coffee.
Sometimes drones are all you can get.

Then you'd better make sure you train them properly, or give them the right checklists. If they all dance their appointed monkey dance, it's no problem, but someone needs to oversee the bigger picture.
That's what you get when you treat or pay people like drones.

Nice soundbite, but I've had this when the 'drone' has been the owner of the business (and rich to boot). He wasn't dumb and wanted it to work, so I don't think this example (just one of many) fits your bon mot.

He wasn't treated or paid like a drone, so that seems to work out. Though I never claimed it fits your situation, or every situation even. Only the laws of nature get to do that, and we're not even sure of it being so.
He wasn't treated or paid like a drone, so that seems to work out.
Hardly! Still have the religiously changed but still blank tapes, which I'd class as not working out :)

I meant the statement itself was working out, as it didn't apply to the boss. It only applies to drones, which this boss isn't. Though I don't think finding an exception changes much. I'm not trying to write the laws of physics.
No, my point wasn't that this was an exception to the rule, but that perhaps the rule is arse about face. That is, the supposition is that treating someone like a drone gets you bad service, which, on the face of it, seems reasonable. But the ultimate drone has to be a computer, and that works out just fine (except when it doesn't). Typically, it fails to work because you've neglected to cover some situation in its programming, and maybe that's the problem with the drone thing. Make something explicitly part of the job and it's covered.
So maybe the fix is to treat your drones as drones, being explicit in what they should do, and realising that if you haven't 'programmed' a situation it won't be covered. A failure is then a programming issue (i.e. you've failed to think things through enough to let your drone know how to react to a bad situation). If you tell 'em to change a tape every day and they do that but haven't checked to see if the backup has been done, that isn't their fault but yours for not making that check part of the job.
Which is not to say or imply that a 'drone' here is a brainless moron.
Then you'd better make sure you train them properly, or give them the right checklists. If they all dance their appointed monkey dance, it's no problem, but someone needs to oversee the bigger picture.
I'm sure I addressed the rest of your comment a few posts back.
Do I have all my backups up to date?
How often do you test your backups?

I don't test the individual PC backups as there is no need. Nothing of great importance could be lost, due to other backups providing backups :o
Having many copies of the original copy doesn't protect you from a lot of failure modes. You really do need to test whether you can recover relevant files from your backups on a regular basis. If you don't test, you don't know what you have. Your stacks of copies could be full of perfect files full of perfect garbage.
The NAS unit I am perfectly happy with, as far as ANY (relatively) large RAID 5 system goes. The RAID is scrubbed every week. Recovery from a single drive failure on a large RAID 5 array is always finger-biting no matter what machine is used. The main RAID is backed up at a folder/file level - NOT as an image. The PCs' data is backed up to the NAS at a folder/file level (which is why the user directories are on a separate drive). The only 'sector image backups' are the individual PCs to the GFS SD units - and again, I can safely recover that from other areas. The GFS system always means there are 3 independent image copies with little date difference. (And again, this only applies to the PCs/laptops.)
All the data is therefore stored on a file-by-file basis with multiple backups. There is simply no need to test. It is utterly pointless. Image backups are a different animal, which is why I limit those to PCs (and VMs, actually).
All external drives are checked occasionally for any error using manufacturer's applications (non destructive).
There is simply no need to 'test' anything. Relying on RAID - even with drive failure 'fault tolerance' - is very, very bad karma. RAID is not failsafe. It MUST be backed up. There is a very real chance of RAID not recovering effectively, and this is directly related to the amount of data our drives can store. I am 99.9999999% happy with the reliability and robustness of my systems. Delirious in fact. :) If it was a commercial enterprise I would 'consider' using real-time off-site storage/backup; however, it isn't and I don't. Off-site storage 'by hand' is usually quite sufficient even for a moderately sized commercial enterprise - depending of course on the format of the data and number of copies stored - and a GFS (grandfather-father-son) method should be used in that case.
I have a Synology appliance set for RAID 1 that I back up to. I use Macrium Reflect for backups. I switched from Acronis (the business one, not the personal one) and dropped it because it was an absolute piece of junk that gave me headaches all the time. Macrium has been headache free.

I believe Synology units are very well respected! Do they not have some form of real-time 'sync' application? That may be useful. I only have experience of the QNAP Pro units for home use. The last commercial NAS units I used were Compaq ProLiant external RAID units linked via Compaq failover (serial port heartbeat link) - and this wasn't really a true NAS as it used a Windows NT4 :palm: server to link to the network. NT was an excellent system with odd-numbered service packs, providing you wore adult pampers.... that dates it ... :P
I also have a SATA dock and I occasionally take an image on a drive and keep it as an offsite backup.
I used to have a backup plan that did a full backup every 2 weeks, differentials every 2 days, and incrementals every couple of hours. Now I just do a full backup every week or so. At some point I'll kick off a differential every couple of days, but I haven't gotten around to it on my new system yet.
Not at all :).
What happens when your RAM goes corrupt and slowly corrupts data over time, which shows itself after a while when the errors have built up to critical mass? All the garbage is copied perfectly down the line, again and again, and your last clean backups will be months or even years old. Not testing anything is setting yourself up for failure. You're not the first and won't be the last. In the end, there has to be a monkey checking to see if the recovery process output lines up with the input.
It is not the gun, it's the gunner...
Static files are subject to hashing when written; as they are static by nature, any corruption will be flagged by the hash check. A total non-issue.
Dynamic files are also subject to hashing, but that is of no use as a direct integrity check - the parent application should maintain this integrity check. Self-corruption is therefore a non-issue providing multiple date/state copies exist. A total non-issue.
Dynamic files and the issue of 'user' corruption - invalid or incorrect data entry, deletion, or 'insert other PBCAK here' - are managed by standard dynamic-file methodology and also a far, FAR longer TBO regime (time before overwrite, if you are unfamiliar).
Dynamic files that are not self-managed for data integrity are also subject to extended TBO. Depending on needs, pockets and regulations, TBO can be up to 5 years or more (7 in a lot of cases in the UK).
Again, with my system and my regime I am 99.999999% happy and secure. It is a total NON issue. Simply unnecessary. :-+ .
What happens? If you follow the perfectly standard and normal advice I have listed above - nothing at all is what happens ;) (PS - love the :box: bit .. :) :-+)
Besides, anyone not nervous about his backups is complacent and will fall eventually :box:
Ahh, the NHS - they pay *really* well when they lose critical patient data because their backups failed to restore sensible data.

100% agree ;D Since they closed the specialist NHS Information Authority (NHSIA), of which I was a part, and farmed it out to muppets to 'save a few quid', they have had HUGE issues, many of which are never reported as patients would run from the hospitals..... :scared:
I think I went to Cornwall for two weeks in 5 star on one of those or was that the private cosmetic surgery place, I forget...
That's funny. I have some family members who work close to NHS IT. It's a shitfest and a half. Glad I work in private sector finance.
They need to bring it all in house and run it like they ran Spine 2.