
Ubuntu Timeshift and SSD Wear

nctnico:

--- Quote from: Halcyon on November 08, 2019, 10:47:51 pm ---
--- Quote from: nctnico on November 04, 2019, 12:20:53 am ---
--- Quote from: Halcyon on November 04, 2019, 12:11:41 am ---
--- Quote from: German_EE on November 03, 2019, 08:40:54 am ---The system I repaired last night had 220 GB of files in the Timeshift directory; this was on a 250 GB Samsung drive that had died after six months and was now read-only. Analysis of the SMART data showed that about 20 GB a day was written to the drive and about the same amount was deleted.

--- End quote ---
Sounds like a case of premature failure which is covered by warranty. I have Intel SSDs which have been in constant service for 5+ years with no issues.

--- End quote ---
You have to be careful with this kind of anecdotal evidence. SSDs which are powered on 24/7 can use wear levelling / error detection to prevent data loss. This is a continuous process which runs in the background for as long as the SSD is on. So an SSD may seem fine, but it may lose its contents within a couple of weeks of being powered down due to wear on the flash cells. An SSD is a completely different beast compared to a hard drive. For this reason I always make backups on hard drives.

--- End quote ---

It's not exactly "anecdotal" evidence. SSDs are more resilient than spinning hard disks provided you use them "normally" and the disk is working correctly. What the OP described is not normal and indicates a fault with the disk. I even use normal consumer SSDs in servers without any issues. Wear leveling doesn't just continually shuffle data around (that would defeat the entire purpose of it); it only applies when data is written to the disk, to ensure each cell gets used evenly, and makes use of the spare area when a cell exceeds its rated write/erase cycles. Theoretically, if you write data to a cell but don't erase/rewrite it, it would last forever (provided you keep the disk periodically powered up).

--- End quote ---
No. Ultra-high-density NAND flash cells leak, and using multiple levels per cell makes things worse. In other words: you'll need to refresh a NAND flash cell every now and then. More wear means having less time between refresh cycles because the amount of leakage increases. There is a good reason NAND flash comes with a lot of extra error correction bits and the controllers use sophisticated error detection and correction algorithms.

In my experience a hard drive is very reliable for as long as you keep it cooled properly (which isn't a given in most standard cases) while it is powered. Getting 10+ years 24/7 out of a hard drive is not uncommon for me.
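
For anyone who wants to repeat the kind of SMART analysis mentioned above on their own drive, here is a minimal sketch in Python. It assumes smartmontools is installed, that the drive reports SMART attribute 241 (Total_LBAs_Written), and that one LBA is 512 bytes; the device path is a placeholder and the attribute name and unit vary by vendor, so treat the result as a rough indication only.

--- Code: ---
#!/usr/bin/env python3
# Rough sketch: estimate total bytes written to a drive from SMART data.
# Requires smartmontools; typically needs to be run with root privileges.
import subprocess

DEVICE = "/dev/sda"   # placeholder device node, adjust to your system
LBA_SIZE = 512        # many drives count 512-byte LBAs; some vendors differ

def total_bytes_written(device: str) -> int:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # Attribute 241 is Total_LBAs_Written on many (not all) SSDs
        if "Total_LBAs_Written" in line:
            return int(line.split()[-1]) * LBA_SIZE
    raise RuntimeError("Drive does not report Total_LBAs_Written")

if __name__ == "__main__":
    written = total_bytes_written(DEVICE)
    print(f"{written / 1e12:.2f} TB written to {DEVICE} so far")
--- End code ---

Comparing two readings taken a day apart gives the same sort of "GB written per day" figure quoted earlier in the thread.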

Halcyon:

--- Quote from: nctnico on November 09, 2019, 12:07:29 am ---In my experience a hard drive is very reliable for as long as you keep it cooled properly (which isn't a given in most standard cases) while it is powered. Getting 10+ years 24/7 out of a hard drive is not uncommon for me.

--- End quote ---

10 years is approaching the maximum "comfortable" limit for mechanical hard disks, particularly in a corporate environment. In my experience, most drives are quite happy operating at elevated temperatures, and that doesn't really impact their overall lifespan. What you want to avoid with hard disks are wide temperature variations and constant power up/down cycles; those will cause failure far sooner. Of course, that being said, this is my "corporate systems admin" side speaking. In a home environment, yes, absolutely, drives will last much longer, but reliability may vary. I have hard disks sitting on my shelf that are over 30 years old and still work just fine, but would I rely on them for important tasks? Absolutely not.

There are a bunch of papers written by companies like Google and various data centres around the world which talk about hard disk failure trends. It's quite interesting reading.

Mr. Scram:

--- Quote from: Halcyon on November 08, 2019, 10:47:51 pm ---
It's not exactly "anecdotal" evidence. SSDs are more resilient than spinning hard disks provided you use them "normally" and the disk is working correctly. What the OP described is not normal and indicates a fault with the disk. I even use normal consumer SSDs in servers without any issues. Wear leveling doesn't just continually shuffle data around (that would defeat the entire purpose of it); it only applies when data is written to the disk, to ensure each cell gets used evenly, and makes use of the spare area when a cell exceeds its rated write/erase cycles. Theoretically, if you write data to a cell but don't erase/rewrite it, it would last forever (provided you keep the disk periodically powered up). Reading data from SSDs doesn't contribute to wear; writing does, but even then, a cell would need to be written to thousands and thousands of times.

If you used an SSD in a very write-intensive system, yes, I would expect it to wear out. But if you're using it in a system that does more reading than writing, it should last many, many years (if not more than your lifetime). You can even use SSD drives in NAS/SAN devices for this reason.

I'm currently using 8+ year old Hitachi 2TB spinning disks in my NAS and they are approaching the end of their life. While one has developed a bad sector, my main concern is that one day I'll power down the machine (for maintenance etc...) and one or more of the disks simply won't spin up anymore.

--- End quote ---
At least some drives do shuffle data around. Samsung 840 drives had an issue with data retention, and the fix was a firmware update that shuffles data around. More drives likely use the same trick. Write amplification also leads to effective data shuffling.
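
To put the figures quoted in this thread into perspective, a rough back-of-envelope endurance estimate is below. The 150 TBW rating is an assumed value for a typical 250 GB consumer SSD (the real figure depends on the exact model and its datasheet); the 20 GB/day comes from the SMART data quoted earlier.

--- Code: ---
# Rough endurance estimate, not a prediction for any specific drive.
TBW_RATING_TB = 150     # assumed rating for a typical 250 GB consumer SSD
DAILY_WRITES_GB = 20    # write rate from the SMART data quoted in this thread

days = TBW_RATING_TB * 1000 / DAILY_WRITES_GB
print(f"~{days:.0f} days (~{days / 365:.1f} years) to exhaust the rated writes")
# ~7500 days (~20.5 years): a drive going read-only after six months at this
# write rate points at a fault rather than normal wear from Timeshift alone.
--- End code ---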
