Researching practical HDD reliability/solutions...
rdl:
--- Quote from: texaspyro on February 26, 2019, 12:17:05 am ---The issue of spinning down drives to increase longevity is full of controversy. Spinning up/down a drive tends to be a major point of stress and failure for drives. I try to leave my spinning rust spinning all the time.
I am familiar with a video editing / archive system where one could configure the idle drives to spin down or to keep the drives spinning at all times. One video farm had around half the (1000+) drives configured each way. The only drive failures were in the ones configured to spin down.
--- End quote ---
It depends. If the spin down time is too short and the drives are used frequently, then it probably could lead to early failure. If the drives are not used that much and set to spin down in 30 minutes or more then they could last for years. Many of my drives are used primarily as archives and it's entirely possible that some may spin up only once a week, others only a few times a day. The ones that see the most use are set to spin down in 1 hour and that may happen only overnight. I can't say that I've noticed any difference in longevity, but I have only a few drives.
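For reference, on Linux the idle spin-down timeout is usually set with `hdparm -S`, whose value encoding is non-obvious (it is not simply minutes). A minimal helper, my own sketch rather than anything shipped with hdparm, that converts a timeout in minutes to the `-S` value:

```python
def hdparm_standby_value(minutes: float) -> int:
    """Convert an idle timeout in minutes to the value expected by
    `hdparm -S` (the ATA standby timer encoding).

    0        -> timer disabled
    1..240   -> value * 5 seconds      (up to 20 minutes)
    241..251 -> (value - 240) * 30 min (30 minutes up to 5.5 hours)
    """
    if minutes == 0:
        return 0
    seconds = minutes * 60
    if seconds <= 240 * 5:  # representable in 5-second units
        return int(round(seconds / 5))
    if minutes % 30 == 0 and minutes <= 330:  # whole 30-minute units
        return 240 + int(minutes // 30)
    raise ValueError("timeout not representable in the -S encoding")

# The two timeouts discussed above:
print(hdparm_standby_value(30))   # -> 241, i.e. `hdparm -S 241 /dev/sdX`
print(hdparm_standby_value(60))   # -> 242
```

Note the encoding has a gap: anything between 20 and 30 minutes that isn't a whole number of 30-minute units can't be expressed, which is why the helper raises rather than silently rounding.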
MyHeadHz:
(TL;DR: It worked for recovering data off a failed drive, and (IMO) will likely greatly increase HDD longevity in general.)
No L-brackets were practical for this application (for a few reasons), and the few basic drive expansion trays I bought to try didn't work out either. Ultimately, I took the modular HDD bracket out of an old computer case. This particular bracket allowed me to drill holes and mount it to the brick, AND (crucially) lets me mount/unmount drives in the bay while the bracket is still attached to the brick. The brick is a standard landscaping brick and weighs just over 20 lb (~9 kg) by itself. The research papers suggest heavier masses, but those are a lot less practical. I guessed that 20 lb would be far enough into the diminishing-returns part of the curve to be fine, especially considering that the heaviest standalone commercial enclosure I saw (a 5-bay NAS) was only about 7 lb total with drives installed.
FYI, I added the old IDE drive in the top slot purely as damping mass; this should reduce the amplitude of vibrations from the top bracket.
Though my goal is to increase drive longevity, data recovery seems like a great way to test the setup. One of my recently-failed external drives, with a rubber-mounted HDD, had irreplaceable data on it. That drive would sometimes mount for a few seconds, but would quickly fail and disappear, independent of factors such as the OS. The research papers claim that vibration eventually leads to off-track read/write errors through wear and tear (the drive loses the ability to properly position the r/w head). With that in mind, I hypothesized that rigidly coupling that failing drive to a large mass would allow it to function. A failure wouldn't necessarily prove anything, though.
I mounted that drive as shown above and it has performed flawlessly through about 100GB transferred. All the irreplaceable data was recoverable (yay!). I will continue to transfer the rest of the drive contents to see how it holds up, which will take about 2-3 days. If that works well, I will try to find some "torture tests" to run on the drive to see how well it handles them. If anyone has any suggestions, please let me know.
Adding the mass seems to help a lot, and would likely add a great deal (a logarithmic increase?) to the lifespan of any HDD. It could also be a cheap and accessible tool for data recovery from failed drives (for the most common failure mode), instead of riskier and more involved head/platter swaps.
It is also cheap- only a few dollars. I used a standard drill (not a hammer drill) with a 5/32x4-1/2 carbide masonry bit and 3/16 x 1/4 masonry screws (both Tapcon "red").
I have a few more failed and failing drives, including SMART failures and drives that just won't mount. With one SMART-failed drive (which was in a dual-bay enclosure), I plan to mount it to the brick, then do a full scan of the drive surface. I suspect that the drive will not add any more remapped sectors, and that many of the previously remapped sectors will be returned to normal status. That would strongly support the idea that vibration from the dual-bay enclosure was the cause of the SMART errors, and that the drive itself is actually fine. If anyone is interested, I can post those results once I test them.
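For a before/after comparison like that, the interesting counters in `smartctl -A` output are the sector-health attributes. A small sketch that pulls their raw values; the parsing assumes smartmontools' usual attribute-table layout, and the sample text is illustrative, not from any of these drives:

```python
import re

# Attributes worth watching before and after the brick mount.
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable", "UDMA_CRC_Error_Count"}

def parse_smart_attributes(text: str) -> dict:
    """Extract raw values of watched attributes from `smartctl -A` output."""
    attrs = {}
    for line in text.splitlines():
        # id, name, flag, value, worst, thresh, type, updated, when-failed, raw
        m = re.match(r"\s*(\d+)\s+(\S+)\s+0x[0-9a-fA-F]+\s+\d+\s+\d+\s+\d+"
                     r"\s+\S+\s+\S+\s+\S+\s+(\d+)", line)
        if m and m.group(2) in WATCHED:
            attrs[m.group(2)] = int(m.group(3))
    return attrs

sample = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
"""
print(parse_smart_attributes(sample))
# -> {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 8}
```

One caveat on "returned to normal status": Current_Pending_Sector can genuinely drop back to zero when suspect sectors later read cleanly, but Reallocated_Sector_Ct never decreases, so any recovery would show up in the pending/uncorrectable counts rather than the reallocation count.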
PS: I am aware of the tape mod, but decided not to do it to reduce handling the drive.
magic:
Harsh and nasty concrete block on a wooden shelf... :scared:
Anyway, that's a great experiment and a damn impressive result.
After you are done with recovery, it would be nice to try it again in the same setup with and without the brick a few times to see if it really is the brick causing it. Not some random fluke, or the particular USB adapter, PSU, whatever.
And by the way, you could reduce the time to half a day by connecting it to a native SATA port. On my machine, I have a connected SATA cable and SATA power cable hidden behind one of the 5.25" bay panels, I just remove the panel and hotplug a disk when I need it.
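For a rough feel of the difference, bulk-transfer time scales directly with sustained throughput. The rates below are ballpark assumptions for an older 3.5" drive, not measurements:

```python
def transfer_hours(size_gb: float, mb_per_s: float) -> float:
    """Rough bulk-transfer time in hours: size in GB over sustained MB/s."""
    return size_gb * 1000 / mb_per_s / 3600

# Assumed sustained rates: ~35 MB/s through a USB 2.0 bridge,
# ~120 MB/s on a native SATA port (drive-limited, not bus-limited).
for bus, rate in [("USB 2.0", 35), ("SATA", 120)]:
    print(f"{bus}: {transfer_hours(2000, rate):.1f} h for a 2 TB drive")
```

With those numbers a 2 TB copy drops from roughly sixteen hours to under five, which is consistent with the "2-3 days versus half a day" estimates in the thread once scanning overhead and retries are factored in.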
coromonadalix:
I have 12 drives in my home server. I use the SMART function with software that reads the disks' state each week; when a drive is near 75% of its life expectancy, I simply swap it for a new one.
I don't even use RAID. My only real problems were some firmware-related drive failures: the Western Digital timeout issue in the past, corrected with a DOS utility, and the infamous Seagate ones too. Luckily they were backed up just in time, since I saw read/write error rates spiking. Without the firmware update you could not read back the drive contents, so I updated the firmware and ran some tests; once reformatted/repartitioned outside the server, they became reliable again.
For now I use Western Digital Green drives; I need capacity, not access speed. Room temp is always at 22-23 degrees.
I don't use any drive damping stuff; I just have semi-hard rubber feet on my case (an Antec Twelve Hundred, and it's heavyyyyy, around 75-90 pounds). I use hot-swap cages for all the drives; each cage came with a fan for its 4 drives (true dual ball bearing). The fans are cleaned each month, and when one fails (RPM will decrease or stall) I have RPM monitoring, so I will hear a loud beep warning.
For the case footing damping, guess what... loll
Hockey pucks
Oh, I did find out with an older server that powering it up and down very often was more damaging than leaving it always on...
wraper:
--- Quote from: coromonadalix on March 10, 2019, 03:18:39 pm ---when a drive is near 75% of of its life expectancy, i simply swap them for new ones.
--- End quote ---
What's that? There is no such SMART parameter for HDDs. Either a drive works fine, or reallocated sectors start to occur, which means the drive is no longer reliable. I guess it's some sort of voodoo figure some stupid app shows for noobs.
--- Quote ---they became reliable again outside the server once reformatted / repartitioned.
--- End quote ---
:palm: