In my experience the biggest enemy of a hard drive is heat.
In my experience it's a dodgy power supply in one way or another.
In my experience it's cheapness. Server hard drives last much longer. And heat is also an enemy.
I don't know if heat was a factor here. All three drives were in a small mini-ATX case, but spaced well apart (about 5 cm between each drive) and there was decent airflow.
The last available SMART data that I have for the drive just before it failed shows the max temperature at 45C. No drive was above 45C in the max temp column, and I don't think 45C is too hot for a drive. Maybe it was?
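(If you want to check your own drives: the current temperature, and on many drives a lifetime min/max, shows up in the SMART attribute table. With smartmontools on Linux, something like this prints it; /dev/sda is a placeholder for your device:)

    # Print the SMART attribute table; attribute 194 (Temperature_Celsius)
    # often carries the lifetime min/max in its raw value
    smartctl -A /dev/sda | grep -i temperature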
I was running daily SMART short offline tests and weekly long tests via cron. The SMART logs showed these tests always passed, except that the drive that failed first started showing short-test failures at 90% complete, and then died within 2 days of that.

The second drive to fail never had any errors in the SMART log, except that its reallocated sector count kept increasing over the years. That always seemed normal to me: it was a high-density 2TB drive, so I expected some reallocated sectors. At the time of the second drive's failure there were 382 pending reallocated sectors (which doesn't seem too high). It just failed outright while running in the degraded RAID, and it never failed any SMART offline test, long or short.
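For anyone wanting to replicate this, the cron setup was roughly equivalent to the following sketch (the device name, times, and cron.d path are placeholders, not my exact config):

    # /etc/cron.d/smart-tests -- sketch; adjust device names and schedule
    # Daily short offline self-test at 2am
    0 2 * * * root /usr/sbin/smartctl -t short /dev/sda
    # Weekly long offline self-test early Saturday morning
    0 3 * * 6 root /usr/sbin/smartctl -t long /dev/sda
    # Results land in the drive's self-test log, readable with:
    #   smartctl -l selftest /dev/sda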
Finally, since this was ZFS, I was also running a weekly data scrub (for people not familiar with ZFS: it reads back all the data on every member of the array, verifies it against the checksums, and repairs and logs any errors it finds). That weekly scrub log NEVER showed any errors being repaired.
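For completeness, the scrub is just one cron line (the pool name 'tank' here is a placeholder):

    # Weekly ZFS scrub, Sunday 4am; 'tank' is a placeholder pool name
    0 4 * * 0 root /sbin/zpool scrub tank
    # 'zpool status -v tank' afterwards shows the scan results and any
    # checksum errors that were found or repaired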
Overall, I'm pretty disappointed in this experience with RAID, but it's not the end of the world for me. It's more a personal feeling of loss than anything else. And next time I'll rebuild it with a mirror set and a clear backup strategy using hot-swap drives.
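If it's useful to anyone, the mirror rebuild amounts to something like this (pool name and disk IDs are placeholders; by-id paths are the usual advice so the pool survives device renames):

    # Two-way ZFS mirror; pool name and disk IDs are placeholders
    zpool create tank2 mirror \
        /dev/disk/by-id/ata-EXAMPLE_DISK_A \
        /dev/disk/by-id/ata-EXAMPLE_DISK_B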