I had the infamous ST3000DM001 die after a couple of years. Luckily, I had already replaced it with a 3x4TB WD Red ZFS array. It was only used as a scratchpad, an unpacking disk, and for the cable TV recordings I was able to make at the time, so nothing critical was lost.
However, if you dive into the specs of HDDs, you'll see that the WD Red line-up isn't without issues either. There was the recent scandal of them silently swapping CMR for SMR technology, which is the last thing you need when a RAID array has to rebuild after a disk eventually fails. Beyond that, they are still releasing higher-capacity drives (up to 18 or 20TB now, I think) with the same workload rating as the smaller drives. I believe the yearly workload rating is 180TB/yr, and it counts both reads and writes. That means you can only fully write and then read back an 18TB drive 5 times per year (each cycle is 36TB of workload). So if you do a minimal 1x write-then-readback test before RAID deployment, that's already 1 of those 5 cycles spent. ZFS also scrubs the data monthly, and *only* that (12 full reads, 216TB/yr) would already exceed the workload rating.
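The arithmetic above can be sanity-checked with a quick sketch (capacity and rating taken from the numbers mentioned here; it assumes each scrub reads the full drive):

```python
# Back-of-the-envelope check: does monthly ZFS scrubbing alone
# exceed an 18TB drive's 180TB/yr workload rating?
CAPACITY_TB = 18
WORKLOAD_RATING_TB_PER_YR = 180

# One full-capacity read per month.
scrub_workload_per_yr = 12 * CAPACITY_TB

# A write-then-readback cycle touches the head twice: one full
# write pass plus one full read pass.
cycle_workload = 2 * CAPACITY_TB

print(f"Scrubs alone: {scrub_workload_per_yr} TB/yr "
      f"vs rating {WORKLOAD_RATING_TB_PER_YR} TB/yr")
print(f"Write+readback cycles within budget: "
      f"{WORKLOAD_RATING_TB_PER_YR // cycle_workload}")
```

Which shows scrubbing alone at 216TB/yr, over the 180TB/yr rating, and only 5 full write+readback cycles fitting in the yearly budget.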
At that point you're really better off with an enterprise drive, or perhaps even solid-state storage; even QLC has better endurance, but it's still a lot more expensive per GB...
Anyhow, no disk has an infinite lifetime. I still have a Samsung F1 750GB disk that *works*, but its speeds are dropping every year. I'm still amazed it works after 13 years of daily use. I took it out of my machine last year, as I was moving my machine into a smaller chassis that only had two 3.5" drive bays.