Long story short: pathologically bad disks sometimes happen.
When I was a Customer Engineer for HP, we had a large K-class server (think two and a half racks of server plus storage) running Oracle that occasionally reported memory corruption and, when heavily loaded, would core-dump the production databases, causing a nationwide outage.
The server's memory ECC logs were clean - these boxes scrub memory when idle, and no memory corruption was ever seen. Vendor support cases were raised, outages arranged, diagnostics run, and cases escalated to the highest levels in both HP and Oracle, but no cause was found.
After a few months of ongoing problems it was replaced with a V-class (a server weighing about 220 kg, with a multi-million-dollar price tag and a very deep discount), and the old K-class was repurposed as a test/dev box.
After the K-class was reinstalled, the customer started getting file system errors - not memory errors. I wrote data to each of the disks and checksummed it, and found that a single enterprise-class SCSI disk was writing OK but would silently corrupt the data on read.
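The verification step was essentially write-then-read-back checksumming. A minimal sketch of the idea is below - the pattern, block sizes, and target are my own illustration (a temp file stands in for the raw disk device, which on HP-UX would have been something like a `/dev/rdsk/...` path), not the exact tooling used at the time:

```python
import hashlib
import os
import tempfile

CHUNK = 1024 * 1024  # 1 MiB blocks


def write_pattern(path, total_bytes, seed=0):
    """Write a reproducible pseudo-random pattern; return its SHA-256."""
    h = hashlib.sha256()
    with open(path, "wb") as f:
        remaining, block = total_bytes, 0
        while remaining > 0:
            # Derive each block from the seed so the run is repeatable.
            data = hashlib.sha256(f"{seed}:{block}".encode()).digest() * (CHUNK // 32)
            data = data[: min(CHUNK, remaining)]
            f.write(data)
            h.update(data)
            remaining -= len(data)
            block += 1
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hits the disk
    return h.hexdigest()


def checksum_readback(path, total_bytes):
    """Read the data back and return a SHA-256 of what the disk gives us."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        remaining = total_bytes
        while remaining > 0:
            data = f.read(min(CHUNK, remaining))
            if not data:
                break
            h.update(data)
            remaining -= len(data)
    return h.hexdigest()


# A temp file stands in for the raw device (hypothetical target).
target = tempfile.NamedTemporaryFile(delete=False).name
written = write_pattern(target, 4 * CHUNK)
read_back = checksum_readback(target, 4 * CHUNK)
print("OK" if written == read_back else "CORRUPT")
os.unlink(target)
```

On a healthy disk the two checksums match every time; on the faulty disk the read-back checksum differed intermittently, even though the writes had reported success.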
Looking back through our records, I found that that disk had been configured as a swap device when the box was in production. So under heavy load the system would page some of the database's data out to that disk, and a while later a corrupted copy would be paged back in, crashing the database...
We sent the disk away for failure analysis, and the result came back that part of the data path in the SCSI interface had no parity, CRC, or ECC protection, and was flaky.