bah, it doesn't make sense to hash the entire disk image: if it matches, fine, but if it doesn't, you have no idea *what* got corrupted!
It makes more sense to write software that hashes every single file, and then verifies each file by mounting the image in loopback.
That's how the software I developed for my NAS works; I wrote it because I can't afford filesystems { ZFS, Btrfs, ... } that do the same thing at a low level.
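The core idea is roughly this (a minimal Python sketch, not my actual tool; the paths, manifest format and script name are just placeholders):

```python
#!/usr/bin/env python3
"""Sketch: build a per-file SHA-256 manifest, then verify every file against it later."""
import hashlib
import os
import sys

CHUNK = 1 << 20  # read files in 1 MiB chunks so big files don't blow up RAM

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root, manifest):
    # walk the tree and record "digest  relative/path" for every file
    with open(manifest, "w") as out:
        for dirpath, _, names in os.walk(root):
            for name in names:
                p = os.path.join(dirpath, name)
                out.write(f"{sha256_of(p)}  {os.path.relpath(p, root)}\n")

def verify_manifest(root, manifest):
    # re-hash every file listed in the manifest and report mismatches
    bad = 0
    with open(manifest) as f:
        for line in f:
            digest, rel = line.rstrip("\n").split("  ", 1)
            p = os.path.join(root, rel)
            if not os.path.isfile(p) or sha256_of(p) != digest:
                print(f"CORRUPT or MISSING: {rel}")
                bad += 1
    return bad

if __name__ == "__main__":
    # usage: checksums.py build|verify <root> <manifest>
    mode, root, manifest = sys.argv[1], sys.argv[2], sys.argv[3]
    if mode == "build":
        build_manifest(root, manifest)
    else:
        sys.exit(1 if verify_manifest(root, manifest) else 0)
```

To check a disk image, I just mount it read-only in loopback (e.g. `mount -o loop,ro image.img /mnt/img`) and run the verify step against the mountpoint.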
I implemented it to guard against the risk of "bit flips": when it comes to HDDs, beyond physical damage to { platters, flying read/write heads, motor, PCB, connector, air filter, ... }, there's another threat to the files stored on hard disks: small, silent bit-flip errors, often called data corruption or "bit rot".
"Bit rot errors" occur when individual bits in a stream of data in files change from one state to another (positive or negative, 0 to 1, and vice versa).
I talked about HDDs, but these errors can also happen to flash storage (SSDs) at rest, or be introduced while a file is copied from one drive to another.