Trying to sort out an issue with an NVMe drive. The problem is that it's slow. Really slow, sometimes under 1 MB/s during backups. It's read speed that suffers, and it's difficult to pin down: a CrystalDiskMark test shows nothing untoward, but start a backup and after a short burst at full speed it slows to a crawl. Eventually I replaced the SSD with a WDC job, and things are back to being fast again.
Except... I ran the HD Tune benchmark on both the current WDC and the previous Silicon Power, and noticed that read speed is slower wherever there is data. On the SP, which was the problem drive, it is really pronounced, so I assumed there was some fault in the drive. But now I notice the WDC shows the same symptom, although in use it isn't apparent. The attached benchmark of the WDC illustrates the issue.
The HD Tune benchmark is really aimed at spinning disks, so it starts at sector 0 and reads sequentially to the last sector. It ignores any filesystem and hits the disk directly, whether there is actual data there or not. The graph shows the start of the drive on the left and the end on the right.
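For anyone unfamiliar with it, this is roughly what that kind of test is doing under the hood: a read-only sequential sweep of the raw block device, timed in chunks. The sketch below is only an illustration of the idea, not HD Tune's actual code; the device path, chunk size and report interval are my own assumptions, it needs admin/root rights, and it uses ordinary reads rather than the unbuffered I/O a real benchmark would use, so absolute numbers will differ even if the shape of the curve is the same.

```python
import time

DEVICE = r"\\.\PhysicalDrive1"   # assumed path; e.g. /dev/nvme0n1 on Linux
CHUNK = 4 * 1024 * 1024          # read size per call (4 MiB, sector aligned)
REPORT = 256 * 1024 * 1024       # print a speed sample every 256 MiB

def sequential_pass(device=DEVICE):
    """Read-only sequential sweep of the whole device, HD Tune style."""
    with open(device, "rb", buffering=0) as dev:
        total = since_report = 0
        t0 = time.perf_counter()
        while True:
            block = dev.read(CHUNK)
            if not block:
                break                      # reached the end of the device
            total += len(block)
            since_report += len(block)
            if since_report >= REPORT:
                dt = time.perf_counter() - t0
                print(f"{total / 1e9:8.1f} GB   {since_report / dt / 1e6:7.1f} MB/s")
                since_report, t0 = 0, time.perf_counter()

if __name__ == "__main__":
    sequential_pass()
```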
This drive has 5 partitions with a little free space for expansion between them. The dips correspond to the partitions, and the peaks between them to the free space. Only the first 500 GB is in use (the rest is left for expansion/wear). It seems pretty clear to me that where there is actual data the read speed is slower than where there isn't.
I tried to force CDM to benchmark the trough, but it writes the data it's going to read first, so of course it ends up testing a relatively unused area. It also only reports an average, which can hide the variation we can see here.
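If it helps anyone reproduce this, a read-only spot check of a dip versus the trough avoids the write-then-read behaviour and shows the spread rather than a single average. The offsets below are placeholders for wherever the dips and trough sit on a particular drive (read them off the HD Tune graph), and the device path is again an assumption:

```python
import time

DEVICE = r"\\.\PhysicalDrive1"   # assumed path, as above
CHUNK = 4 * 1024 * 1024          # per-read size
SPAN = 2 * 1024**3               # sample 2 GiB from each region

def read_region(start_byte, device=DEVICE):
    """Time CHUNK-sized reads across SPAN bytes starting at start_byte."""
    speeds = []
    with open(device, "rb", buffering=0) as dev:
        dev.seek(start_byte)
        done = 0
        while done < SPAN:
            t0 = time.perf_counter()
            block = dev.read(CHUNK)
            if not block:
                break
            speeds.append(len(block) / (time.perf_counter() - t0) / 1e6)
            done += len(block)
    return speeds

# Placeholder offsets: substitute the dip/trough positions from the graph.
for label, offset in [("dip (data)", 100 * 1024**3),
                      ("trough (free space)", 550 * 1024**3)]:
    s = read_region(offset)
    print(f"{label:22s} min {min(s):6.0f}  avg {sum(s)/len(s):6.0f}  max {max(s):6.0f} MB/s")
```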
I find it strange that this effect should exist. Am I missing something that might explain it, or is it a real thing? I note that SATA SSDs don't show this, but their read speed is substantially below the chip capability, so they may be hiding the same thing.