There is a mistake in the dd invocation, one that may destroy your data. This is one of the two primary reasons I so strongly steer people away from using the Data Destroyer until it is actually needed: the notorious misunderstanding of the tool's operation and its arguments.
dd is a prime example of magical thinking among *nix folk; it avoids causing catastrophes only thanks to lucky coincidences.
The conv=sync in the above invocation randomly adds runs of zero bytes to the data being copied.(1)
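A minimal demonstration of those injected zeros, assuming GNU dd reading from a pipe (where short reads are routine; the sleep just makes the short read near-certain):

```shell
# Two 4-byte writes arrive through the pipe separately; conv=sync pads
# each short read up to the full 8-byte block, inserting zeros mid-stream.
{ printf 'aaaa'; sleep 1; printf 'bbbb'; } | dd bs=8 conv=sync of=padded.bin 2>/dev/null
od -c padded.bin   # zero bytes appear between "aaaa" and "bbbb"
```

The output is 16 bytes instead of 8, with runs of NUL bytes that were never in the input.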
You might have meant the sync oflag (not conv), but then bs is wrong: it is specified in megabytes instead of mebibytes. While that will not prevent data from being written, metric units are not aligned to sector sizes (or to erase-block sizes on flash media), which leads to increased wear and may cause lower performance.(2)
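For reference, GNU dd treats the two suffixes differently, and only the binary one is sector-aligned:

```shell
# GNU dd: the "MB" suffix is metric (10^6 bytes), "M" is binary (2^20 bytes).
dd if=/dev/zero of=metric.bin bs=1MB count=1 2>/dev/null   # 1,000,000 bytes
dd if=/dev/zero of=binary.bin bs=1M  count=1 2>/dev/null   # 1,048,576 bytes
# 1,000,000 is not a multiple of the common 512- or 4096-byte sector sizes;
# 1,048,576 is a multiple of both.
wc -c metric.bin binary.bin
```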
But to make bs respected in a way suitable for oflag=sync, one also has to use GNU dd and its non-standard fullblock iflag. With all these options set correctly, larger block sizes are still preferable, so as not to disrupt the storage medium's internal caches and (on flash) to give the allocation algorithms some more air; they are not designed for unneeded, frequent flushes.
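A sketch of an invocation with all of the above applied. The file names are placeholders, and I am writing to a regular file here so the example is safe to run; for a real restore, of= would be the block device node:

```shell
# Stand-in "image": 3 MiB of random data. A real restore would use the
# actual image file and a device node (e.g. some /dev/sdX) as of=.
head -c 3145728 /dev/urandom > disk.img
dd if=disk.img of=target.bin bs=1M iflag=fullblock oflag=sync 2>/dev/null
cmp disk.img target.bin && echo 'byte-identical copy'
```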
I’m not an expert on storage technology. Perhaps DiTBho could provide more insight into this side of the issue. I also expect I have missed half of the problems dd may cause in this scenario.
(1) Technically, this flag pads incomplete input blocks with zeros before passing them to the write syscall. But without a correct understanding of how the kernel handles the specific devices, and without carefully chosen arguments, the effect becomes random. And it never makes sense when restoring images.
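For illustration, the padding is easy to see with a file whose size is not a multiple of bs (GNU dd assumed):

```shell
printf 'hello' > in.txt                          # 5 bytes
dd if=in.txt of=out.bin bs=4 conv=sync 2>/dev/null
wc -c < out.bin      # 8: the trailing 1-byte block was padded out to bs=4
```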
(2) “May”, because it is the kernel, not dd, that does the actual I/O. The kernel’s I/O strategy is likely to provide enough of a cushion to make disruptions from dd negligible.