
SSD slow WriteFile() with "FAST IO DISALLOWED" (Win11)


radiolistener:
Did you try running the blkdiscard command on that SSD before testing?

The slowdown may be caused by many unerased sectors on the SSD; they force the controller to perform an erase operation during each write, which takes extra time. If you run blkdiscard before testing, it will discard all unused sectors, so writes will execute faster.

I'm not familiar with the details of how it's implemented, but blkdiscard improves write performance in my case.

5U4GB:
A few comments. First, check everything @Haenk said; that was also my first reaction on seeing your post. In particular, KingSpec drives are cheap and nasty, so the problem could just be the drive itself; see if you get the same behaviour with something like an EVO 8x0 series. Finally, how much have you written to the drives? The temporary return of performance after a reinstall, i.e. a complete redo of storage, followed by another slowdown, may just mean the flash wear limit has been reached.

mariush:
I would suspect that the constant writing (trickle writing, small writes) prevents the firmware from entering garbage-collection mode and erasing/recovering blocks of flash memory marked for erase.

Flash memory is arranged in blocks (e.g. 24/32/64 MB or larger, which are in turn divided into pages of 512 bytes, 4096 bytes or other sizes). The controller can write to an empty page, but it can't overwrite that page; to make the page writable again, the whole block (64 MB or whatever size it is) has to be erased, and that erase process wears out the flash memory.

So each time something has to be overwritten, the controller just finds an empty page somewhere else on the drive, writes the old page contents with the changed bytes to that new page, marks the old page for erase, and updates a lookup table that says the contents previously at block x, page y are now at block m, page n... (Off topic: that lookup table is usually cached in RAM at startup, which is what makes DRAM-based SSDs a bit faster; the DRAM isn't used to cache writes.)
 
When a block reaches a threshold of pages marked for erase, for example 90% of the pages in a 64 MB block, the controller copies the remaining 10% of live pages elsewhere, then erases the block and makes it available again.

If the controller never detects any idle time, it's possible it never erases blocks, so over time it gets harder and harder for it to find empty pages to put content into, and it may not have enough empty blocks left to convert from TLC to pseudo-SLC for faster write speeds.
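
If it helps to picture it, here is a tiny toy model of that remapping and garbage collection (Python, heavily simplified - real controllers track far more state, and all the numbers are made up):

Code:
import random

PAGES_PER_BLOCK = 16      # toy number; real blocks hold far more pages
NUM_BLOCKS = 8
GC_THRESHOLD = 0.9        # erase a block once 90% of its pages are stale

EMPTY, LIVE, STALE = "empty", "live", "stale"
blocks = [[EMPTY] * PAGES_PER_BLOCK for _ in range(NUM_BLOCKS)]
mapping = {}              # lookup table: logical page -> (block, page)

def find_empty_page():
    for b, block in enumerate(blocks):
        for p, state in enumerate(block):
            if state == EMPTY:
                return b, p
    raise RuntimeError("no empty pages left - this is where a real drive stalls")

def write(logical_page):
    # Overwrite = write a fresh copy elsewhere, mark the old copy stale, update the map.
    if logical_page in mapping:
        old_b, old_p = mapping[logical_page]
        blocks[old_b][old_p] = STALE
    b, p = find_empty_page()
    blocks[b][p] = LIVE
    mapping[logical_page] = (b, p)

def garbage_collect():
    # When a block is mostly stale, relocate its few live pages and erase it whole.
    for b, block in enumerate(blocks):
        if block.count(STALE) / PAGES_PER_BLOCK >= GC_THRESHOLD:
            for p, state in enumerate(block):
                if state == LIVE:
                    logical = next(lp for lp, loc in mapping.items() if loc == (b, p))
                    nb, np_ = find_empty_page()
                    blocks[nb][np_] = LIVE
                    mapping[logical] = (nb, np_)
            blocks[b] = [EMPTY] * PAGES_PER_BLOCK   # the expensive erase

# 100 small overwrites spread over 20 logical pages: stale copies pile up...
for _ in range(100):
    write(random.randrange(20))
stale_before = sum(b.count(STALE) for b in blocks)
garbage_collect()         # ...and this is the step that needs idle time on a real drive
stale_after = sum(b.count(STALE) for b in blocks)
print(f"stale pages before/after GC: {stale_before} -> {stale_after}")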

One potential solution would be to run the TRIM command (the Windows defrag tool will do this on SSDs instead of defragmenting; TRIM will prompt the drive to do its cleanup).
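
For what it's worth, that retrim can also be kicked off from a script instead of the GUI; a minimal sketch, assuming drive E: and the same defrag flags Georgy quotes below, run from an elevated prompt:

Code:
import subprocess

# Ask Windows to retrim the SSD (same as the Optimize button).
# /L = retrim, /U = print progress, /V = verbose output.
subprocess.run(["defrag", "E:", "/L", "/U", "/V"], check=True)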

Most drives use pseudo-SLC to cache writes. What they do is take one of those 24/32/64 MB blocks of flash memory, where each cell normally holds 2 bits for MLC, 3 bits for TLC and 4 bits for QLC, and switch the block to SLC mode, storing just one bit in each cell - so, for example, 64 MB of QLC becomes 16 MB of fast SLC cache, 64 MB of TLC becomes around 20 MB of SLC cache, and so on.
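
The cache-size arithmetic is just block capacity divided by bits per cell; a quick sanity check with the toy 64 MB block size used above:

Code:
block_mb = 64
for name, bits_per_cell in [("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    slc_mb = block_mb / bits_per_cell      # pseudo-SLC keeps only 1 bit per cell
    print(f"{block_mb} MB of {name} -> {slc_mb:.1f} MB of pseudo-SLC cache")
# 64 MB of MLC -> 32.0 MB, TLC -> 21.3 MB, QLC -> 16.0 MB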

You write stuff, the controller puts it in the SLC cache, and afterwards, at idle time, it slowly moves the data over to the TLC areas.

Blocks in pseudo-SLC mode also wear out more slowly - maybe 10k erases. QLC is rated for 200-600 erases, TLC goes from around 1,000 to 4,000 erases, MLC goes up to around 10-15k erases, and SLC goes from 10k to maybe 30-50k erases; some small SLC chips can do even 100k erases.

-

OP, if your backups are a few GB or less, I would suggest seeing whether you can spin up a RAM disk before your backup. For example, on Windows I use the ImDisk Toolkit, which can create RAM drives with or without a physical backing (disk image). Without a disk image, I can set up a 10 GB RAM drive (because I have 16 GB of RAM) in seconds and quick-format it to NTFS; you could compress the data to the RAM drive, then simply copy the archive to the SSD in one burst.

You could also take advantage of this to sort your files by type, pack them into a TAR archive (7-Zip can create TAR archives), and then run a diff between the previous backup and the current one. I like xdelta (an open-source tool), but there are other binary diff tools.

Keep each day's archive for 30 days; after that you could keep only the first day of the week or month, and for the following days keep just the diffs (so if you want Friday's backup, you regenerate it from Monday's archive plus the Monday-to-Friday diff). 7z in store mode may also work, but the 7z format may shift some bytes around, which could make binary diffs less efficient. TAR is simpler, 512-byte blocks; if your files are sorted in the same order, a binary diff will easily store only the differences.
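
A rough sketch of that whole flow, assuming an ImDisk RAM drive mounted at R:, 7-Zip and xdelta3 on the PATH, and made-up folder names (untested, just to show the shape of it):

Code:
import datetime
import shutil
import subprocess

SOURCE = r"D:\data"                       # folder being backed up (placeholder)
DEST = r"E:\backups"                      # folder on the SSD (placeholder)
today = datetime.date.today().isoformat()
tar_on_ram = rf"R:\backup-{today}.tar"    # staging archive on the RAM drive

# 1) Pack the sorted files into an uncompressed TAR on the RAM drive
#    (7z: "a" = add to archive, "-ttar" = TAR format).
subprocess.run(["7z", "a", "-ttar", tar_on_ram, SOURCE], check=True)

# 2) First day of the week/month: copy the full archive to the SSD in one burst.
shutil.copy(tar_on_ram, rf"{DEST}\base-{today}.tar")

# 3) Other days: store only the binary diff against that base archive
#    (xdelta3: -e = encode, -s = source/base file).
base = rf"{DEST}\base-monday.tar"         # whichever full archive was kept (placeholder name)
delta = rf"{DEST}\diff-{today}.xdelta"
subprocess.run(["xdelta3", "-e", "-s", base, tar_on_ram, delta], check=True)

# To rebuild a later day's backup: xdelta3 -d -s base-monday.tar diff-friday.xdelta restored.tar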


Georgy.Moshkin:
An update...
Internal SSD #1: Kingspec nt-512 "2280 NGFF" (installed in notebook's motherboard slot)

External SSD #2: stmagic 512G (can't find the model on the enclosure) in a "cool fish" USB 3.0 JMicron-chip enclosure - works slower than #3 for backups
External SSD #3: CUSO C5S-EVO 480G in a "cool fish" USB 3.0 JMicron-chip enclosure - works pretty fast for backups

So, I was wrong about my assumption that it is not an SSD problem. I needed a backup, but this time speeds dropped to 14 MB/s when copying from SSD #1 to SSD #2. I connected the internal SSD to my desktop PC's motherboard (Win10) and made a backup at only around 20 MB/s, which is obviously an SSD problem.
I put the SSD back into the notebook and copied all the data (140 GB) to a temporary directory at something close to 20 MB/s. Yes, no external drives this time: I copied data from #1 to #1. One of the reasons was that I wanted to see how the TRIM results ("defrag E: /L /U /V") would change after I deleted a large chunk of data. I had performed a TRIM through "Defrag and Optimize" on #1 before copying and then rebooted. For some reason, copying from #1 to #1 was still faster than copying from #1 to #2. Then I deleted the original directory and renamed the temporary one, so everything looks as before. To my surprise, copying from #1 to #2 is now up to 40 MB/s and copying from #1 to #3 is 70 MB/s. I did a full 140 GB backup at 70 MB/s; it took around 30 minutes. And then I used a bootable flash drive to do a full backup (system + data) onto the slower SSD #2.

About disabling services, etc.: it helps, but it seems that Windows always spins up some new tasks even when the CPU load is high. I spent a lot of time with Process Monitor, Process Explorer and Resource Monitor. First it was Defender, then indexing, then some crazy registry activity related to DNS and TCP/IP. After I disabled everything one by one, services.exe started checking registry keys related to the disabled DNS-something service. I still see a lot of NTFS journaling and some LOG/LOG2 files being written in the registry directory. Currently I run Win 11 23H2, but the speed improvement after copying SSD #1's data onto itself (#1 to #1) convinced me that installing some old Windows 10 build will not help, and that it is most likely an SSD problem and not an OS problem (though I believe the OS file access routines could be tuned in some way to optimize for speed on such SSD drives).

I still think that my internal SSD is a good one. I need more proof, and I have an idea of creating a small application which pumps data from one SSD to another. The idea is to make a multi-threaded app which finds the optimal number of overlapping threads for fast transfers of big and small files. Something is not right here, so I am going to speculate. At first I thought there was some broken NTFS journaling, or that this LOG/LOG2 file-writing activity affects performance. But the latest experiment shows a 5-6x speed improvement when copying the same 140 GB chunk of data from the same SSD to an external SSD (on a 16 GB RAM notebook, it can't be cached). So the conclusion could be that the first 140 GB directory now occupies "bad" slow sectors, and the second copy of the 140 GB directory occupies "good" fast sectors. BUT, when I perform a sector-level backup using a bootable flash drive (AOMEI Backupper), backup speeds are constant, with no dips. Even that speed, though, was far from ideal after some time. What the hell causes this slowdown? I need more experimentation, e.g. use a non-Windows bootable sector backup app to check whether it is some NTFS journaling trickling in the background; not sure if it is possible to back up an unmounted drive. Maybe something is wrong with the SSD controller algorithms.
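
A rough sketch of the test app I have in mind (Python just to illustrate; the paths are placeholders, and the interesting part is which thread count wins on this particular notebook):

Code:
import shutil
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path(r"D:\testdata")    # directory on SSD #1 (placeholder)
DST = Path(r"E:\testcopy")    # directory on SSD #2 (placeholder)

def copy_one(src: Path) -> int:
    # Copy one file, preserving its relative path; return the bytes copied.
    dst = DST / src.relative_to(SRC)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)
    return src.stat().st_size

files = [p for p in SRC.rglob("*") if p.is_file()]

# Try different numbers of overlapping copy threads and report throughput.
# (OS write caching will flatter the numbers somewhat; a big test set helps.)
for workers in (1, 2, 4, 8):
    shutil.rmtree(DST, ignore_errors=True)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total_bytes = sum(pool.map(copy_one, files))
    elapsed = time.perf_counter() - start
    print(f"{workers} threads: {total_bytes / elapsed / 1e6:.1f} MB/s")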

To sum up:
1) Disabling services helps, but it seems that Windows activates some other background tasks more frequently when there is low CPU/disk load.
2) There is some problem with the SSD controller or with the NTFS/OS disk access routines. After I made a copy of the 140 GB directory on the same SSD, files from the new directory can be read at 70 MB/s compared to 14 MB/s from the old directory. But it doesn't affect sector-level backup speed using the bootable flash drive (WinPE AOMEI Backupper). Note that sector-level backup speeds are still lower than 70 MB/s.
3) It would be interesting to build a copy utility that tests whether files can be copied at >100 MB/s between two SSD drives (something like the sketch above). I am talking about this particular setup with the N5095-based notebook PC. I have a desktop PC with an NVMe drive which copies files very well at >500 MB/s and shows something close to 1000 MB/s in AS SSD Benchmark.
4) The 146 MB/s is gone?! I can't reproduce this speed by re-creating the partition and reinstalling the whole thing. I still wonder how the hell my 10+ year old Toshiba notebook PC with a less ancient Galaxy 120 GB SSD demonstrated something close to 80 MB/s when I performed a backup to an external HDD. I remember how the backup speed slowly dropped to 60 MB/s towards the end of the backup. I remember 15-20 minute backups and wondering why nobody I know does this, because it is so easy: just back up the whole SSD with all its data in the morning (Windows XP plus all the data on the disk).

radiolistener, no, I hadn't tried to force a TRIM before yesterday. But I noticed that "Defrag and Optimize" reported around 10 days since the last optimization (I assume that means TRIM; at least pressing the button displays a message that an SSD TRIM operation is in progress). In yesterday's experiment I used TRIM, then copied the data from one directory to a second directory, and read speeds from the new copy of the data are now much faster. I'm not sure whether that is related to running TRIM right before this operation. It turns out that I have some problems with read speeds too.

5U4GB, my internal SSD is around 1 year and 3 months old. Not much has been written to it and there is plenty of free space (225 GB out of 476 GB is currently free). At first I was skeptical about this wear-out idea; not so sure now. Maybe something is wrong in some area of the SSD, e.g. the NTFS tables occupy some "slow" sectors and the controller did not relocate them properly, or something like that.

mariush, I usually use 7-Zip with zero compression ("0 - Store"), because it gives the fastest backup speed compared to compressing. I used the modified 7-Zip-zstd build for a while, because it provided compression fast enough to actually improve copying speeds - fewer writes. 7-Zip-zstd uses Zstandard, a "fast real-time compression algorithm". The process you described sounds very interesting and also makes me worry that for some reason my SSD controller starts performing some erroneous maintenance operations; e.g. I often observe 100% disk load with read speeds dropping to 14 MB/s. Interestingly, changing the AHCI driver affects this value. It's either a difference in how the Intel and Microsoft drivers calculate it, or the AHCI drivers differ in some other way.
