I have a system which implements a USB filesystem on an Adesto SPI FLASH with a 512 byte block size: the AT25PE16 or AT45DB321.
These are 2MB and 4MB respectively.
FatFS is used to make the filesystem visible to the code running on the target, and the USB removable media side is implemented under interrupts using some ST library.
And this works fine.
Now I am looking at using the AT45DB641, which has a 1024 byte block size.
There seems to be a "wisdom", going back decades, that 512 byte sectors are mandatory for Windows compatibility (for a removable storage device such as this). Some background to this is e.g. here:
https://docs.microsoft.com/en-us/troubleshoot/windows-server/backup-and-storage/support-policy-4k-sector-hard-drives

However, many years ago when I was messing about with CP/M (!) I implemented a BIOS with two floppy drives which had 2048 byte sectors (this was before the 1.44/2.88MB diskettes came along), and one used what was called blocking/deblocking code, which translated between the physical disk sectors (2048 bytes) and the native CP/M sectors (always 128 bytes). Obviously there was a 2048 byte RAM buffer, with various rules for flushing it to the physical disk. The CP/M implementation manuals gave code (written by Gary Kildall in 8080 asm!) for blocking/deblocking. Hilariously, I can see this 1970s manual on the bookshelf behind me... In fact the only CP/M systems which did not need blocking/deblocking were ones with 8" floppies (which actually used 128 byte physical sectors). You can laugh at this historical throwback, but I doubt the situation has changed!
This isn't a FatFS issue, because FatFS merely makes the filesystem visible to the code running on the target. The USB interface to Windows is implemented under interrupts using some ST library, and it appears to map the blocks 1:1 between the FLASH chip and the sectors presented over USB.
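Presumably the modern equivalent of the CP/M blocking/deblocking trick would be a thin translation layer in the MSC read/write callbacks: present 512 byte sectors to the host, and do a read-modify-write against the 1024 byte flash pages. Here is a minimal sketch of that idea; the names (`msc_read_sector`, `flash_read_page`, etc.) are mine, not from any ST library, and the "flash" is simulated with a RAM array just so the sketch is self-contained:

```c
#include <stdint.h>
#include <string.h>

#define FLASH_PAGE_SIZE   1024u                             /* e.g. AT45DB641 */
#define USB_SECTOR_SIZE    512u                             /* what the host expects */
#define SECTORS_PER_PAGE  (FLASH_PAGE_SIZE / USB_SECTOR_SIZE)  /* = 2 */
#define NUM_PAGES           16u

/* Simulated flash; in a real system these two functions would be the
 * SPI page read/write routines. */
static uint8_t flash_sim[NUM_PAGES][FLASH_PAGE_SIZE];

static void flash_read_page(uint32_t page, uint8_t *buf)
{
    memcpy(buf, flash_sim[page], FLASH_PAGE_SIZE);
}

static void flash_write_page(uint32_t page, const uint8_t *buf)
{
    memcpy(flash_sim[page], buf, FLASH_PAGE_SIZE);
}

/* "Deblocking": fetch the page containing the requested 512 byte LBA
 * and copy out the relevant half. */
void msc_read_sector(uint32_t lba, uint8_t *dst)
{
    uint8_t page_buf[FLASH_PAGE_SIZE];

    flash_read_page(lba / SECTORS_PER_PAGE, page_buf);
    memcpy(dst,
           page_buf + (lba % SECTORS_PER_PAGE) * USB_SECTOR_SIZE,
           USB_SECTOR_SIZE);
}

/* "Blocking": read-modify-write, so the other 512 byte half of the
 * page survives the update. */
void msc_write_sector(uint32_t lba, const uint8_t *src)
{
    uint8_t  page_buf[FLASH_PAGE_SIZE];
    uint32_t page = lba / SECTORS_PER_PAGE;

    flash_read_page(page, page_buf);
    memcpy(page_buf + (lba % SECTORS_PER_PAGE) * USB_SECTOR_SIZE,
           src,
           USB_SECTOR_SIZE);
    flash_write_page(page, page_buf);
}
```

In practice one would also want a one-page write cache (flushed on a timeout or when the LBA moves to a different page), much like the old CP/M buffer-flushing rules, since FAT writes tend to hit the same page repeatedly.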
Does this stuff ring a bell with anyone?