The NTFS file system may not even reserve a cluster for a file if the amount of data is too small.
I don't remember the exact threshold — it depends on how much metadata the record holds, but it's on the order of a few hundred bytes (the MFT file record is 1 KB). Basically, if the file's data is small enough, there's a high chance it will be stored "resident" right in the file table record alongside the metadata (file name, dates modified/created/last accessed, security descriptors/permissions, owner, etc.), with no separate sectors allocated at all.
See
https://web.archive.org/web/20210506084930/https://docs.microsoft.com/en-us/archive/blogs/askcore/the-four-stages-of-ntfs-file-growth - the original link is still live, but the embedded pictures are gone, so I'm linking to a recent archived copy (May 6th, 2021) that still has the images.
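To see the logical-size vs. allocated-space distinction the blog post talks about, here's a small sketch. Note this is a portable illustration, not an NTFS-specific one: it uses `st_blocks` (POSIX; not meaningful on Windows) — on Windows you'd compare against `GetCompressedFileSize()` instead, which reports 0 allocated bytes for a fully resident file.

```python
import os
import tempfile

# Write a tiny file and compare its logical size with the space the
# filesystem actually allocated for its data. On NTFS, data this small
# can live resident in the MFT record, with zero allocated clusters.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 100)  # 100 bytes of data
    path = f.name

st = os.stat(path)
print("logical size:", st.st_size)            # 100
allocated = getattr(st, "st_blocks", None)    # in 512-byte units; POSIX only
if allocated is not None:
    print("allocated bytes:", allocated * 512)

os.remove(path)
```

Whether the allocated figure comes out as 0 depends entirely on the filesystem (ext4 without inline_data will still grab a block, for instance); the point is just that the two numbers are tracked separately.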
You can also use Windows API functions to preallocate disk space for a file, but I'm not sure it guarantees ONE contiguous run of sectors ... I think the file system just tries as best it can to find big empty blocks.
See SetFilePointerEx(), SetEndOfFile() and SetFileInformationByHandle() ... here are a couple of links:
How can I preallocate disk space for a file without it being reported as readable?
https://devblogs.microsoft.com/oldnewthing/20160714-00/?p=93875

First answer here:
https://stackoverflow.com/questions/7970333/how-do-you-pre-allocate-space-for-a-file-in-c-c-on-windows

FAT32 is probably simpler, but I'm not sure you're guaranteed a contiguous chunk there either... With NTFS you could probably reserve one some way, though I'm not aware of an API for it ... the only other way I can think of is to work like a defragmenting tool: get direct access to the drive and find the largest unused area to create your file in.
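For what it's worth, the basic preallocation trick from those links (move the file pointer past the end, then set end-of-file there) can be sketched in portable terms like this. As far as I know, CPython's truncate() on Windows ends up calling SetEndOfFile() under the hood, so this is the same operation the Win32 docs describe:

```python
import os

def preallocate(path, size):
    # Same idea as SetFilePointerEx() + SetEndOfFile(): set the
    # end-of-file marker at `size` without writing any data. Reads of
    # the unwritten region come back zero-filled.
    with open(path, "wb") as f:
        f.truncate(size)

preallocate("prealloc.bin", 1024 * 1024)
print(os.path.getsize("prealloc.bin"))  # 1048576
os.remove("prealloc.bin")
```

Caveat: on Linux filesystems this typically just creates a sparse file (no real allocation) — there you'd want os.posix_fallocate() for actual reserved blocks. And on Windows, skipping the zero-fill entirely requires SetFileValidData(), which needs the SE_MANAGE_VOLUME_NAME privilege — that's exactly what the Old New Thing post above is about.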
Yeah ... and mechanical drives CAN reallocate sectors if they become too unreliable. As for SSDs ... by design they're not contiguous at the physical level, because the controller shuffles data around (wear leveling) to extend the life of the drive.