For hard drives, we got used to 1024 because of CPU-cycle and memory constraints.

Basically, back then everything was built around powers of two, so that computations could be done with bit shifting.

Disk sectors were 512 bytes (2^9), and clusters were multiples of 512, for example 4096 bytes (2^12).

For an operating system like MS-DOS, in the days of the 8086 and less than 1 MB of memory, it was much easier, faster, and cheaper memory-wise to treat a KB as 1024 bytes, because file sizes could then be listed just by bit-shifting.

For example, let's say a file is 123456 bytes (0x1E240, or 0b0000 0001 1110 0010 0100 0000).

If you want to list the size in KB, you just shift right by 10 bits to divide by 1024 (2^10 = 1024), so you get:

0b0000 0001 1110 0010 0100 0000 >> 10 = 0b0000 0000 0111 1000 = 120, so the file lists as 120 KiB (it's really 120.56 KiB, so technically 121 KiB, but good enough for an 8086).

FAT16 and FAT32 also allocate disk space in whole clusters (multiples of the sector size), so bit shifting was again used to calculate space used on disk, remaining free space, and so on. Given the CPU-cycle and memory constraints, it was only natural to use multiples of 1024 instead of 1000.

Memory has always come in multiples of 8, because the memory bus on processors was 8 bits wide, then 16, then 32 with the 386, and now 64/128/256 bits with dual-channel DDR4 / quad-channel setups.

So memory chips kept to powers of two, even when advertised as 512 Mbit, 1 Gbit, etc.
