Incidentally, several other NVRAM technologies are also limited in density. FeRAM can't be smaller than a ferroelectric domain -- a feature size of ~100 nm, IIRC -- so its density is comparable to SRAM at a feature size from a couple of fab generations back. But that also means it's easy to add onto existing coarse-geometry chips like MCUs (e.g., TI's FRAM-equipped MSP430 line).
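Quick illustration of why that pairing is nice: on the FRAM MSP430s, nonvolatile data is just a C variable you write to like RAM. A minimal sketch, assuming TI's msp430-gcc toolchain (which, as I understand it, has a "persistent" variable attribute that the linker maps into FRAM); boot_count is a made-up example name:

    #include <msp430.h>
    #include <stdint.h>

    /* msp430-gcc places "persistent" variables in the .persistent
     * section, which the linker script maps into FRAM; they're
     * initialized when the device is programmed, not on every reset. */
    uint32_t boot_count __attribute__((persistent)) = 0;

    int main(void)
    {
        WDTCTL = WDTPW | WDTHOLD;   /* stop the watchdog */
        boot_count++;               /* a plain store -- no erase cycle,
                                       no page buffer, RAM-like endurance */
        for (;;) { }
    }

No wear leveling, no write driver, just an ordinary store instruction -- that's the appeal of FeRAM at MCU scale.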
And I forget which other technologies are currently practical, or how well they scale (PRAM, MRAM, etc.).
The issues of Flash are largely solved by using metric fucktons of it. With wear leveling and a few gigs to spare, who cares how many write cycles it can take if you only need a few megs? (Assuming a smaller-scale application, of course. SRAM can't be made very big -- you're never going to get an SD card's worth of it. You'd normally only use NVRAM for persistent settings and such, totaling maybe a few megs, even on a whole-ass PC. I dunno, what would you store there that isn't already on fixed storage (an HDD of whatever sort)?)
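For that "few megs of settings on a fuckton of flash" case, the wear leveling can be dead simple: rotate the record through a big region and let a sequence number tell you which copy is newest. A minimal sketch of the idea -- flash_read / flash_write / flash_erase_region are hypothetical HAL calls, and a real implementation would add CRCs and respect erase-block boundaries:

    #include <stdint.h>

    #define SLOT_SIZE  64u          /* bytes per record slot              */
    #define NUM_SLOTS  4096u        /* 256 KiB of flash for a 64 B record */

    struct settings {
        uint32_t seq;               /* 0xFFFFFFFF means "erased slot"     */
        uint8_t  payload[SLOT_SIZE - sizeof(uint32_t)];
    };

    /* Hypothetical flash HAL -- substitute your part's driver. */
    extern void flash_read(uint32_t addr, void *buf, uint32_t len);
    extern void flash_write(uint32_t addr, const void *buf, uint32_t len);
    extern void flash_erase_region(uint32_t addr, uint32_t len);

    /* Scan all slots for the highest sequence number; -1 if region is empty. */
    static int find_newest(uint32_t base, uint32_t *seq_out)
    {
        int newest = -1;
        uint32_t best = 0;

        for (uint32_t i = 0; i < NUM_SLOTS; i++) {
            uint32_t seq;
            flash_read(base + i * SLOT_SIZE, &seq, sizeof seq);
            if (seq != 0xFFFFFFFFu && (newest < 0 || seq > best)) {
                best = seq;
                newest = (int)i;
            }
        }
        if (newest >= 0)
            *seq_out = best;
        return newest;
    }

    /* Each save lands in the next slot, so every cell takes only
     * 1/NUM_SLOTS of the write traffic. (A real version would ping-pong
     * two regions so a power cut mid-erase can't lose the only copy.) */
    void settings_save(uint32_t base, const struct settings *in)
    {
        struct settings rec = *in;
        uint32_t seq = 0;
        int next = find_newest(base, &seq) + 1;   /* empty region -> slot 0 */

        if ((uint32_t)next >= NUM_SLOTS) {
            flash_erase_region(base, NUM_SLOTS * SLOT_SIZE);
            next = 0;
        }
        rec.seq = seq + 1;
        flash_write(base + (uint32_t)next * SLOT_SIZE, &rec, SLOT_SIZE);
    }

With 4096 slots, each cell sees 1/4096th of the writes, so even 10k-cycle flash gets you tens of millions of settings saves.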
As for persistent storage, HDDs and whatnot, it's always been a matter of wear and longevity: no HD lasts forever, whether it dies from failure of the media, the electronics, or corruption caused by software. Backups and regular replacement are just SOP. You can get lucky, sure -- there are half-century-old HDDs still spinning with good data on them -- but that's the exception rather than the rule, and they hold a truly negligible fraction of the data preserved nowadays (MB vs. TB in a single unit!).
Tim