It's fairly common to write a specific value to a known location as part of the programming process, then to test for that value instead of relying on the erased state.
You can be reasonably sure that the memory isn't going to initialise to (say) 0xDEADBEEF at every location. So, in your start-up code, check for that unique signature, and if it's not there, program the default configuration.
Flash technology may always erase to the same state, but there's no rule that says the data actually stored can't be inverted, if that's convenient for whatever reason. Erasing to 0xFF is usual, but 0x00 doesn't surprise me. In theory there's no reason why it couldn't erase to any other pattern (i.e. some bits inverted, others not). However, it's usual to have a rule that says bits can always be cleared (or set) without first erasing the sector, and that wouldn't be possible if the bits weren't all physically stored with the same sense.
As an aside: I was caught out a while ago writing a boot loader for a PIC, which has error-corrected Flash memory. On these devices you cannot write any Flash location more than once without an erase, because there are hidden check bits that get calculated and stored on each write. If you write to the same location twice, the check bits end up as the logical AND of the two possible values, and so they no longer match the data that ends up stored in the Flash.
The result? Write a block of data, leaving a few bytes reserved as 0xFF, then go back and fill in a checksum within the same sector, and the CPU crashes when it tries to read the checksum back to verify it. It took a while to find that one, given that there's absolutely nothing wrong with the code doing the read.