A 1% error on the 10k or 1k resistor in my example will have the same effect as a 1% error on the 10M or 1M resistor in the high-resistance version.
Just ignore S. Petrukhin. As we all know, the magnitude of the resistor values does not affect the tolerance of the divider ratio. This is easy to verify by doing exactly what S. Petrukhin suggested - the calculation. He probably did the math once, made a simple arithmetic slip, and the wrong idea stuck.
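If anyone wants to see it in two lines of Python: put a 1% error on the top resistor of a 10k/1k divider and of a 10M/1M divider, and the relative ratio error comes out identical, because the ratio only depends on the resistor *ratio*, not the absolute scale.

```python
# Sanity check: a 1% error on one resistor shifts the divider ratio
# by the same relative amount regardless of the absolute impedance.
def ratio(r_top, r_bot):
    return r_bot / (r_top + r_bot)

for scale in (1e3, 1e6):  # 10k/1k divider vs 10M/1M divider
    nominal = ratio(10 * scale, 1 * scale)
    with_err = ratio(10 * scale * 1.01, 1 * scale)  # top resistor +1%
    rel_err = (with_err - nominal) / nominal
    print(f"scale {scale:.0e}: relative ratio error = {rel_err:.6%}")
```

Both lines print the same error (about -0.9%), which is the whole point.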
With large values, however, expected and unexpected leakage currents play a more important role. In other words, add an external (parasitic) very-high-value resistor into the mix, and now the absolute impedance of your divider starts to matter. It all boils down to your accuracy requirements versus how well you can control the leakage currents. In low-power designs, resistor divider impedances around 1 MΩ are common, even in "precision" parts like regulator feedback dividers - or, as in your case, battery voltage measurement dividers. For some, "high precision" means ±1% error - that's what a typical linear regulator, or a voltage reference integrated into the MCU, introduces anyway; for others, maybe ±0.01% is required. In the latter case, both very high and very low divider values come with their own sets of issues.
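A quick sketch of why the absolute impedance matters here - model the leakage as a fixed high-value resistor in parallel with the bottom leg (100 MΩ below is an assumed illustrative value, not from any datasheet) and compare a low-impedance and a high-impedance divider:

```python
# Illustrative only: a fixed parasitic leakage path, modeled as an
# assumed 100 Mohm resistor in parallel with the bottom resistor,
# shifts the divider ratio more as the divider impedance goes up.
def ratio(r_top, r_bot):
    return r_bot / (r_top + r_bot)

R_LEAK = 100e6  # assumed parasitic leakage resistance

for r_top, r_bot in ((10e3, 1e3), (10e6, 1e6)):
    nominal = ratio(r_top, r_bot)
    r_bot_eff = r_bot * R_LEAK / (r_bot + R_LEAK)  # parallel combination
    actual = ratio(r_top, r_bot_eff)
    err = (actual - nominal) / nominal
    print(f"{r_top:.0e}/{r_bot:.0e} divider: ratio error {err:.4%}")
```

The 10k/1k divider barely notices the leakage, while the 10M/1M divider picks up roughly a 0.9% error from the exact same parasitic path - a thousand times worse, because the divider impedance went up a thousandfold.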
I use the capacitor strategy with SAR ADCs all the time. People keep suggesting adding random opamps to the mix, but I simply don't see the point when I don't need the bandwidth. Designers who make themselves look irreplaceable by adding unnecessary complexity are not limited to software engineering; we see this in hardware, too.
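For anyone unfamiliar with the capacitor strategy: a reservoir capacitor at the ADC pin supplies the charge the SAR sample-and-hold capacitor grabs at the sampling instant, so a high-impedance divider can drive the ADC directly if the sample rate is low. A back-of-envelope charge-sharing estimate (all component values below are assumptions for illustration, not from any specific part):

```python
# Back-of-envelope for the capacitor strategy: worst-case droop when
# the SAR sample cap (starting from 0 V) charge-shares with an
# external reservoir cap. All values are assumed for illustration.
C_SAMPLE = 15e-12   # SAR sample/hold capacitance (assumed)
C_EXT = 100e-9      # external reservoir capacitor (assumed)
V_IN = 3.0          # divided-down voltage at the ADC pin (assumed)

droop = V_IN * C_SAMPLE / (C_SAMPLE + C_EXT)  # charge conservation
lsb_12bit = V_IN / 4096
print(f"droop ~ {droop * 1e6:.0f} uV ({droop / lsb_12bit:.2f} LSB @ 12 bit)")
```

With these assumed values the droop stays under one 12-bit LSB, and between conversions the divider recharges the reservoir cap; the only real constraint is that the divider's RC time constant limits how often you can sample.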