+1 to null measurement.
You basically use a variable power supply and adjust its output until it equals the unknown voltage. The null meter is connected between the unknown source and the variable supply, so when the two are equal it reads zero. In that state essentially zero current flows through the null meter, and therefore there is no voltage drop across the high impedance of the unknown source. Then you measure the voltage of the variable supply (easy, since it is low impedance) and that is your unknown voltage.
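If it helps to see the balancing act numerically, here is a small Python sketch of the idea. Every value in it is an assumption for illustration only (a 5 V unknown behind 50 MΩ, a 10 kΩ null detector, a supply adjusted by bisection), not a description of any real instrument:

    # Illustrative model of the null (potentiometric) measurement.
    # Assumed values: 5.000 V unknown EMF behind 50 MOhm, 10 kOhm null detector.
    V_UNKNOWN = 5.000   # EMF of the high-impedance source (the value we want)
    R_SOURCE  = 50e6    # source impedance, ohms
    R_METER   = 10e3    # resistance of the null detector

    def null_current(v_supply):
        """Current through the null detector for a given variable-supply setting."""
        return (V_UNKNOWN - v_supply) / (R_SOURCE + R_METER)

    # Adjust the supply (here by bisection) until the detector current is nulled.
    lo, hi = 0.0, 10.0
    for _ in range(40):                 # 40 halvings -> sub-microvolt resolution
        mid = (lo + hi) / 2
        if null_current(mid) > 0:       # supply still below the unknown EMF
            lo = mid
        else:
            hi = mid
    v_supply = (lo + hi) / 2

    print(f"supply at null:   {v_supply:.6f} V")                # ~5.000000 V
    print(f"residual current: {null_current(v_supply):.1e} A")  # effectively zero
    # At null, the low-impedance supply reads the unknown EMF directly and the
    # 50 MOhm source delivers essentially no current, so there is no loading error.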
This is also the technique used to measure Weston cells, yesteryear's voltage standards/references. They could not tolerate even a few microamps of current at their output, so the null method was used to reduce the measurement load to near zero.
Many bench meters do "high impedance" measurements only up to ±10 V, others only up to 2 or 3 V. One exception is the Keithley 2001, which can do high-impedance measurements up to ±21 VDC. "High impedance" on this meter is specified as >10 GOhm (how much greater? who knows). With a 50 MOhm source, that can still mean a loading error of roughly 0.5% from the input impedance alone, before considering the input bias current, which is not zero. Bias-current error can be estimated by connecting a 50 MOhm resistor (your source impedance) across the terminals and noting the resulting reading. The null technique can do better, and without the 21 V limit.
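For a sense of scale, here is that arithmetic in a few lines of Python, taking the ">10 GOhm" spec at its 10 GOhm minimum and a 50 MOhm source. The bias current below is an assumed example figure only; for a real number, measure it with the 50 MOhm resistor as described above:

    # Loading and bias-current error estimates for a direct (non-null) measurement.
    R_SOURCE = 50e6      # source impedance, ohms
    R_IN     = 10e9      # meter input impedance (worst case of the ">10 GOhm" spec)
    I_BIAS   = 10e-12    # assumed input bias current, amps (illustrative only)
    V_SRC    = 10.0      # example source voltage

    loading_error = R_SOURCE / (R_SOURCE + R_IN)   # voltage-divider error
    bias_error_v  = I_BIAS * R_SOURCE              # offset voltage from bias current

    print(f"loading error:      {loading_error:.3%}")           # ~0.498 %
    print(f"bias-current error: {bias_error_v*1e6:.0f} uV")     # 500 uV for 10 pA
    print(f"as a fraction:      {bias_error_v / V_SRC:.4%}")    # ~0.005 % of 10 V
    # With the null method both terms drop out: the meter carries essentially no
    # current, and the reading is taken from the low-impedance variable supply.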