I'm testing a bunch of power supplies for an article I'm working on. I made a number of measurements and realized that my instrumentation choices weren't the best, so I switched things around to optimize the results and now I am truly confused. This is a question deep into EE territory....
My test is to measure input and output parameters vs. load in 10% increments from 0% to 130% of rated load. I'm using a Kill-A-Watt unit to measure the AC input (voltage, current, watts, VARs and power factor), a Fluke 45 to measure output voltage, and another meter to measure output current. Additionally, because the little 5 V switchers draw so little current that they're at the bottom of what the Kill-A-Watt can measure, I'm also using a separate meter to measure the AC input current.
I did a bunch of readings with my Fluke 79 Series II measuring the AC current, but after I completed them I realized that the Fluke is not true RMS, and with the current waveform of the switchers that's probably a requirement. So I switched meters around and used a BK Precision 391A, which is true RMS, to measure the AC input current.
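My rough understanding of the difference, sketched below with a made-up waveform (narrow current pulses near the voltage peaks, like a capacitor-input front end, not an actual measurement): an average-responding meter reports the rectified mean scaled by the sine-wave form factor, which reads low on a peaky switcher input current.

```python
import numpy as np

# Illustrative only: a capacitor-input SMPS draws current in narrow pulses
# near the line-voltage peaks. This is a made-up waveform, not a measurement.
t = np.linspace(0, 1, 10_000, endpoint=False)    # one line cycle
v = np.sin(2 * np.pi * t)                        # line voltage (normalized)
i = np.where(np.abs(v) > 0.9, np.sign(v), 0.0)   # current flows only near the peaks

true_rms = np.sqrt(np.mean(i**2))                # what a true-RMS meter reports
avg_resp = 1.11 * np.mean(np.abs(i))             # average-responding meter: rectified
                                                 # mean scaled by the sine form factor

print(f"true RMS reading          : {true_rms:.2f}")
print(f"average-responding reading: {avg_resp:.2f}")  # reads low for peaky waveforms
```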
Now I am truly confused by the results. If I calculate input power by multiplying the current read on the Fluke (not true RMS) by the AC line voltage, the result is close to the "watts" indicated by the Kill-A-Watt, and my efficiency for a newish 12 V switcher is in the mid-80% range.
On the other hand, if I calculate input power by multiplying the BK Precision current (true RMS) by the line voltage, the result is consistent with the VAR readings of the Kill-A-Watt, and my efficiencies come out around 45%.
It doesn't make sense to me that using a true-RMS current reading should give a result in VARs, since that calculation knows nothing of the power factor, etc. Is there some flaw in my logic? I don't have the optimal four DVMs that Dave recommends, or I'd use both meters to measure the input current simultaneously to verify the measurements.
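For reference, here's the arithmetic I'm trying to reconcile, as I understand the power-triangle relationships (the numbers are placeholders, not my actual readings):

```python
import math

# Placeholder values, not my actual readings.
v_rms = 120.0   # line voltage (V)
i_rms = 0.5     # true-RMS input current (A)
pf    = 0.6     # power factor reported by the Kill-A-Watt
p_out = 30.0    # measured DC output power (W)

s = v_rms * i_rms            # apparent power (VA): line voltage times true-RMS current
p = s * pf                   # real power (W): what the Kill-A-Watt calls "watts"
q = math.sqrt(s**2 - p**2)   # reactive/distortion power (VAR)

print(f"S = {s:.1f} VA, P = {p:.1f} W, Q = {q:.1f} VAR")
print(f"efficiency against P: {p_out / p:.0%}")   # mid-80s with these placeholders
print(f"efficiency against S: {p_out / s:.0%}")   # ~50% with these placeholders
```

With a power factor that low, S and Q aren't far apart, so maybe that's why the voltage-times-true-RMS-current product lands near the Kill-A-Watt's VAR reading?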
Thanks for any ideas,
Jon
The data is shown below.