This is a question about why...
I have never paid much attention to home UPSes. I was responsible (prior to retiring) for many data center systems, but for home use I just considered them a fancy surge suppressor and something to ride out brief outages, and gave them little thought.
I thought they were all cheap units with nasty square-wave output, but realized my last purchase was a sine-wave, line-interactive sort, and decided to look more closely (boredom was involved).
I was expecting and did confirm that when on battery the ground maintains continuity back through the building power.
I was pleasantly surprised to see a relatively clean sinewave, even under substantial load (though to be fair it was a resistive load since I was lazy at the time).
What I was not expecting to find was that it is sort of split phase (probably not the right term), with neutral and hot each about 60 V (RMS) relative to ground and inverted in phase, for 120 V (RMS) between them.
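For anyone who wants to see the arithmetic, here is a minimal sketch (my own illustration, not anything from the UPS documentation) of how two anti-phase legs at about 60 V RMS each to ground add up to 120 V RMS between them:

```python
import math

# Two inverter legs, each ~60 V RMS to ground, 180 degrees out of phase.
V_LEG_RMS = 60.0
AMPLITUDE = V_LEG_RMS * math.sqrt(2)  # peak voltage of each leg
SAMPLES = 1000                        # points over one cycle

def rms(values):
    """Root-mean-square of a list of instantaneous voltages."""
    return math.sqrt(sum(v * v for v in values) / len(values))

hot, neutral, between = [], [], []
for n in range(SAMPLES):
    theta = 2 * math.pi * n / SAMPLES
    v_hot = AMPLITUDE * math.sin(theta)    # "hot" leg to ground
    v_neu = -AMPLITUDE * math.sin(theta)   # "neutral" leg, inverted
    hot.append(v_hot)
    neutral.append(v_neu)
    between.append(v_hot - v_neu)          # hot-to-neutral voltage

print(round(rms(hot)))      # ~60 V RMS, leg to ground
print(round(rms(neutral)))  # ~60 V RMS, other leg to ground
print(round(rms(between)))  # ~120 V RMS between the legs
```

Same result as conventional US split-phase service, just at half the leg voltage: the differential voltage doubles because the legs are mirror images.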
I realize nothing bad should come of that, but I am curious about the approach (rather than using ground as a reference for neutral).
Is it because of restrictions on bonding neutral and ground anywhere other than at the panel? (I would think those would not apply to a separately derived system, but that is an area of the NEC I know little about.)
Is it to reduce the voltage on each conductor relative to ground for some reason? (I guess in principle a short to ground through a human might be less deadly, but a short from hot to neutral remains the same.)
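As a rough back-of-envelope on that "less deadly" guess (using an assumed 1 kΩ body resistance purely for illustration; real body impedance varies widely), halving the leg-to-ground voltage halves the Ohm's-law fault current through a person:

```python
# Illustrative only: 1000 ohms is a commonly assumed worst-case
# body resistance, not a measured value.
BODY_RESISTANCE = 1000.0  # ohms (assumption)

def fault_current_ma(voltage_rms):
    """Ohm's law estimate of RMS body current in milliamps."""
    return voltage_rms / BODY_RESISTANCE * 1000.0

print(fault_current_ma(120.0))  # hot-to-ground on a grounded-neutral system
print(fault_current_ma(60.0))   # leg-to-ground on the 60/60 floating output
```

This ignores whatever actually references the floating output to ground inside the UPS (often just capacitive coupling), which could limit the current much further.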
Is it because it is easier (i.e., cheaper) to build that way?
Something I haven't considered?
Apologies if this is a silly question; it was just a bit of a surprise.
Linwood
PS. Attaching waveform if anyone is curious.