A couple of reasons. The most fundamental is that many components depend on the applied voltage difference (or current flow), including its direction.
Alternating polarities can easily be generated by AC-coupling the signal with a capacitor, inductor or transformer. These types of applications don't usually benefit from negative supplies anyway. For example, audio amplifiers -- for much of history -- have been AC coupled, single supply. The DC is turned on and off to varying degrees, and then filtered out to leave nice bipolar AC at the output.
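The AC-coupling capacitor and the load form a high-pass filter, so "filtering out the DC" is just a corner-frequency calculation. A minimal sketch (the 1000 uF / 8 ohm values are illustrative, not from the text above):

```python
import math

def highpass_corner_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB corner of a series coupling cap C into load R: f = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Example: 1000 uF output capacitor into an 8 ohm speaker
f_c = highpass_corner_hz(8.0, 1000e-6)
print(f"{f_c:.1f} Hz")  # ~19.9 Hz -- passes audio, blocks the DC bias
```

Anything well above the corner passes through essentially unattenuated; the DC operating point stays behind the cap.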
AC applications sometimes benefit from bipolar supplies. Startup transients (the establishment of those DC offsets!) are often undesirable. Early, "second generation" I guess you could say, transistor audio amplifiers were constructed as a power output stage (usually a quasi-complementary emitter follower), biased at half the supply voltage, with a series coupling capacitor connecting the output to the speaker, which returns to ground. When power is turned on, the capacitor starts at 0V, so the speaker gets a whack of VCC/2; similarly when turned off (the pulse being negative instead). The input has the same problem: the input stage might be biased anywhere from a few volts above ground to VCC/2 or so, and that needs a coupling capacitor that has to charge and discharge as well.
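That turn-on whack is just an RC step: the output node jumps to VCC/2 while the cap is still at 0V, and the speaker sees the difference decaying with tau = R*C. A rough sketch, with hypothetical part values (24V supply, 8 ohm speaker, 2200 uF cap):

```python
import math

def turn_on_thump(vcc: float, r_speaker: float, c_coupling: float, t: float) -> float:
    """Speaker voltage t seconds after power-on, assuming the output node
    snaps instantly to VCC/2 while the coupling cap starts at 0 V.
    The speaker sees a VCC/2 step decaying with tau = R*C."""
    tau = r_speaker * c_coupling
    return (vcc / 2.0) * math.exp(-t / tau)

print(turn_on_thump(24.0, 8.0, 2200e-6, 0.0))   # 12.0 V -- the full whack at t=0
print(turn_on_thump(24.0, 8.0, 2200e-6, 0.05))  # well under 1 V; tau is ~17.6 ms
```

In reality the supply rail itself ramps up over some milliseconds, which softens the step, but the charge still has to go through the speaker one way or another.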
Noise is also a consideration. Power supply noise is introduced into the signal path by those bias-setting voltage dividers and such. To some extent, split power supplies can be faked using capacitive dividers; the entire circuit, from input to output, is referenced to the capacitors' midpoint. The circuit no longer has to be AC coupled, but it would be a good idea, since drawing too much current at a time from one supply or the other causes the capacitor divider voltage to drift (the "sausage effect": a persistent positive load might drag a +/-12V supply over to +2/-22V, causing time-dependent clipping).
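The drift rate is easy to estimate: seen from the midpoint node (with the rails held stiff by the main supply), the two caps act in parallel, so a net DC imbalance current I moves the midpoint at dV/dt = I/(2C). A sketch with assumed values (4700 uF caps, 100 mA imbalance), chosen to reproduce the +2/-22V figure above:

```python
def rail_voltages(v_total: float, c_farads: float, i_load: float, t: float):
    """Midpoint drift of a capacitive split supply: two equal caps C in
    series across a fixed v_total, with a net DC current i_load returned
    into the midpoint.  From the midpoint the caps are effectively in
    parallel, so the node drifts at i_load / (2*C) volts per second.
    Returns (+rail, -rail) as seen from the (drifting) midpoint."""
    drift = i_load * t / (2.0 * c_farads)  # midpoint rises by this much
    return (v_total / 2.0 - drift, -(v_total / 2.0) - drift)

pos, neg = rail_voltages(24.0, 4700e-6, 0.1, 0.94)
print(f"+{pos:.1f} V / {neg:.1f} V")  # +2.0 V / -22.0 V after ~1 s
```

So even modest sustained asymmetry eats the headroom on one side in well under a second with ordinary cap values; hence the advice to keep the load AC coupled anyway.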
The most important use is for relatively sensitive, DC coupled circuits, where ground is used as a voltage reference. This saves the trouble of building a differential sensing amplifier -- assuming ground is nice and stable and "ground", of course.
Components that need many voltages and references can be hard to work with, and a compromise is usually chosen. Take a basic electrostatic CRT for example: if the cathode is at 0V, the first grid will be 0 to -100V, the first anode (focus) around 300V, the final anode 2kV (+/- 100V for astigmatism), and the deflection plates at 2kV with a +/- 100V differential (for deflection).
So, intensity control (grid-cathode voltage) is referenced to the cathode end, and deflection is referenced to the anode end.
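Since the tube only cares about potential differences, "where ground goes" is a free choice; re-referencing is just subtracting an offset. A toy sketch using the example electrode voltages above (cathode-referenced), shifted so the anode end sits at 0V instead:

```python
def re_reference(potentials: dict, new_ref: str) -> dict:
    """Shift a set of electrode potentials so that new_ref becomes 0 V.
    Only the differences matter to the tube; 'ground' is our choice."""
    offset = potentials[new_ref]
    return {name: v - offset for name, v in potentials.items()}

# Cathode-referenced potentials from the CRT example above (volts)
crt = {"cathode": 0.0, "grid": -100.0, "focus": 300.0, "anode": 2000.0}

# Ground the anode end instead: the cathode lands at -2 kV
print(re_reference(crt, "anode"))
# {'cathode': -2000.0, 'grid': -2100.0, 'focus': -1700.0, 'anode': 0.0}
```

Which end you ground determines which circuitry gets to live near 0V, and that's exactly the trade-off the TV and oscilloscope cases below make in opposite directions.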
Electrostatic deflection CRTs were used in TVs before magnetic deflection was introduced (this is like '30s era). Because TV deflection signals are AC, coupling capacitors do a fine job. Unfortunately, they also have to withstand several kilovolts, which gets expensive, and the HV has to be well filtered, otherwise that noise/ripple gets coupled into the display. Video (which is DC coupled via DC restoration) is driven into the cathode, which is near ground (0-100V), with the grid held near ground (at an adjustable voltage, for brightness control).
The alternative would be a negative supply, with the cathode at -2kV, the video coupled down to -2kV, and the deflection plates near 0V (probably 0-300V, being a typical tube amp voltage range). The detected video signal is DC coupled, but it could be AC coupled instead -- the DC recovered using a DC restore circuit (little more than a diode). That's just one more thing to have hanging around at high voltages, and having to build circuitry on top of a high voltage is something better avoided.
An oscilloscope has the inverse problem: intensity isn't a big deal, but deflection (linearity and bandwidth, including DC coupling) is paramount. It would be insane to build the entire deflection amplifiers up at +2kV and somehow still couple the signal into it, so instead, the cathode goes to -2kV. Intensity modulation usually amounts to blanking, which has a low duty cycle, so the lack of DC coupling isn't a big deal. (Scopes with a "Z" axis -- variable intensity -- have to address this anyway, and usually do it by providing a second complete HV supply, just to establish the cathode-to-driver voltage! Having to couple the signal through all this circuitry costs money, space and potential bandwidth.)
Fancier CRTs can end up with even more supplies; for example, a ("second generation"?) electrostatic CRT, typical of most Tektronix tubes, has something like -2.5kV cathode, deflection plates (normal plates, or distributed transmission line types) near 0V, and an accelerator potential of perhaps 5-10kV. Fortunately, no circuitry is needed up at the UHV!
But then there's Trinitron CRTs, which have a pre-anode sort of thing at, say, 31kV, and the final at 33kV. So a 2kV power supply is needed on top of the first 31 (or below the 33). This was often a small flyback transformer (it doesn't need to deliver much power, a few watts) encased in an awful lot of potting (it has to do it with 30kV+ isolation!).
Occasionally, physics experiments get even more difficult, with, oh I don't know, maybe a pulsed electron or ion beam source at megavolts negative potential. These sorts of situations tend to be inductively or optically coupled, the pulse signals themselves being coupled with fiber optics. Putting circuitry at such potentials isn't as much of a problem, since the hardware is big and expensive anyway, so Faraday shields keep fields and noise off the stuff.
Tim