Hardware is more efficient at decoding video than software is, but it's still a *huge* amount of computational work, and not free.
Except that the point of the dedicated hardware is to make it practically free, and it does. The CPU load approaches zero, and because it's hardware designed to do that task efficiently, it uses extremely little energy.
It’s not like offloading to the GPU’s shader units, where it’s still code running on somewhat more optimized hardware. The steps in a video decoding path (decompression, color space conversion, scaling for display) are all things that our CPUs and GPUs have dedicated hardware for, and can therefore do practically for free, both in terms of CPU load and energy consumption.
And since it appears you’re going to be pedantic: yes, I’m aware that there are instances where one or more of those steps will be done in the CPU, like when playing back a codec that one’s computer doesn’t have a hardware decoder for. But you still benefit from the other steps being done in hardware. Of course, the most common codecs have had full hardware support for years now.
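If you want to see what hardware decode paths your own machine exposes rather than take my word for it, here's a rough sketch that asks ffmpeg what it was built with. It assumes an ffmpeg binary is on your PATH, and the method names (videotoolbox, vaapi, cuvid, qsv, ...) vary by platform and build:

```python
# Rough sketch: list the hardware acceleration methods and the
# hardware-backed decoders an ffmpeg build exposes. Assumes ffmpeg
# is installed and on PATH; exact names vary by platform and build.
import subprocess

def ffmpeg_lines(*args):
    out = subprocess.run(["ffmpeg", "-hide_banner", *args],
                         capture_output=True, text=True)
    return (out.stdout + out.stderr).splitlines()

# Hardware acceleration methods (e.g. videotoolbox on macOS, vaapi/cuda on Linux)
print("hwaccels:")
for line in ffmpeg_lines("-hwaccels")[1:]:
    if line.strip():
        print("  " + line.strip())

# Decoders whose names suggest a dedicated hardware path (e.g. h264_cuvid, hevc_vaapi)
hw_suffixes = ("_cuvid", "_vaapi", "_qsv", "_v4l2m2m")
print("hardware-backed decoders:")
for line in ffmpeg_lines("-decoders"):
    if any(s in line for s in hw_suffixes):
        print("  " + line.strip())
```

On any reasonably recent machine you'll find the common codecs (H.264, HEVC, usually AV1 on newer silicon) covered, which is the point.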
Don't forget: normal code is run by "hardware" too -- the CPU.
Don’t be a smartass, you know exactly what I mean. If the hardware being used isn’t dedicated to that function, then we speak of running it “in software”, precisely to contrast with when it’s running on dedicated hardware.
ssh to a remote system would be a lighter load than video playback. Or text editing. For example.
Perhaps. But even the latter could easily become a “bigger” software task, depending on the editor: if it’s anything more than a completely dumb text editor, it may chew up a lot of CPU doing real-time analysis of the text, the way that modern word processors and code editors do.
So while I’d certainly say text editing is a typically “light” load, I kinda doubt it’d be lighter than typical fully-hardware video playback.
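If you'd rather eyeball the difference than argue about it, something like this gives a rough comparison of per-process CPU load. It uses the third-party psutil package, and the process names are only examples; substitute whatever player and editor you actually run:

```python
# Rough sketch: sample CPU usage of a video player and a text editor
# over a one-second window. Requires the third-party psutil package;
# the process names below are only examples.
import time
import psutil

TARGETS = {"mpv", "vlc", "code", "vim"}  # example player/editor process names

procs = [p for p in psutil.process_iter(["name"])
         if (p.info["name"] or "").lower() in TARGETS]

# First call primes the per-process counters; the next call reports
# the CPU percentage accumulated since then.
for p in procs:
    p.cpu_percent(None)
time.sleep(1.0)

for p in procs:
    try:
        print(f"{p.info['name']:>8}: {p.cpu_percent(None):5.1f}% CPU")
    except psutil.NoSuchProcess:
        pass
```

With hardware decode working, the player typically sits near idle, which is the comparison I'm making.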