The hardest part about unit testing is writing testable code.
The amount of effort you would need to go to in order to write testable code in the MCU world would probably make "wide coverage" impractical.
Note: I say "testable code" with a specific meaning, which is what unit testing is about. It says nothing, absolutely nothing, about whether the code functions as per requirements. The idea is just to test that the code does what the developer intended it to do.
When you come to testing functionality, there is hardware involved, so it becomes an integration test. When everything is connected up together, that is a system test. When real users are using it, it's a user acceptance test. Then alpha, beta, gamma, etc. etc. etc. ad inf.
Trying to pick a suitable MCU-world example... say DMA'ing UART buffers for Tx and Rx. You do some detection and pattern matching, formulate some response and send it.
In most MCU projects people just start writing hardware UART code and then the "business logic" grows like moss around it. Completely un-unit-testable. At best you can integration test bits of it with a physical test harness maybe, but you are more likely to just skip all testing all the way through to full system test. This is fine until you have two or more engineers working on the thing, or even one developer with multiple change lists open. Then it ruins your day and your week.
In the Big Iron world, assuming we have no internet connection to download a framework that already does this (fully tested and in a stable RELEASE state), and assuming we don't have the memory for one either, I/we would start by splitting it into layers:
* Code which does the analysis, pattern matching and produces the correct responses - this code knows nothing about buffering or UARTs.
* Code which manages buffers. Provides them, cleans them, queues them, blah, blah.
* Code which sends and receives buffers via UART.
Not only do these three layers allow you to test the top two in a "code-only test harness", aka unit test, but you can swap any layer component for any other which complies with the same contract (header file).
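To make that concrete, here is a minimal sketch of the three contracts in C. Every name here is hypothetical, invented for illustration, not taken from any real project. The point is that layer 1 is a pure function, so it compiles and runs on a desktop host:

```c
#include <stddef.h>
#include <string.h>

/* Layer 1: protocol logic. Pure code, knows nothing about DMA, buffers
 * or UARTs, so it builds and runs on the host for unit testing.
 * Inspect a received frame, write a response, return its length
 * (0 = no response). */
size_t protocol_handle(const char *rx, size_t rx_len,
                       char *tx, size_t tx_cap)
{
    if (rx_len >= 4 && memcmp(rx, "PING", 4) == 0 && tx_cap >= 4) {
        memcpy(tx, "PONG", 4);
        return 4;
    }
    return 0; /* unrecognised frame: stay silent */
}

/* Layer 2: buffer management contract (what a uart_buf.h might declare). */
typedef struct { char data[64]; size_t len; } frame_t;
frame_t *buf_acquire(void);
void     buf_release(frame_t *f);

/* Layer 3: UART transport contract (uart_io.h). The only layer that
 * touches hardware registers; everything above it is host-testable. */
void uart_send(const frame_t *f);
```

On target you implement the layer 2 and layer 3 prototypes against your pool allocator and UART/DMA registers; in the unit-test build you implement them with plain arrays, and `protocol_handle` never knows the difference.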
The bottom one can also be tested at a much lower level. You just need to feed it buffers, so a proper integration test of the UART hardware access layer is possible.
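For instance, one way to exercise the transport contract without the rest of the application (again, every name here is hypothetical) is a loopback stand-in that queues each sent frame for receive. The real driver would programme the UART/DMA registers behind the same two functions, and the code above the contract cannot tell the difference:

```c
#include <stddef.h>

/* Hypothetical transport contract (uart_io.h): the only thing the rest
 * of the firmware knows about the UART layer. */
typedef struct { char data[64]; size_t len; } frame_t;
void   uart_send(const frame_t *f);
size_t uart_recv(frame_t *f);   /* returns bytes received, 0 if none */

/* Loopback stand-in for the real driver: everything sent is queued
 * straight back for receive. */
static frame_t loop_q[8];
static size_t  loop_head, loop_tail;

void uart_send(const frame_t *f)
{
    loop_q[loop_tail++ % 8] = *f;
}

size_t uart_recv(frame_t *f)
{
    if (loop_head == loop_tail)
        return 0;               /* nothing pending */
    *f = loop_q[loop_head++ % 8];
    return f->len;
}
```

An integration test on real hardware can then reuse exactly the same calling code with the Tx pin wired back to the Rx pin.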
The "isolation of concern" applies not just to the code, but to the programmer and which component he is working in. As long as it meets its contract it is said to be good. A unit test that proves it meets its contract makes it formally tested to do so. You are only testing that one bit of code, not the whole application. If the whole application is broken, verifying whether a component is functioning correctly in isolation is very advantageous.
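As a self-contained sketch of what "formally tested against its contract" looks like, take a deliberately trivial, made-up component, an XOR frame checksum (not from the example above), whose contract is nothing more than its header declaration plus the documented behaviour:

```c
#include <stddef.h>

/* Hypothetical component under test, as its header (say frame_check.h)
 * would declare it: XOR of all bytes in the frame. An empty frame
 * checksums to 0, and XOR-ing identical bytes cancels out. */
unsigned char frame_checksum(const unsigned char *p, size_t n)
{
    unsigned char c = 0;
    while (n--)
        c ^= *p++;
    return c;
}
```

A host-side unit test then asserts exactly that contract: empty frame gives 0, self-cancelling patterns give 0, and so on. If the whole application is broken, this still tells you whether this one piece does what its header promises.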
You do have to keep remembering where you are though: what memory footprint you can afford, and how much the extra reference hops and some unnecessary protections cost in terms of cycles, where that is important.
The biggest advantage of unit tests by far, and the one so often forgotten, is not that they prove the code works. It's that in six months' time, when someone has just done a three-week-long refactoring, the code STILL works.
Analogy with electronics... your "units" are ICs. Except that you can make your own. So while scattering a bunch of passives and discretes all over the PCB might be required in RF stuff, most people would prefer to get an IC which encapsulates that functionality. If you were in a lab capable of making fab'd prototype ICs, you could see the appeal of pushing functionality into ICs where it can be tested in a repeatable way and then manufactured in a pre-tested, quality form and re-used in other projects. Your ICs are your units and the test harnesses for them are the unit tests. Having those ICs run with the minimum amount of external support is the art of writing unit-testable code.