There are a lot of good suggestions in this thread already, but the one thing I'm missing is estimating the amount of processor power needed.
This, plus, often even more importantly: what peripherals are needed, which are available, and whether the peripherals listed on the datasheet can actually be used (due to limited pin routing options, limited DMA mappings, and so on). This can have quite serious consequences for the choice of the whole product family, or even the manufacturer, which can then affect the code quite a bit: sometimes you can't (and don't want to) do layered abstraction, and some of the hardware "leaks" into the higher-level code, no matter how inelegant that sounds to software scientists.
Yep. I often use a microcontroller to interface to sensors and temporary tools, providing the data over a simple USB serial connection. To find out the practical bandwidth limits of that, I created (and recently updated) an Arduino Teensy sketch and a corresponding Linux C program, to measure the practical amount of data a Teensy 4.x sketch can provide to a normal Linux program over USB serial. To ensure there are no shenanigans, it uses an excellent pseudo-random number generator – the high 32 bits of xorshift64*, which passes all tests in the BigCrush battery, noting that even the Mersenne Twister fails one or two! – to generate the data on the Teensy using a seed provided by the host, with the host both reading the data and verifying it is correct.
(For anyone interested: depending on the host, one can expect a Teensy to provide 23 to 29 megabytes per second over USB serial in practice; but only if you write arrays of data of at least 32 bytes at a time, instead of single bytes or characters. The above-linked example is easy to modify to test specific sizes of write chunks. It only tests the Teensy sending data to the host, though, with only the initial request in the other direction.)
The underlying transport (USB 2.0 High Speed) runs at 480 Mbit/s, but it is not full duplex (so transfers in different directions reduce the total bandwidth available), and there is also quite a bit of protocol overhead. You need dedicated chips or a very finely tuned USB subsystem to reach the 40 megabytes per second or so that USB 2.0 High Speed can do in practice, so a generic i.MX RT1062 USB implementation reaching 23-29 megabytes per second, with actual generated data, over standard USB serial transport, gives a practical idea of what is feasible. (With raw USB bulk transfers, one can probably do somewhat better, as the tty layer is a bit of a bottleneck for USB serial in Linux and the BSDs, but I haven't bothered to check. Yet.)
My point is, I (have to) do practical tests to find those limits. I do them early, in the design phase, because the practical results feed back into my design, and those design changes often call for new practical tests. I don't need exact bounds, just practical estimates of them.
This isn't really test driven development, because there is no development yet at this early design phase. Test-guided design, TGD, maybe?
