Another thing that careful design helps with is excessive internal state.
It's relatively easy to sit down and start writing code, implementing various edges of the problem, and tacking on more bits until it meets or exceeds the full scope.
The problem is that "exceeds". This is maybe hard to give a good motivating example for, but say you have one state machine with its state variable; maybe it has 27 meaningful states, stored in a byte variable (256 possible values). What happens in the other 229 states? Do they do weird things -- say, some states are matched by exact numerical value and some by bitmasking, so the undefined values partially overlap the defined ones? It's the software version of the Karnaugh map "do not care" value: maybe you get nothing in the undefined range, maybe it's random, maybe it does strange and interesting (or even exploitable) stuff.
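As a contrived sketch of how that mixture can look (everything here is invented purely for illustration):

    #include <stdint.h>

    static uint8_t state;               /* 256 possible values, only a few "real" */

    void step(uint8_t event)
    {
        if (state == 3) {               /* matched by exact value */
            state = 4;
        } else if (state & 0x80) {      /* matched by bitmask: covers 0x80..0xFF */
            state += event;             /* state can now land anywhere at all */
        }
        /* every other value falls through silently -- the "do not care" region */
    }

Values 0..2 and 5..127 do nothing, 0x80..0xFF do something data-dependent; whether any of that is harmless depends entirely on what the rest of the program assumes.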
And then say you grew the program organically, and now it has two state machines. Maybe they affect each other in weird ways, sometimes twiddling bits, sometimes adding or subtracting values from each other's state variables, or from other state in the program. What happens in all combinations? It's a combinatorial explosion...
The more responsible approach might be to integrate both state machines into one. This doesn't solve the combinatorial problem by itself, but getting one part right (the basic skeleton of the combined state machine) may lead to a design that abstracts the combinatorial problem away (e.g. offloading the additional state to dedicated counter registers, which are only ever accessed via arithmetic, so their range is well defined and easily checked).
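A rough sketch of what that might look like (state names and the counter are placeholders, not a real design):

    #include <stdint.h>

    /* One explicit state machine: every state is named; anything else is an error. */
    enum state { IDLE, RECEIVING, PROCESSING, REPLYING };

    static enum state state = IDLE;
    static uint16_t   retry_count;      /* extra state lives in a plain counter,   */
                                        /* touched only by increment/reset, so its */
                                        /* range is trivial to reason about        */

    void step(void)
    {
        switch (state) {
        case IDLE:       state = RECEIVING;             break;
        case RECEIVING:  state = PROCESSING;            break;
        case PROCESSING: state = REPLYING;              break;
        case REPLYING:   retry_count++; state = IDLE;   break;
        default:         state = IDLE;                  break;  /* undefined -> known-good */
        }
    }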
Another general (if even more abstract, or difficult to analyze) approach is to write down the edge cases explicitly, and work to those. It's trivial to multiply two numbers together -- the average case takes hardly any thought or effort at all. But when does that process fail? If the product needs to be larger than a few tens of thousands, and you forgot that your platform does 16-bit arithmetic (because you were lazy and assumed C's "int" was big enough -- it's 32 bits on modern desktops and phones, but 16 bits on 8086, AVR, etc.!), you're going to have a bad time. Easy enough to check the edges here: put in min/max values of the arguments, try both signs, and only a handful of tests are needed to verify the result. Now, obviously this gets much trickier in a nontrivial function.
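A minimal sketch of that kind of check, using a scale() function invented just for this example; the point is to do the multiply in a type whose width doesn't depend on the platform's int:

    #include <assert.h>
    #include <stdint.h>

    /* Multiply two 16-bit quantities into a 32-bit result, regardless of int width. */
    static int32_t scale(int16_t a, int16_t b)
    {
        return (int32_t)a * (int32_t)b;     /* without the casts, a 16-bit int overflows here */
    }

    int main(void)
    {
        /* The edges: zero, something past 16 bits, both signs, and the min/max corners. */
        assert(scale(0, 0) == 0);
        assert(scale(300, 200) == 60000L);              /* would overflow a 16-bit int */
        assert(scale(-300, 200) == -60000L);
        assert(scale(INT16_MAX, INT16_MAX) == 1073676289L);
        assert(scale(INT16_MIN, INT16_MIN) == 1073741824L);
        return 0;
    }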
And on overflow, the result is undefined in C (for signed types, anyway), though there are few platforms where anything but a modular result actually happens. Maybe modular wraparound is desired anyway, but maybe it should be saturating arithmetic instead -- who knows? Does the program specification say anything about it? If not, it's probably worth escalating that question...
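If the spec did call for saturation, making it explicit is only a few lines (a sketch, not a drop-in library routine):

    #include <stdint.h>

    /* Saturating 16-bit multiply: clamp to the representable range instead of wrapping. */
    static int16_t mul_sat16(int16_t a, int16_t b)
    {
        int32_t p = (int32_t)a * (int32_t)b;    /* full product, no overflow possible here */
        if (p > INT16_MAX) return INT16_MAX;
        if (p < INT16_MIN) return INT16_MIN;
        return (int16_t)p;
    }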
And yet another very general approach: testing all possible inputs. You can take a function as a complete black box, and in principle, check its behavior for every possible input. Sure, that only goes so far -- functions with hundreds of bits of arguments are common -- but it's helpful when it can be done. These days this is quite tractable (if rather slow in the upper range) for inputs up to 40, even 48 bits. And random sampling of that space may give useful insights for even more bits in less time (but easily misses things, too -- see the Pentium FDIV bug!). Obviously, forbidden or redundant states (like random pointers on a protected-mode architecture) can be excluded from the search easily enough, and pointer targets can be controlled, at least so that during testing the pointer stays within allocated memory. Guided random sampling can produce interesting results even faster, i.e., fuzzing. For example, this is powerful enough to elucidate undocumented CPU instructions.
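As a toy illustration (the parity function here is just a stand-in), a 32-bit input space sweeps exhaustively in minutes on a desktop:

    #include <stdint.h>
    #include <stdio.h>

    /* Candidate under test: word parity via the usual xor-fold bit trick. */
    static uint32_t parity_fast(uint32_t x)
    {
        x ^= x >> 16;
        x ^= x >> 8;
        x ^= x >> 4;
        x ^= x >> 2;
        x ^= x >> 1;
        return x & 1;
    }

    /* Reference: slow but obviously-correct bit-by-bit version. */
    static uint32_t parity_ref(uint32_t x)
    {
        uint32_t p = 0;
        while (x) { p ^= x & 1; x >>= 1; }
        return p;
    }

    int main(void)
    {
        uint32_t x = 0;
        do {                                    /* all 2^32 inputs, no sampling */
            if (parity_fast(x) != parity_ref(x)) {
                printf("mismatch at 0x%08lx\n", (unsigned long)x);
                return 1;
            }
        } while (++x != 0);
        puts("all 32-bit inputs pass");
        return 0;
    }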
And related: one may be able to control the combinatorial explosion of the parameter space by carefully minding the type of that (collective) parameter.
https://en.wikipedia.org/wiki/Type_system#Specialized_type_systems
Supplying parameters independently is a product type; the number of states to check grows exponentially with the number of parameters. A sum type uses the same bits for different purposes, by context. In C, the typed implementation is a function which takes a (tagged) union as its parameter; there may be unused bits in some of the subtypes, while the number of states only grows as the largest subtype.
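In C terms (the names here are invented), the difference looks something like this:

    #include <stdint.h>

    /* Product type: both fields supplied independently --
       the space to test is every combination, 2^16 * 2^16 states. */
    struct cmd_product {
        uint16_t speed;
        uint16_t position;
    };

    /* Sum type (tagged union): the same bits serve different purposes by context --
       the space to test is only the largest variant, times the handful of tags. */
    struct cmd_sum {
        enum { CMD_SPEED, CMD_POSITION } tag;
        union {
            uint16_t speed;         /* valid when tag == CMD_SPEED    */
            uint16_t position;      /* valid when tag == CMD_POSITION */
        } u;
    };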
Tim