One word: interrupts.
Reentrant and recursive programming requires a stack (or, if there's no hardware stack, a software emulation of one).

That's still not a hard requirement -- interrupts can usually be serviced quickly, and usually one at a time. But it's a lot easier to deal with things if they can pile up a modest amount.
You can design a system so that interrupts are used more like events, where multiple concurrent events (even from the same source) can be executing. An example might be a global timer interrupt, which fires off housekeeping activities periodically (reading inputs, swapping tasks in an RTOS, etc.). Each of those events is stored on the stack, on top of the interrupt that started it. That interrupt might fire again before the first one has finished -- in which case a non-reentrant version gets its variables trashed!
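To make the "trashed variables" failure concrete, here is a minimal sketch in C. The nested call stands in for the timer interrupt firing again before the first invocation finishes; all names are illustrative, not from any particular system.

```c
#include <assert.h>
#include <string.h>

/* Non-reentrant: every invocation shares one static buffer, like a
 * file-scope variable used inside an interrupt handler. */
static char shared_buf[16];

static const char *copy_nonreentrant(const char *s) {
    strcpy(shared_buf, s);
    return shared_buf;
}

/* Reentrant: each invocation writes into caller-supplied storage,
 * which on a real machine would live in that invocation's stack frame. */
static const char *copy_reentrant(const char *s, char *buf) {
    strcpy(buf, s);
    return buf;
}

/* Usage: if a "nested interrupt" calls copy_nonreentrant("tick") while an
 * outer invocation still holds "housekeep", the outer result is trashed;
 * the reentrant version survives because each call has its own buffer. */
```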
Not to say functions should be exclusively reentrant, either: it can be very useful to keep global state -- say, for example, to skip an optional function if an earlier invocation is still waiting on the stack. This might be implemented by setting a static (global) "busy" flag at the start of the function (and clearing it at the end), and checking that flag before jumping into the function again.
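A minimal sketch of that busy-flag pattern, with a recursive call simulating the interrupt re-firing mid-work (names are illustrative; on real hardware you would also want an atomic test-and-set, since a plain flag check can race with the interrupt):

```c
#include <assert.h>
#include <stdbool.h>

/* If the routine is still on the stack (re-entered before it finished),
 * skip this round instead of trashing its state. */
static volatile bool housekeeping_busy = false;
static int runs = 0, skips = 0;   /* counters just for demonstration */

static void housekeeping(void) {
    if (housekeeping_busy) {      /* earlier invocation still running */
        skips++;
        return;
    }
    housekeeping_busy = true;
    runs++;
    /* ... non-reentrant work here ... */
    if (runs == 1)
        housekeeping();           /* simulate the timer firing again mid-work */
    housekeeping_busy = false;
}
```

The first call runs the work and absorbs the simulated re-entry as a skip; later calls run normally once the flag is clear.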
A universally applicable case of this: checking stack size/position/usage. If you have an arbitrarily recursive program, with no provable limit on stack requirements, and you outright ignore how much stack is being used, then some day, your program is going to steamroll through other memory. Madness ensues.
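One common way to keep an eye on stack usage is "watermarking": fill the stack region with a known pattern at boot, then count how much of the pattern has been overwritten. A minimal sketch, with an ordinary array standing in for the real stack region (addresses and sizes are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define STACK_WORDS 256
#define FILL_WORD   0xDEADBEEFu

/* Stand-in for the stack region; imagine it grows down from
 * stack_region[STACK_WORDS - 1] toward stack_region[0]. */
static unsigned stack_region[STACK_WORDS];

/* Paint the whole region with the fill pattern at startup. */
static void stack_paint(void) {
    for (size_t i = 0; i < STACK_WORDS; i++)
        stack_region[i] = FILL_WORD;
}

/* Headroom = words never touched since the last paint. Since the stack
 * grows downward, untouched words sit at the low end of the region. */
static size_t stack_headroom(void) {
    size_t free_words = 0;
    while (free_words < STACK_WORDS && stack_region[free_words] == FILL_WORD)
        free_words++;
    return free_words;
}
```

Periodically checking `stack_headroom()` (or asserting it stays above a margin) turns the "steamroll through other memory" failure into a detectable one.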
Which relates to the theoretical concept: a Turing-complete machine has unlimited memory, and can recurse [computably] without bound. An ideal Turing machine does not need to check for stack size, because it won't run out. All our so-called "Turing-complete" machines are Turing-complete in semantics, but finite in storage, so they cannot truly compute every computable function. (Example: no one has calculated Ackermann(999, 999), but it's provably computable. Oh, also... we only have finite time to compute something. So, while it's absolutely possible to find collisions in any hash function, a SHA-1 collision was only demonstrated in 2017.)
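The Ackermann function makes a nice worked example of "computable in theory, stack-hostile in practice" -- it is total and computable, but its recursion depth explodes so fast that a finite machine gives out almost immediately. A sketch of the standard definition:

```c
#include <assert.h>

/* Ackermann's function: A(0,n) = n+1; A(m,0) = A(m-1,1);
 * A(m,n) = A(m-1, A(m, n-1)). Total and computable, but the recursion
 * depth (and the result) grows explosively with m. */
static unsigned long ack(unsigned long m, unsigned long n) {
    if (m == 0) return n + 1;
    if (n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

/* ack(2,3) = 9 and ack(3,3) = 61 are easy; ack(4,2) already has 19,729
 * decimal digits, far beyond unsigned long -- and the recursion depth
 * needed to reach it would exhaust any real stack long before that. */
```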
So, as programmers of not-quite-Turing-complete machines, we must be aware of their limitations, and dirty our theoretically ideal functions with necessary precautions.
Tim