Some algorithms become extremely clean when expressed recursively. Anything based on trees or graphs, for example.
I "grew up" with factorial, Towers of Hanoi, and maybe sort, as example recursive jobs, and I never really understood what was the point. Then, much more recently, I took a modern class in algorithms that covered tree algorithms (that didn't EXIST when I was in school!) and had one of those "ah hah!" moments where recursion suddenly made a lot more sense.
Now, most of us here are EE types and write pretty deeply embedded code. We don't often come across cases that benefit much from the sort of algorithms that recursion makes easier: a typical microcontroller with 128k of RAM can bubble-sort or linear-search any data set it could possibly hold in reasonable time. But work on something like a packet router with tens of thousands of routes (and enough memory not to worry about THAT restriction), and you start to care about finding those N*log(N) algorithms instead of the N^2 (or worse) ones. And that log(N) part typically comes from something that is naturally implemented with recursion...

(After a ~35-year career, I had the somewhat depressing realization that I had spent nearly all of it improving the implementation of algorithms, or finessing my way around them, rather than developing BETTER algorithms. Not "Computer Science" at all, really...)

(Heh. I took another class recently that pointed out that using the simple recursive algorithm to generate tables of Fibonacci numbers or factorials is particularly silly, because you're constantly re-calculating intermediate values that you've already calculated.)
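On the log(N) point: a divide-and-conquer routine throws away half the problem at every level of the recursion, and that halving is exactly where the logarithm comes from. A sketch of recursive binary search in C, just to show the shape (a shipping version would likely be iterative, or use the library's bsearch):

```c
#include <stddef.h>

/* Recursive binary search over a sorted array: each call discards half
   of the remaining range, which is where the log(N) comes from.
   Returns the index of key, or -1 if it isn't present. */
long bsearch_rec(const int *a, long lo, long hi, int key)
{
    if (lo > hi)
        return -1;                      /* empty range: not found */
    long mid = lo + (hi - lo) / 2;      /* avoids overflow of (lo+hi)/2 */
    if (a[mid] == key)
        return mid;
    if (a[mid] < key)
        return bsearch_rec(a, mid + 1, hi, key);
    return bsearch_rec(a, lo, mid - 1, key);
}
```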
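And on the Fibonacci point, the waste is easy to see side by side. A quick sketch (the table size and the zero-as-sentinel trick are just illustrative):

```c
#include <stdint.h>

/* Naive recursive Fibonacci: fib(n) recomputes fib(n-2), fib(n-3), ...
   over and over -- exponentially many calls in total. */
uint64_t fib_naive(unsigned n)
{
    return (n < 2) ? n : fib_naive(n - 1) + fib_naive(n - 2);
}

/* Memoized version: remember each result the first time it's computed,
   so every fib(k) is calculated exactly once. */
#define FIB_MAX 94   /* fib(93) is the largest value that fits in 64 bits */
uint64_t fib_memo(unsigned n)
{
    static uint64_t cache[FIB_MAX];     /* zero-initialized by the C runtime */
    if (n >= FIB_MAX)
        return 0;                       /* out of range for 64 bits */
    if (n < 2)
        return n;
    if (cache[n] == 0)                  /* 0 means "not computed yet" */
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return cache[n];
}
```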
Now, a well-designed embedded program will keep careful track of recursion depth and resource consumption, or use algorithms whose bounds are well understood. (I used to like a recursive decimal output function, even on moderately small systems. And using some form of recursion for parsing user input is very common. Sketches of both below...)
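The decimal-output trick, for anyone who hasn't seen it: recurse on n/10 before emitting n%10, so the digits come out in the right order with no buffer, and the depth is strictly bounded by the digit count. A sketch (putchar standing in for whatever your actual character-output routine is):

```c
#include <stdio.h>

/* Recursive decimal print: recursion depth is bounded by the number of
   digits (at most 20 for a 64-bit value), so the stack cost is known. */
void print_dec(unsigned long n)
{
    if (n >= 10)
        print_dec(n / 10);          /* emit the leading digits first */
    putchar('0' + (int)(n % 10));   /* then the final digit */
}
```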
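And recursive descent is probably the most defensible recursion in embedded work, since the grammar itself bounds the depth (apart from nesting constructs like parentheses, which you can cap). A toy sketch for sums and products; the grammar is invented for illustration and skips whitespace and error handling:

```c
#include <ctype.h>

/* Tiny recursive-descent parser for expressions like "2+3*4".
   Grammar (hypothetical, for illustration):
     expr   := term { '+' term }
     term   := factor { '*' factor }
     factor := digits | '(' expr ')'
   Recursion depth tracks parenthesis nesting, so a real system
   would cap it. */
static const char *p;               /* cursor into the input string */

static long expr(void);             /* forward declaration: the grammar is recursive */

static long factor(void)
{
    if (*p == '(') {
        p++;                        /* consume '(' */
        long v = expr();            /* recurse into the subexpression */
        if (*p == ')')
            p++;                    /* consume ')' */
        return v;
    }
    long v = 0;                     /* otherwise, accumulate a number */
    while (isdigit((unsigned char)*p))
        v = v * 10 + (*p++ - '0');
    return v;
}

static long term(void)
{
    long v = factor();
    while (*p == '*') {
        p++;
        v *= factor();
    }
    return v;
}

static long expr(void)
{
    long v = term();
    while (*p == '+') {
        p++;
        v += term();
    }
    return v;
}

long parse(const char *s) { p = s; return expr(); }
```

With that, parse("2+3*(4+5)") returns 29, and the only state between calls is the cursor and the C stack.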