Is there an actual problem that's trying to be solved here?
Not really. Binary-to-decimal conversion is considered "slow" and requires "surprisingly much" machine-level code compared to plain arithmetic operations, so these routines can be useful in some cases, but that's about it.
Library implementations, especially the various printf() implementations, are written for correctness rather than efficiency, so when you work with smaller microcontrollers, especially 8-bit ones (AVRs, PICs, 8051s), a custom implementation can be useful. More annoyingly, many standard library implementations use arbitrary-precision arithmetic and dynamic memory allocation when converting floating-point numbers.
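To make the comparison concrete, here is a minimal sketch of the kind of integer-to-string routine a custom printf() replacement typically contains. The function name, buffer handling, and 16-bit range are my assumptions for illustration; this is the classic divide-by-10 loop, not code from any particular library:

```c
#include <string.h>

/* Hypothetical helper: convert an unsigned value (up to 65535) to
   decimal ASCII in the caller-supplied buffer (at least 6 bytes).
   Classic divide-by-10 loop; digits come out least-significant first,
   so they are reversed into the output buffer at the end. */
static char *u16_to_dec(unsigned int value, char *buf)
{
    char tmp[6];               /* 65535 -> at most 5 digits + NUL */
    char *p = tmp;
    char *out = buf;

    do {
        *p++ = (char)('0' + value % 10);
        value /= 10;
    } while (value != 0);

    while (p != tmp)           /* reverse the digit order */
        *out++ = *--p;
    *out = '\0';
    return buf;
}
```

Note the div and mod by 10 per digit: on targets without a hardware divider, the compiler turns those into multiply-based sequences, which is exactly where the cost discussion below comes in.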
Even in fully hosted C environments, it turns out that if you read numeric data from text files, the standard C library string-to-number conversions (strtod(), strtol(), scanf() and their variants) become the bottleneck once you have enough (megabytes of) data. There, too, "optimized" conversion functions can reduce load times to a fraction of what they would be with the standard C conversion functions.
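As a sketch of what such an "optimized" function gives up to gain speed, here is a minimal base-10 parser. The name parse_u10 and its interface are assumptions of mine; unlike strtoul(), it skips no whitespace, accepts no sign or base prefix, and does no overflow checking, which is the usual trade-off when the input format is already known:

```c
/* Hypothetical fast-path replacement for strtoul(s, end, 10):
   parses a run of decimal digits, stores the position past the
   last digit in *end (if end is non-NULL), and returns the value.
   No whitespace skipping, no sign, no overflow detection. */
static unsigned long parse_u10(const char *s, const char **end)
{
    unsigned long value = 0;

    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (unsigned long)(*s - '0');
        s++;
    }
    if (end)
        *end = s;
    return value;
}
```

Because it touches each input byte exactly once and carries no locale or errno machinery, a loop like this is typically several times faster than the standard conversions on large files, at the cost of trusting the input.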
Yep. scanf() being the absolute dog of them all.
And sometimes the "simplest" solution is also the most efficient. As I said before, the simple approach of counting powers of ten, requiring "only" up to 9 iterations per digit with just a subtraction and an increment per iteration, will be faster on most targets that lack a hardware multiplier (which is how most optimizing compilers implement the divide by 10), or on which the hardware multiplier is a multi-cycle operation. In particular, I've used the solution I mentioned earlier on AVR targets, and it produced much smaller code, and faster code too. Don't let the fact that it looks "naive" deter you: in simple cases like this, looking at the generated assembly will settle which approach is actually the most efficient.
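The approach described above can be sketched as follows. The function name and the 16-bit range are my assumptions; the point is that the loop body is only a compare, a subtract, and an increment, with no multiply or divide anywhere:

```c
#include <string.h>

/* Sketch of the subtract-powers-of-ten approach: for each power of
   ten, count how many times it can be subtracted from the remaining
   value. Each inner loop runs at most 9 times. Leading zeros are
   suppressed, except that a plain 0 still prints one digit.
   (u16_dec_nodiv is an illustrative name, not from the original post.) */
static char *u16_dec_nodiv(unsigned int value, char *buf)
{
    static const unsigned int pow10[] = { 10000u, 1000u, 100u, 10u, 1u };
    char *out = buf;
    unsigned char i, started = 0;

    for (i = 0; i < 5; i++) {
        char digit = '0';
        while (value >= pow10[i]) {   /* at most 9 iterations */
            value -= pow10[i];
            digit++;
        }
        if (digit != '0' || started || i == 4) {
            *out++ = digit;
            started = 1;
        }
    }
    *out = '\0';
    return buf;
}
```

On an AVR, each inner iteration compiles to a handful of single-cycle instructions, which is why this can beat the multiply-based divide-by-10 sequence the compiler would otherwise emit, both in code size and in speed.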