I doubt that speed of converting integers to strings is very often a concern in embedded uses, even on a 1 MHz 6502 let alone a 24 or 48 or more MHz Cortex-M0. It's probably only updating a small display a few times a second (no point in doing it more often than a human can observe and react to), or writing to a log file where you don't want to be spewing out MBs of text every second. Something like 223 cycles (maybe 5-10 µs) doesn't really need to be reduced, and if it was 10x more probably would not matter.
The code size has got to be much more important in most cases.
Not necessarily; let me give you an example.
[Hypothetically]
Consider some kind of sensing unit (e.g. a smart meter for gas/electricity, a hand-held meter, etc.), built around some kind of low-cost MCU and running from batteries.
Although it might be able to run at 48 MHz or even 72 MHz, it may well be running at 500 kHz or 32,768 Hz, perhaps even only waking briefly to take measurements and then going back to sleep, in order to minimise power/battery consumption.
Let's say the LCD display (chosen for its very low power consumption) is updated one to five times a second.
Suppose that during development the LCD sometimes reportedly shows weird results, perhaps every few hours, i.e. maybe once every 10,000 readings or so, indicating a possible software (or, much less likely, hardware) bug. The developers probably don't want to sit watching it, waiting around for several hours until the glitch occurs.
So they may want to use a very high-speed (for a battery-powered MCU) port such as SPI or QSPI, at a greatly increased reporting rate (maybe 100,000 readings a second, just for the diagnostic debugging work), to help them spot, diagnose and hopefully fix the otherwise very rarely occurring glitch.
Or they may want to stream hundreds of thousands of ADC readings per second (if the hardware is up to it) to a PC, to try to catch the glitch occurring.
As you stated (in another post), the MCU might have very little RAM and flash, making direct logging on the MCU infeasible for any length of time.
You could argue that they should just send binary to avoid the processing delay. But text is a lot friendlier and easier to search for faulty/glitchy patterns, especially if there are separate people/departments/organisations at either end (e.g. an internal software department and an external testing and verification company).
So a text-based communication is much less likely to cause confusion, and tends to be easier to diagnose and fix if people get things wrong.
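As a sketch of what such a text stream might look like: a fixed-width, one-line-per-reading record is trivial to eyeball, split at byte offsets, or grep on the PC side. The record layout, field widths and function name here are my own illustrative assumptions, not anything from the original posts.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-reading record: "SSSSSS,VVVV\n"
   (6-digit sample index, 4-digit 12-bit ADC value).
   Fixed width keeps every line the same length, so the PC side can
   seek/split by offset or simply search for suspicious values.
   snprintf is used here for clarity; a real low-power build might swap
   in hand-rolled digit conversion, which is exactly where conversion
   speed starts to matter. Returns the number of bytes written. */
int format_reading(uint32_t sample_no, uint16_t adc, char *buf, int buflen)
{
    return snprintf(buf, buflen, "%06lu,%04u\n",
                    (unsigned long)(sample_no % 1000000UL), adc);
}
```

On the PC end a record like `000042,4095` can then be parsed, plotted, or searched with ordinary text tools, by either organisation, with no agreed binary framing needed.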
Therefore the integer-to-text (string) conversion may need to be as fast as possible.
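For a small MCU without a hardware divider, one common trick is to avoid general division entirely and peel off each decimal digit by repeated subtraction of powers of ten. This is only a sketch of that idea (function name and fixed 5-digit output are my assumptions), not the specific routine discussed earlier in the thread.

```c
#include <stdint.h>

/* Convert a 16-bit unsigned value to exactly 5 decimal digits
   (leading zeros kept), NUL-terminated. Each digit costs at most
   9 subtract-and-compare steps, which is often cheaper than a
   software divide on divider-less cores. Fixed width also gives
   the constant-length records that make text logs easy to parse.
   Returns the number of characters written (always 5). */
int u16_to_dec5(uint16_t v, char *out)
{
    static const uint16_t pow10[5] = { 10000, 1000, 100, 10, 1 };
    for (int i = 0; i < 5; i++) {
        char d = '0';
        while (v >= pow10[i]) {   /* at most 9 iterations per digit */
            v -= pow10[i];
            d++;
        }
        out[i] = d;
    }
    out[5] = '\0';
    return 5;
}
```

For example, 1234 becomes "01234". If variable-width output is wanted, the caller can simply skip leading '0' characters, keeping the hot conversion loop branch-light.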