I started a thread about this a couple of years ago. Reply #1 includes converting a signed 16-bit integer to decimal, backwards, via divide-by-ten and modulus, as an example. It also shows how to do it very fast on architectures with fast subtract-with-carry, without relying on multiplication or division at all.
While both avr-libc and newlib include itoa(), it is not a standard C function at all, so the standard library is not really involved here.
(The actual non-stdio.h string-to-numeric conversion routines in the standard C library are strtol(), strtoul(), strtof(), strtod(), and, since C99, strtoll(), strtoull(), strtold(), strtoimax(), and strtoumax(). (There are also corresponding functions for wide character strings.) Unlike e.g. sscanf(), these do report errors in the input. They are of course the inverse of how itoa() etc. operate; all standard string construction operations that generate a string from arbitrary integers are in <stdio.h>.)
_ _ _ _ _
There is no reason for the "haters" hyperbole, even in jest, exactly because the standard C library is not an inseparable part of the C language.
The C standards specifically define "hosted" and "freestanding" environments, with "hosted" being the one most consider "proper C", with all of the standard library features available. A "freestanding" environment is one where the C standard library is not available, and only a subset of the headers (in practice, basically those provided by the compiler for the target architecture) is accessible; things like <stdint.h>, for example.
That is also the reason for that thread, and my relatively scarce participation in "new programming language" threads. If we understand the development history and pressures involved in the standard C library, instead of starting from scratch, we can simply replace the standard C library with something better and get most of the way there. (Indeed, I have come to suspect that a single base language change regarding pointers and arrays would suffice to let the compiler detect all cases where buffer underrun or overrun is possible. That, combined with a "replacement standard library" using knowledge hard-earned in practice over the last three decades or so, would be a huge leap forward, in my opinion.)
That thread I started involved some ideas regarding string construction in very limited or constrained situations, i.e. using minimal resources, exactly as when one is programming microcontrollers, for example. Aside from reserved function names, it is perfectly "standard" C to use the freestanding environment and implement your own base library. Usually, some things (syscalls, when running under an OS kernel or hypervisor) do require compiler extensions, external implementations, or extended inline assembly; I personally favour the last (with GCC and Clang).
The same applies to C++ as well, except that C++ leaves almost all of the freestanding environment implementation-defined. This leads many freestanding developers to use a subset of C++, or a mix of C and C++ freestanding environments, especially on Harvard architectures: the standards have not grown support for separate address spaces, so it requires compiler extensions. Clang supports them well in both C and C++, but GCC only in C (leading to all of the oddness wrt. strings and flash memory accesses in Arduino, which uses GCC's C++ frontend on AVRs).
There is also a thread somewhere about how it is possible to format even float (IEEE 754 Binary32) and double (Binary64) exactly correctly (the rounding is annoying!) using very little resources. This is because the <stdio.h> part of the C standard library has never been optimized for performance; its output is very carefully specified and tuned for correctness instead. Similarly, it is very simple to parse limited ranges of floating-point numbers in decimal format (exactly correctly), at least an order of magnitude faster than what standard C library implementations do; exactly because they are designed per the language spec for correctness, not for speed/throughput. Mostly, it is the subnormal numbers and exponential notation (in cases that end up requiring a bit-correct division by ten) that are the slowest and most resource-hungry to implement; and even they only require a limited-size conversion workspace (unlike most implementations in standard libraries, which use arbitrary-precision math for this). There are even scientific papers in ACM and elsewhere about such conversions...