By "debugging information" do you mean all variables are visible?
No. By "debugging information" I mean that the code is instrumented for examination using a debugger.
Useful optimization – be it for size or efficiency or other reasons – always involves removal and transformation of expressions, and that means a variable may not exist at all in the final binary. A typical example is as follows:
uint_fast32_t distance(int_fast32_t x, int_fast32_t y)
{
    int_fast64_t n2 = (int_fast64_t)x * x + (int_fast64_t)y * y;
    return uisqrt64(n2);
}
When optimizations are enabled, and especially if uisqrt64() is a static function eligible for inlining here, there is no reason to expect n2 to be observable in the code.
While there are many ways to force the variable to exist when debugging, that is always a tradeoff between code optimization and debugging.
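As a minimal sketch of one such way, assuming GCC or Clang: an empty extended inline asm statement that claims to read the variable forces the compiler to materialize it. The uisqrt64() body below is my own bit-by-bit integer square root stand-in, since the original implementation is not shown.

```c
#include <stdint.h>

/* Stand-in for the integer square root used above (not the original). */
static uint_fast32_t uisqrt64(uint_fast64_t n)
{
    uint_fast64_t root = 0, bit = (uint_fast64_t)1 << 62;
    while (bit > n)
        bit >>= 2;
    while (bit) {
        if (n >= root + bit) {
            n -= root + bit;
            root = (root >> 1) + bit;
        } else
            root >>= 1;
        bit >>= 2;
    }
    return (uint_fast32_t)root;
}

uint_fast32_t distance(int_fast32_t x, int_fast32_t y)
{
    int_fast64_t n2 = (int_fast64_t)x * x + (int_fast64_t)y * y;

    /* GCC/Clang: pretend to read n2, so the optimizer must keep it
       in a register or memory location visible to the debugger. */
    __asm__ volatile ("" : : "g" (n2));

    return uisqrt64((uint_fast64_t)n2);
}
```

The empty asm has no runtime cost beyond pinning the value, but it does inhibit some transformations, which is exactly the tradeoff mentioned above.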
I personally handle this dichotomy by writing any key pieces of code separately, testing them thoroughly, and documenting the test results. For example, when dealing with uni-variate float functions, I often test the function with all finite inputs, and compare the results to the expected values calculated at at least double precision. For multi-variate functions, I test the pathological cases, plus a few billion random cases (using the high bits of Xorshift64*, seeded from getrandom() on Linux, or elsewhere from clock_gettime()/gettimeofday() by multiplying both the integral and fractional seconds by large primes and XORing the results together).
With this approach, my typical bugs are corner cases I didn't think of; I get a facepalm moment and have a fix ready in a minute. I rarely need to use a debugger on my code. (I can, and have, including GDB accessors and helpers written in Python, when necessary; I'm just saying that having variables be accessible to me in a debugger is not important to me.) I do, however, quite often examine the generated assembly code, to see if the writer of the problematic function and the compiler agree as to what it should really do.