> It is exactly the same with C/C++; you have to create your code in a VM with a specific compiler version and library version, and always use those forevermore. Why? Because new optimisations and flags are prone to breaking programs that were previously OK.
Claiming you "have" to do that, in such absolute terms, is a sign that you are likely working in very incompetent teams, or doing something out of the ordinary with external requirements mandating such practices. Or that you simply have no idea how the C/C++ software development world actually runs.
After all, I'm quite certain that 99% of the "a different compiler version broke my C program" problems are caused by actual, real bugs in the code: code that relies on undefined or implementation-defined behavior the programmer didn't understand, or didn't even consider, when writing it. Which is completely understandable, we all make mistakes, but the key to reducing mistakes is to look for the root cause and learn from it, not to hide it by preventing the detection of the bug.
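To make that concrete, here is a minimal sketch (hypothetical code, not from anyone's real project) of the classic pattern: an overflow "check" that is itself undefined behavior, so the compiler is allowed to delete it:

```c
#include <limits.h>
#include <stdio.h>

/* Looks like an overflow check, but signed integer overflow is
 * undefined behavior in C, so the compiler may assume x + 1 never
 * wraps and fold the whole comparison to 0. */
static int next_would_overflow(int x)
{
    return x + 1 < x;   /* UB when x == INT_MAX */
}

int main(void)
{
    /* Typically prints 1 without optimisation (wraparound happens
     * to occur) and 0 at -O2 on GCC or Clang. Cue the "new compiler
     * broke my code" complaint, with the real bug in the code all
     * along. */
    printf("%d\n", next_would_overflow(INT_MAX));
    return 0;
}
```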
Portable POSIX C code (portable across compilers and exact standard library versions, even across CPU architectures) is being written and maintained every day. Bugs that are hidden by one (standard-compliant) compiler and then revealed by another (standard-compliant) compiler do happen, we are humans after all, but they are a minuscule, almost meaningless portion of the total bugs developers need to deal with, and definitely not absolute showstoppers. Even bugs caused by a genuinely buggy compiler happen, although much more rarely than the average developer wants to believe.
Yes, I agree that such total version lockdowns and virtual machines to prevent such breakage are sometimes needed, but that is really a very sucky way to do it, and claiming that you absolutely need to lock to exact versions forever only shows you have no idea what you are talking about, and are just screaming "I can't do this, I give up!" Extraordinary claims require extraordinary evidence, and you are effectively claiming that most of the world, which runs on C code compiled with varying compiler and C standard library versions, is doing it "the wrong way". Yet this is how it works, and the majority of bugs that cause actual damage could not have been prevented by locking down to specific tool versions.
Bugs will always happen, and sidestepping the issues caused by different compiler versions only helps with that one type of bug, which is a stunningly small percentage of the total. Quite the opposite: if you test your software (with proper automated test suites, yeah?) in different environments and with different compilers, you are likely to catch more bugs, instead of hiding them by limiting yourself to one compiler.
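As a sketch of what that buys you (hypothetical file name, but real compiler flags): build the same tiny test with more than one compiler, each with its UndefinedBehaviorSanitizer enabled, and the overflow bug from above gets reported instead of being silently compiled one way or the other:

```c
/* test_next.c -- build and run with at least two compilers, e.g.:
 *   gcc   -fsanitize=undefined test_next.c && ./a.out
 *   clang -fsanitize=undefined test_next.c && ./a.out
 * Both GCC and Clang support -fsanitize=undefined; either build
 * prints a runtime error pinpointing the signed overflow. */
#include <assert.h>
#include <limits.h>

static int next_would_overflow(int x)
{
    return x + 1 < x;   /* the same buggy check as above */
}

int main(void)
{
    assert(next_would_overflow(0) == 0);
    next_would_overflow(INT_MAX);   /* UBSan reports the overflow here */
    return 0;
}
```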
After all, porting to a new environment, or reusing part of your code in another project compiled with a different compiler, will likely happen eventually. Your code is worth more when it's portable, reusable, and relatively bug-free, not some write-once kludge for one forever-locked environment.
And think about this: what if you have a difficult-to-reproduce bug that appears with 1% probability under compiler A, but with 100% probability under compiler B? Was locking down to compiler A the right call? Did that choice really add robustness to your project, or was it just an excuse to keep producing buggy code?
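For the record, a minimal sketch of how such a 1%-versus-100% bug arises in practice (hypothetical code again): an uninitialized local whose stack slot happens to contain zero under one compiler's code generation on most runs, and junk under another's on every run:

```c
#include <stdio.h>

/* BUG: sum is never initialized, so the result depends on whatever
 * garbage is in that stack slot. Compiler A's code layout might
 * leave it zero on most runs (the bug shows up ~1% of the time);
 * compiler B's might leave junk there every run (100%). Locking
 * down to compiler A keeps the bug, not the robustness. */
static int sum_positive(const int *vals, int n)
{
    int sum;                            /* should be: int sum = 0; */
    for (int i = 0; i < n; i++)
        if (vals[i] > 0)
            sum += vals[i];
    return sum;
}

int main(void)
{
    int v[] = { 3, -1, 4 };
    printf("%d\n", sum_positive(v, 3)); /* 7 only by luck */
    return 0;
}
```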