That's just false. The fact that you misunderstand this is itself a pretty good argument that the behavior is bad. It is entirely possible for code to segfault on a null pointer dereference precisely because the compiler removed a null pointer check.
The key is that a null pointer dereference *may* cause a segfault, but it isn't guaranteed to; for instance, the dereference itself might be optimized away for having no observable side effects. Even just doing pointer arithmetic on a null pointer is undefined, whether or not you ever dereference it. So even though there is no illegal memory access and no segfault, it is still undefined behavior, and the compiler can still assume it doesn't happen and use that assumption to optimize away later null checks.
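A minimal sketch of the pattern (hypothetical function, not taken from anyone's actual code) where the check that was supposed to catch the bug quietly disappears:

```c
#include <stdio.h>

void process(int *p) {
    int value = *p;       /* dereference: from here on the compiler may assume p != NULL */
    if (p == NULL) {      /* the check comes "too late"...                               */
        fprintf(stderr, "null pointer!\n");
        return;           /* ...so an optimizing compiler is allowed to treat this whole
                             branch as dead code and delete it                           */
    }
    printf("%d\n", value);
}
```

If `process(NULL)` is ever called, whether you get a segfault, garbage, or nothing at all depends on what the optimizer did, not on the `if`.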
Of course the code is still, by definition, wrong. But people aren't really upset about the segfault; they are upset about where the segfault happens: not in the code that actually has the bug, but somewhere totally unrelated that even has an explicit null pointer test.
OK, admittedly, they didn't talk specifically about dereferences causing dead-code elimination, and of course C doesn't guarantee a segfault in any case, even if you do dereference a NULL pointer. But their posts here have this strong "compiler writers are dumb because their compiler doesn't compile my code into what I've decided it should mean, and instead strives to make code fail wherever the standard allows" vibe. That attitude isn't helpful for anything, and it also tends to be so vague, bordering on non-falsifiable, that you can't really address it fully in a reasonably short answer; but letting it stand as supposed wisdom isn't helpful either.
The problem is it's not "your" code, it's someone else's code. If your code receives a pointer from "somewhere" that the compiler thinks it can prove is non-null, but the pointer is in fact null, that's where you have a bad day. And as compilers get more powerful, they see more and more of the program and can make more inferences, especially with LTO, so it gets harder and harder to see where a null pointer dereference might come from.
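A hedged sketch of the cross-module version (invented function names, two files shown in one block for brevity); with LTO or inlining the compiler sees both sides at once:

```c
/* lib.c (hypothetical) */
int header_byte(const unsigned char *buf) {
    return buf[0];             /* dereferences buf unconditionally */
}

/* main.c (hypothetical) */
int read_packet(const unsigned char *buf) {
    int n = header_byte(buf);  /* with LTO/inlining, the dereference above is visible here, */
    if (buf == NULL)           /* so the compiler may conclude this condition is impossible */
        return -1;             /* and quietly drop the check                                */
    return n;
}
```

Looking at main.c alone, the check seems perfectly sound; the inference that kills it lives in another file.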
The solution to this is not to tell people they are wrong or call it nonsense; that only shows you don't understand the issue. The only real (if partial) solution is tools like UBSan that can find the undefined behavior where it occurs, rather than where the program fails. It's still hard to find bugs that only show up rarely, in production, but UBSan has an advantage over other tools in that it can flag behavior that is undefined even in situations where it is apparently benign.
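Concretely, with clang or gcc you build with -fsanitize=undefined (or just -fsanitize=null for this particular class), and the instrumented binary prints a report along the lines of "runtime error: load of null pointer", pointing at the line of the dereference itself. A small hedged example (exact report wording varies between compiler versions):

```c
/* Build with something like:
 *   cc -g -fsanitize=undefined demo.c -o demo
 * and run ./demo to get the UBSan report at the dereference site.
 */
#include <stddef.h>

static int read_value(int *p) {
    return *p;                 /* this line gets flagged at run time when p is NULL */
}

int main(void) {
    int *p = NULL;
    return read_value(p) != 0; /* deliberately broken, just to trigger the report */
}
```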
Which is all true ... but the important point, the one that actually counters these compiler-conspiracy myths, is that compilers do all of this to improve performance, not to make code malfunction, and not merely because the standard allows it.
Compilers try to infer as much as possible about a program in order to find the most efficient way to express it in machine code. Finding checks that, by the definition of the language, can never be true, and eliminating the code that expresses them, is done because that obviously improves performance. And it doesn't just improve performance; it also allows for more reliable and more secure code, because you can add more sanity checks (like, say, bounds checks) and rely on the compiler to prove where they are unnecessary, so that their performance impact stays minimal. So it's a *good* thing that a modern compiler with LTO can eliminate unnecessary bounds checks across compilation units, say, where a human might have a hard time figuring out why exactly a given check is unnecessary.
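A hedged illustration (invented function, not anyone's real code) of that "checks for free" upside:

```c
#include <stdlib.h>

int sum_first_n(const int *a, size_t len, size_t n) {
    if (n > len)          /* one explicit sanity check up front            */
        abort();
    int total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i >= len)     /* provably redundant: i < n <= len always holds */
            abort();      /* here, so the optimizer may delete this test   */
        total += a[i];    /* without changing what the program does        */
    }
    return total;
}
```

Exactly the same reasoning machinery that deletes the redundant bounds check here is what deletes the "too late" null check in the earlier example.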
Now, all of this *sometimes* eliminates code that might actually have prevented bad things from happening had it been kept. That's unfortunate, and sometimes there may even be good reasons to make the compiler selectively less aggressive, but the important point is that there are usually babies in the bath water, and no easy solution that reliably gets rid of just the water.