I stand by my original points.
I really only object to the No True Scotsman fallacy claim.
I'll highlight and comment on a few of your points below...
I understand your points, and fully acknowledge their basis in facts; I only disagree on some of your conclusions.
As an example, consider PHP, a widely used but usually pretty horrible language. Its earlier versions especially were basically a security hole waiting to happen (the magic quotes stuff in particular). Yet one could write quite secure web service code with it, if one paid sufficient attention and avoided using the features that usually lead to security problems. I know, because I have.
However, many of those security holes have since been plugged: magic quotes are no longer supported, database interfaces have switched from building query strings to using bound parameters so quoting is not even an issue, and so on. The most problematic design principle currently is that most PHP services are designed to be able to upgrade themselves, which necessarily means the installation is vulnerable to script drops/bombs et cetera. We could avoid that, and even things like password leaks, if we leveraged the POSIX/Unix user and group hierarchies: the server interpreter would refuse to execute code owned by the user that can upload content to the server, and login/logout/account management would be restricted to a few specific pages, with all others not even having access to the sensitive fields of the user database...
We could do better with Python, but unfortunately Python insists on its "own" WSGI interfaces (as opposed to, say, FastCGI). (In particular, a page engine can be written as a FastCGI script, with each request (connection) served by a forked child. The engine can preload each instance with the main data structures, like navigation and the file types supported, deduplicating most of the work done on most page loads.) As a result, typical widely used Python-based web services are vulnerable to bugs similar to those in PHP services, on top of WSGI's own! No true forward development, just... steps in odd directions, in my opinion.
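The preload-and-fork pattern I mean looks roughly like this. It is a bare sketch, not an actual FastCGI engine (a real one would accept connections via the FastCGI library instead of the loop here); the point is only that the expensive setup happens once in the parent, and every forked child sees it via copy-on-write.

```c
/* Sketch of the preload-then-fork idea (my illustration, not an
 * actual FastCGI engine): the parent builds the shared data once;
 * each "request" is then served by a forked child that already has
 * the preloaded data, so no request repeats the setup work. */
#include <sys/wait.h>
#include <unistd.h>
#include <string.h>

/* Stand-in for navigation trees, supported file type tables, etc. */
static char navigation[4096];

static void preload(void)
{
    /* Imagine parsing config files and building indexes here. */
    strcpy(navigation, "site navigation, file types, ...");
}

/* Serve n requests, one forked child each; returns how many
 * children completed successfully. */
int serve_requests(int n)
{
    int ok = 0;

    preload();                         /* done once, in the parent */
    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: preloaded data is already in place; a real
             * engine would render the page here. */
            _exit(navigation[0] != '\0' ? 0 : 1);
        }
        int status;
        if (pid > 0 && waitpid(pid, &status, 0) == pid &&
            WIFEXITED(status) && WEXITSTATUS(status) == 0)
            ok++;
    }
    return ok;
}
```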
This "proves" to me that the current software bugs and insecurity are really not a feature of the respective programming languages, but a consequence of us human developers accepting a software "engineering" culture that has discarded almost all good engineering principles, and just sticks stuff together with spit and bubblegum, banking on the product working just long enough that nobody will be held responsible for the crappiness. And that we should not blame the languages for not trying to stop developers from implementing idiotic designs.
(I am not sure whether I can even honestly call *current* PHP versions an insecure programming language anymore. I have in the past, especially due to features like magic quotes that attempt to hand-hold bad developers; but the removal of such features has removed my reasons. Granted, I haven't written security-sensitive PHP in almost a decade now, so I am out of touch.)
There is no way to make C and/or C++ a "safe" environment to write software in. It is absolutely 100% impossible.
I agree; but I also think it is not necessary for a systems programming language to be "safe".
That is, one of the reasons I think C is powerful is exactly that it does not try to stop developers from shooting themselves in the face with pointers et cetera.
Anyway, it is technically possible to write secure, robust C code. It is just much easier to write horribly buggy and insecure C code. At the core is my observation that making a language "safe" seems to also make it less powerful, harder to interface to, and/or slower: not exactly acceptable tradeoffs for the kind of situations C is used in.
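What "possible but harder" means in practice: nothing in C stops you from writing `strcpy(dst, src)` into a fixed buffer, but the careful form is only a few lines more. A small sketch of the discipline involved (my own illustration; the function name is made up):

```c
/* Illustration (my own; the name is made up): a string copy written
 * the careful way.  C will happily let you use plain strcpy() and
 * overflow the destination; the safe form just takes discipline. */
#include <stddef.h>
#include <string.h>

/* Copies at most dstsize-1 bytes, always NUL-terminates, and
 * reports truncation (or a zero-sized buffer) via -1. */
int copy_string(char *dst, size_t dstsize, const char *src)
{
    size_t len = strlen(src);

    if (dstsize == 0)
        return -1;
    if (len >= dstsize) {
        memcpy(dst, src, dstsize - 1);
        dst[dstsize - 1] = '\0';
        return -1;                     /* truncated */
    }
    memcpy(dst, src, len + 1);
    return 0;
}
```

Every caller that checks the return value is secure against overflow; every caller that forgets is merely truncated rather than corrupted. That asymmetry of effort is exactly the problem.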
Can we do better? Absolutely we can! But personally, I'm first checking out the cost/benefit ratio of reaching for the low-hanging fruit: replacing the standard C library with something better suited for systems programming, to see how much better it would be.
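To give a flavour of what I mean by "better suited": one obvious change is strings and buffers that carry their length and capacity, so the overflow check lives in the type rather than in each caller's discipline. A hypothetical sketch, with all names my own invention (not any actual replacement library's API):

```c
/* Hypothetical sketch of the kind of interface a systems-oriented
 * replacement for the standard C library might offer.  All names
 * here are my own invention. */
#include <stddef.h>
#include <string.h>

struct buf {
    char  *data;
    size_t len;       /* bytes used */
    size_t cap;       /* bytes available */
};

/* Append srclen bytes of src to b; refuses (returns -1) instead of
 * overflowing, so the bounds check cannot be forgotten. */
int buf_append(struct buf *b, const char *src, size_t srclen)
{
    if (srclen > b->cap - b->len)
        return -1;                     /* would not fit: refuse */
    memcpy(b->data + b->len, src, srclen);
    b->len += srclen;
    return 0;
}
```

The interface stays thin enough to cost nothing over memcpy(), which is the tradeoff I care about: safer defaults without giving up the power or the speed.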
Perfect is the enemy of good.