As a consequence of that experience 27 years ago I perform unit tests at frequent intervals during development and save them under version control.
It always astounds me that it took so long for mainstream software to catch on to that fundamental engineering concept.
It always astounds me that mainstream softies think it is a panacea and that "it works because it passed its unit tests".
Seriously, I've heard that more than once. The standard of the unit tests is usually crap compared to that required in hardware development.
There are far too many software people who think it's OK when the only people developing the tests are the people developing the code, because they haven't really figured out WHY you test.
I've never seen anything resembling a standard for unit tests. I had one. It was very simple. All *possible* inputs must be handled by either successful completion or an error code returned to the caller. And the unit test needed to supply *all* possible inputs. I usually used awk scripts to generate the test files.
I've only dealt with three managed software development efforts. All oil company internal developments. The VAX FORTRAN port only had unit tests because the other contractor suggested we do it and I was so far ahead of schedule that I just did it. The other projects only had "testing" as a check box on some Gantt chart. The developers did none and the "testers" knew nothing about what the software was supposed to do, so they just went through the motions. The majority of my time was spent cleaning up the messes that other people had created.
My observation is that very few people are competent programmers. Most scientists can't program, but that doesn't stop them. With sole exception cited above, none of the contractors I worked with were worth a damn.
A character in Shakespeare's Henry VI proposes to kill all the lawyers. As far as I'm concerned we should kill the programmers first and then kill the lawyers.
The sad part is that how to do things properly is well known and has been extensively written about. It's just that most software developers can't be bothered with learning their craft, just the language or framework du jour. I found it amusing when a poll found that over half of "tech workers" felt they were frauds.
I always signed my work and included my acm.org email address so that a future maintainer could contact me if needed. I started doing that on my first contract job, the VAX port, because I wrote a 15,000 line library using lex and yacc and I knew that it was highly unlikely that anyone working on the code in the future would know lex and yacc. So I put a comment at the top of the file explaining how to get in touch with me and a standing offer to fix any problems they encountered without charge. That and another 15,000 line library I wrote *never* had a single bug found in it. I did not spend a lot of time developing tests, though I did spend a good bit of time in the evening contemplating how to create them over a glass of scotch or whatever I was drinking at the time.
I also spent a lot of my own time and money buying and reading books. The computer science section of my library consumes 80 ft of shelving. And that doesn't include subject specific books like linear programming solvers. I recently bought a couple of books on the implementation of the Method of Moments, one by Gibson and the classic work by Harrington, as well as the marvelous EM book by Jian-Ming Jin. Why? Well I want to do some CEM work. To be able to write test cases to verify NEC2 or OpenEMS I need to know the details of the calculations so I can craft test cases that are likely to fail.
Sorry about the rant, but I spent many years fixing idiotic errors.
To return to the OP's question, every function or subroutine should have a unit test which tests *all* possible cases. If that cannot be done, the code should be factored into pieces small enough that it can. No work should be accepted as complete without that. And the first milestone in development should be writing the test suite. When a developer is given a task they should be required to show that they know how to test the requirements that they were given. That will actually catch faulty requirements before the bugs are created.
Most of the programmers I've met blame flaws in the requirements on the customer. "Oh, you want wheels on your car? Why didn't you say so? Tires, too? We can do that, but we'll have to slip the schedule a bit." It's the programmer's job to verify the requirements by creating the test suite.
Les Hatton has done a lot of work on software QA and written an excellent book on the subject, "Safer C". I highly recommend it, but I am prejudiced as I've known Les for over 30 years.