It sounds silly, but in some cases it makes sense to stick with the devil you know: fixing one bug very often creates new ones.
Very true, to a point. The problem is that in any long-lived system there eventually comes a moment (usually triggered by external factors) when you have to change something, and then you may well be faced with a massive cascade of changes that you're not at all prepared to handle or test.
Let's say you've got an internal web-based app that depends on a certain weird feature of Internet Explorer 4.x, and your IT department finally comes along and says, "As of next year we will no longer be able to supply your users with desktop systems that can run IE4." Replacing that feature with an equivalent from a modern web browser may itself be easy, but doing so requires a newer version of a library, which triggers needing new versions of three other libraries, which in turn requires a compiler upgrade, and so on and so forth. Given that the system hardly ever changes and has been massively tested in production, it's unlikely you've invested in a comprehensive test and validation suite for your code, so now you have a real problem: you're likely to kick out a release that could take years to settle down and get back to being as stable as it was before.
Dealing with this is at the core of "agile development": rather than putting off changes to collect them together and do them as one huge change, aggressively integrate changes as soon as possible, and design your system to handle this. The catchphrase for this is,
"If it hurts, do it more often." If you're having to test a system frequently, the most cost-effective way to do that is to automate as much of it as possible. And the most cost-effective way to automate testing is (as all hardware engineers know) to design the system to be easily tested by automated tools. That's usually the stumbling block with legacy systems; adding comprehensive automated tests after the fact can be even more costly than a full rewrite.
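As a hypothetical sketch of what "designing for testability" can mean in practice: keep decision logic behind a seam (here, an injected clock) so an automated test can exercise it without any real infrastructure. All names below are illustrative, not from any particular system.

```python
from datetime import datetime, timezone
from typing import Optional

def is_cert_expired(not_after: datetime, now: Optional[datetime] = None) -> bool:
    """Pure decision logic: the current time is injected rather than read
    directly inside the function, so tests can pin it to any value."""
    now = now or datetime.now(timezone.utc)
    return now >= not_after

# An automated test can now cover both branches deterministically:
expiry = datetime(2020, 1, 1, tzinfo=timezone.utc)
assert is_cert_expired(expiry, now=datetime(2021, 1, 1, tzinfo=timezone.utc))
assert not is_cert_expired(expiry, now=datetime(2019, 1, 1, tzinfo=timezone.utc))
```

The design choice is the point, not the certificate check itself: code that reaches out to the wall clock, the network, or the filesystem directly is exactly the kind that's expensive to retrofit tests onto later.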
But once you've got that in place, dribbling in frequent small changes works a lot better than trying to do all the changes at once: you find out about problems earlier, and when one comes up you've got a much, much smaller problem to solve, because the extent of each change is limited.
This also keeps the software itself smaller, less complex, and easier to maintain and modify, because you can easily do things like change an internal API and its behaviour to simplify things, even when that change touches code throughout the system.
This isn't to say that existing or even new projects can do this easily, or even at all. The whole concept of "Agile" and embracing continuous change started coming to popular attention fewer than twenty years ago, and at this point we've only just passed through the stage of most developers and managers saying, "that's crazy and can never work." Actually building systems this way requires both developers and managers to have a fair amount of expertise in this area, and there has to be good co-operation between them. Only a relatively small minority of each have this expertise at a level where they could use it on greenfield projects, and fewer still have the skill to move existing large projects to agile.