That is obviously a defective design/part.
Given that, I am not convinced there is any sense to a "replace after X years" policy for mass-market computer hardware like this. A battery fault like that does take some time to develop, but with a faulty design it could happen as easily after 6 months as after 5 years. There are a few other examples of this, most notably the "capacitor plague", though most of those capacitors have likely failed out by now.
Beyond built-in defects like that, I don't think a 3 or 5 year old computer is enough more likely to fail than a 1 year old computer to make replacing it worthwhile. Remember that replacement carries significant capital cost, labor, risk of downtime, and renewed exposure to the leading edge of the 'bathtub curve' of early failures, or even the chance of buying into a new design flaw like this one or a bad batch of electrolytic capacitors. Computers should be replaced when they fail or when they are obsolete, and we have basically passed the point where computers (at least 1-2 socket servers) become obsolete on that kind of schedule. Newer computers are faster per dollar and more energy efficient than the previous generation, but not by enough to make it worth throwing the old ones out rather than keeping them running.
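The bathtub-curve argument can be made concrete with a little arithmetic. A minimal sketch, using a toy hazard function with purely illustrative made-up numbers (the `infant`, `base`, and `wear` rates are assumptions, not measurements): an infant-mortality term that decays quickly, a constant background rate, and a wear-out term that grows with age. With these numbers, the first year of a brand-new machine is riskier than year four of a machine that has already survived its burn-in.

```python
import math

def hazard(t, infant=0.10, base=0.02, wear=0.02):
    """Toy annualized failure rate at age t (years). All rates are
    illustrative assumptions, not data from any real fleet."""
    return (infant * math.exp(-t / 0.5)   # infant mortality, decays fast
            + base                        # constant background rate
            + wear * (t / 10.0) ** 3)     # wear-out, grows with age

def fail_prob(t0, t1, steps=1000):
    """Probability of failing between ages t0 and t1, by numerically
    integrating the hazard (midpoint rule)."""
    dt = (t1 - t0) / steps
    cumulative = sum(hazard(t0 + (i + 0.5) * dt) for i in range(steps)) * dt
    return 1.0 - math.exp(-cumulative)

p_new  = fail_prob(0.0, 1.0)  # first year of a replacement machine
p_keep = fail_prob(3.0, 4.0)  # year four of the existing machine

print(f"new machine, year 1:  {p_new:.3f}")
print(f"old machine, year 4:  {p_keep:.3f}")
```

Under these assumed rates the replacement is the riskier box for the next year, which is the point: swapping hardware on a schedule re-buys the steep left edge of the curve every time.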
There is a completely separate issue: your operation has to be resilient against the failure of a computer. But replacing the hardware doesn't really get you that. At best, you would get the same effect by deliberately formatting the drive every year and practicing your restore from scratch. No need to buy new hardware for that.