> No, balancing is about safety in the first place. It was not added for performance. Better performance is only a side effect of balancing.
No, balancing is primarily a performance matter: it maximizes the usable pack capacity over the product's lifetime. It does little for safety, although, done right, it can offer an extra safety layer when the primary safety layers fail.
You are confusing balancing with monitoring and individual cell-level cutoffs. Safety comes from monitoring cell voltages individually: charging is cut when any cell of the series string exceeds its maximum voltage, and discharging is cut when any cell goes under its minimum voltage.
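As a rough sketch of what that cell-level logic boils down to (the voltage thresholds below are illustrative Li-ion numbers I picked, not from any particular chemistry or IC):

```
# Minimal sketch of cell-level monitoring (not balancing). Thresholds are
# illustrative Li-ion values, not taken from any particular IC.
CELL_MAX_V = 4.20   # per-cell high-voltage cutoff (HVC)
CELL_MIN_V = 2.80   # per-cell low-voltage cutoff (LVC)

def charge_allowed(cell_voltages):
    """Stop charging as soon as ANY cell in the series string hits its maximum."""
    return all(v < CELL_MAX_V for v in cell_voltages)

def discharge_allowed(cell_voltages):
    """Stop discharging as soon as ANY cell drops to its minimum."""
    return all(v > CELL_MIN_V for v in cell_voltages)

# One high cell in a 4s pack blocks charging even though the pack voltage
# (16.45 V) still looks fine against a 4 x 4.20 V = 16.80 V pack-level limit.
print(charge_allowed([4.05, 4.10, 4.22, 4.08]))     # False
print(discharge_allowed([3.30, 3.28, 3.31, 3.29]))  # True
```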
If you think about it for a while, you will see why balancing cannot do that: balancing currents are usually much smaller than charge currents, let alone discharge currents. They cannot fully bypass a cell and prevent it from overcharging or overdischarging.
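To put rough, made-up numbers on that:

```
# Why passive balancing cannot protect a cell: even with its bleed resistor
# fully on, the full cell still carries most of the charge current.
# Values are illustrative, not from any specific charger or BMS.
charge_current_a  = 2.0   # charger current through the series string
balance_current_a = 0.1   # typical passive bleed current (tens of mA to ~100 mA)

net_into_cell_a = charge_current_a - balance_current_a
print(f"Current still charging the already-full cell: {net_into_cell_a:.2f} A")
# -> 1.90 A; the balancer diverts only ~5 % of the charge current, nowhere
# near enough to stop that cell from overcharging if the charger keeps going.
```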
But don't worry, you are not the only one who confuses the two. It's not surprising, because most of the engineering difficulty lies in properly measuring the cell voltages (level shifting, with minimized quiescent-current differences between cells, while maintaining good accuracy). Once you have solved that, balancing is a small addition, and nearly every IC which does monitoring also does balancing. And "balancing" is a catchy word, so people tend to assume that balancing implies having monitoring, cell-level HVC and cell-level LVC.
But once you start saying that balancing is the important part for safety, you have taken that misunderstanding too far, and it needs to be corrected: no, safety comes from cell-level monitoring, and balancing is the obvious extra that nearly everyone implements.
Now, the "maybe extra safety layer" I mention is this: if you had a truly redundant way to reliably balance the cells, then if the monitoring system failed, a pack-level HVC (at least in the form of CV mode + stop timer implemented by the charger) and pack-level LVC, especially the former one, have better chances of preventing individual cells from going beyond limits, if those cells are balanced at top, compared to the situation where they are unbalanced. If you think about this deeper though, it is possible that the balancer fails too and unbalances the pack, making the situation worse. And in reality, balancing and monitoring is combined usually so that they would fail together.
Now, if you think that balancing alone, without monitoring of cell voltages (which would mean a distributed balancer, with no level-shifting of the voltages out to any central decision-making), combined with only pack-level LVC/HVC, is a safe enough solution, I would be worried. Balancing can only be done at either the top or the bottom; the other end will be unbalanced* because of the capacity differences that develop over the lifetime. And you would be adding new sources of error, namely the balancer itself: every balancer acts as an unbalancer because of unmatched quiescent draw. Combine that with an iffy balancing algorithm - e.g. the typical "only balance above a certain voltage" - and the lack of cell-level monitoring, and you create edge cases where the unbalance grows and is never rectified and never noticed (a toy illustration follows after the footnote). For such cheap-assiness, I recommend going completely BMS-less instead: good initial balancing, a smallish pack, good cell quality, good initial cell matching and initial factory balancing. No connections to the taps at all, so external unbalancing is impossible. Big names did that for many years without much problem.
* (ignoring very powerful balancers, but that's mostly academic)
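Here is a toy illustration of one such edge case (every number is invented): a pack that is routinely charged only partway never reaches the "balance above X volts" threshold, so the balancer's own mismatched quiescent draw keeps widening the gap, and nothing ever corrects or reports it.

```
# Toy model of the "only balance above a threshold" edge case.
# All numbers are invented; the point is the mechanism, not the magnitudes.
BALANCE_THRESHOLD_V = 4.05      # balancer only bleeds cells above this voltage
CYCLES = 500

voltages = [3.90, 3.90, 3.90, 3.90]  # 4s pack, initially balanced
extra_drain_v = 0.0005               # per-cycle effect of mismatched quiescent
                                     # draw on cell 0's tap, expressed as voltage

for _ in range(CYCLES):
    # Pack-level charging pushes the same current through every cell in the
    # string, so it raises all cells equally and cannot close any gap.
    # Here the user only ever charges to a 3.95 V/cell average.
    avg = sum(voltages) / len(voltages)
    voltages = [v + (3.95 - avg) for v in voltages]
    # No cell ever exceeds the threshold, so the balancer never activates.
    assert all(v < BALANCE_THRESHOLD_V for v in voltages)
    voltages[0] -= extra_drain_v     # the mismatch keeps dragging cell 0 down

print(f"cell 0: {voltages[0]:.2f} V, the rest: {voltages[1]:.2f} V")
# After 500 such cycles cell 0 sits ~0.25 V below its neighbours, and with no
# cell-level monitoring the first symptom is that cell dipping under its
# minimum during a discharge.
```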