In my experience, the biggest problem with a near-fully utilised FPGA isn't so much technical as commercial.
On day one, you design a board with an FPGA that's a good fit for your design: it has the features you need, plus a reasonable, but not excessive, amount of free space for future development. FPGAs are expensive, of course, so you pick the device you need, not one which is arbitrarily bigger. All is well.
Over time, features get added. The FPGA utilisation increases. "Can you just add...?" takes its toll.
If the design has a long lifetime - which a good, successful one will - you end up maintaining a device with a much higher utilisation than you'd like. Changes take longer and longer, as each one pushes the device's internal Fmax below what it needs to be. Every change has unintended consequences elsewhere, because you end up having to optimise whatever logic is now running a little too slowly. The risk of bugs creeping into code that was previously tried and tested, and which should have remained completely untouched, rises sharply.
You can't now specify a bigger device, because each new version of the design must still run on the hardware that's already in the field.