With the huge resources available on FPGAs, I guess TTM is more important than efficiency...
That reasoning doesn't hold in general; it depends on all the constraints, and of course it only holds when development costs outweigh parts cost, which in turn depends on your market and the number of units you'll sell...
Those tools are aimed more at large FPGAs, which still tend to be pretty expensive. Wasting resources in CPU-based designs is common these days because compute has become very cheap, but on FPGAs, not so much. If you have to move to a much larger FPGA to leverage those new tools, the part can end up costing 2x, 3x or more what it would have if the resources were used efficiently. Beyond cost, there's also the question of power consumption: wasting resources in a large FPGA can mean much higher power draw...
I for one think those "new" approaches are mainly useful for the typical case of designing "accelerators", basically translating software algorithms so they can run very efficiently (in terms of processing power) on FPGAs. Sure, this use case is a growing market for FPGAs, but it's still far from the only one.
Finally, as I say on a regular basis, there's life beyond FPGAs for HDLs: you may have to design an IC, or port an FPGA-based design to an ASIC. If you're using those vendor-locked FPGA tools, there's a big chance your design won't be reusable in the least, whether for switching to a different FPGA vendor or for porting it to an ASIC. Those tools lock you in.
As a corollary, that also means that as an engineer, the skills you acquire using those tools will be much less "portable" than those you gain mastering general-purpose HDLs.
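To make the portability point concrete, here's a minimal sketch (module and parameter names are just illustrative) of the kind of plain, behavioral Verilog that stays portable: any mainstream synthesis tool, FPGA or ASIC flow, can infer a memory from it, whereas a design built around instantiated vendor primitives has to be reworked for every new target.

```verilog
// Vendor-neutral simple dual-port RAM, written as plain synthesizable
// Verilog so the synthesis tool infers the memory instead of relying on
// a device-specific RAM primitive.
module simple_dp_ram #(
    parameter DATA_W = 8,
    parameter ADDR_W = 10
) (
    input  wire              clk,
    input  wire              we,
    input  wire [ADDR_W-1:0] waddr,
    input  wire [DATA_W-1:0] wdata,
    input  wire [ADDR_W-1:0] raddr,
    output reg  [DATA_W-1:0] rdata
);
    reg [DATA_W-1:0] mem [0:(1<<ADDR_W)-1];

    always @(posedge clk) begin
        if (we)
            mem[waddr] <= wdata;   // synchronous write port
        rdata <= mem[raddr];       // registered read port
    end
endmodule
```

Nothing in there ties you to a particular FPGA family, which is exactly what you lose once the design (or your skill set) is expressed in a vendor-locked tool's own abstractions.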