Besides that, it is also more work (more code to write) to use the library components.
Writing "a = b*d + c" is much easier, and the synthesizer will come up with the best solution.
Using library components in HDL is a remnant of schematic entry and should be avoided as well.
I think it is better to play at every level for a while, and see what each offers.
Schematic entry sucks for describing fine details, but at the top level, where you connect three or four major blocks (e.g. at the Vivado IP Integrator level), it sort-of works better than a huge HDL module that just wires components together.
CoreGen-style IP blocks allow you to get complex things that work correctly with minimal effort. For example, if you want a dual-clock FIFO, I would suggest you use an IP block rather than inferring your own. This is where Intel/Altera have the edge on Xilinx with their MegaFunctions, which let you include common parameterised IP in your design without the churn of re-running the IP wizard every time you want to change a single setting.
I feel it is silly to use IP blocks for things that are almost primitives (e.g. MMCMs and PLLs), because instantiating the HDL primitives directly is much less effort once you climb the learning curve of your first design.
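For what it's worth, direct instantiation isn't much code. Here is a minimal sketch for a Xilinx 7-series MMCM, making 200 MHz from a 100 MHz input - the multiply/divide values are example numbers of mine, so check the clocking user guide (UG472) for the legal VCO range on your speed grade:

    // Minimal MMCME2_BASE instantiation (7-series). Example numbers:
    // VCO = 100 MHz * 10.0 / 1 = 1000 MHz, CLKOUT0 = 1000 / 5.0 = 200 MHz.
    module clkgen (
        input  wire clk100,   // 100 MHz board clock
        output wire clk200,   // 200 MHz generated clock
        output wire locked
    );
        wire clkfb, clk200_unbuf;

        MMCME2_BASE #(
            .CLKIN1_PERIOD    (10.0),   // 100 MHz input
            .CLKFBOUT_MULT_F  (10.0),
            .DIVCLK_DIVIDE    (1),
            .CLKOUT0_DIVIDE_F (5.0)
        ) mmcm_i (
            .CLKIN1   (clk100),
            .CLKFBIN  (clkfb),
            .CLKFBOUT (clkfb),          // feedback loop
            .CLKOUT0  (clk200_unbuf),
            .LOCKED   (locked),
            .RST      (1'b0),
            .PWRDWN   (1'b0)
        );

        // Generated clocks need a global buffer before driving logic.
        BUFG bufg_i (.I(clk200_unbuf), .O(clk200));
    endmodule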
So back to inferring "a = b*d + c"... the problem I see with jumping straight to this is that you don't see the constraints being pushed up from the FPGA architecture itself. What options do you have?
How big should 'b', 'c' and 'd' be to get the best result? For example:
- Is it better if 'b' and 'd' are 16, 17 or 18 bits?
- Does 'b' or 'd' being signed or unsigned make a difference?
- How many cycles of latency give the best performance?
- How many cycles of latency are needed for minimal resource usage?
- How many bits are too big for 'a' or 'c' for an efficient implementation?
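To make that concrete, here is a sketch (my own sizing, not gospel) of a multiply-add that should drop straight into a single 7-series DSP48E1, which has a 25x18-bit signed multiplier and a wide post-adder. Note how the operand widths and the three register stages come from the silicon, not from the algorithm:

    // Multiply-add sized for one 7-series DSP48E1 (25x18 signed
    // multiply, wide post-adder). The register stages mirror the
    // DSP's internal pipeline: input regs -> multiply reg -> adder reg.
    module mul_add (
        input  wire               clk,
        input  wire signed [24:0] b,   // 25-bit multiplier port
        input  wire signed [17:0] d,   // 18-bit multiplier port
        input  wire signed [42:0] c,
        output reg  signed [43:0] a
    );
        reg signed [24:0] b_r;
        reg signed [17:0] d_r;
        reg signed [42:0] c_r, c_rr;
        reg signed [42:0] m_r;

        always @(posedge clk) begin
            b_r  <= b;           // input registers
            d_r  <= d;
            c_r  <= c;
            m_r  <= b_r * d_r;   // 25x18 signed product fits in 43 bits
            c_rr <= c_r;         // delay 'c' to line up with the product
            a    <= m_r + c_rr;  // one growth bit for the final add
        end
    endmodule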
Likewise, for RAM, what sizes are good to use? Should you make 64kB asynchronous memory blocks, or should you accept a cycle of read latency? Is a 10-bit x 1k entry clocked RAM block a good idea? How about a 10-bit x 2k entry one?
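For reference, the usual inference template for a clocked RAM looks like the block below. The registered read is what lets the tools map it to block RAM at all, and 1k x 18 happens to exactly fill one 7-series 18Kb block RAM (the sizing here is my example, not from the discussion above):

    // Simple dual-port RAM with a registered (synchronous) read.
    // The one-cycle read latency is what allows block RAM mapping;
    // 1k x 18 exactly fills a 7-series 18Kb block.
    module ram_1kx18 (
        input  wire        clk,
        input  wire        we,
        input  wire [9:0]  waddr,
        input  wire [17:0] wdata,
        input  wire [9:0]  raddr,
        output reg  [17:0] rdata
    );
        reg [17:0] mem [0:1023];

        always @(posedge clk) begin
            if (we)
                mem[waddr] <= wdata;
            rdata <= mem[raddr];   // registered read => block RAM
        end
    endmodule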
All of these have a (more or less) definitive answer, but if you stick to inference and trust the tools, you will get what you asked for, not what is best. Unless you have awareness, experience and understanding of the low-level details, you will usually end up with a sub-optimal result - and putting the effort into implementing a design in an FPGA isn't about getting a sub-optimal result.
However, once you know the limitations of the underlying architecture, you can write HDL that naturally maps onto the FPGA as if by magic: memories fit efficiently into RAM blocks with no waste, math fits into DSP blocks and runs fast, shift registers fit into LUTs and don't eat huge numbers of flip-flops, and so on.
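As an example of the shift-register case, a delay line written with no reset on the shifted data is exactly the shape that packs into SRL LUTs rather than chains of flip-flops. A sketch, with my own depth and width choices:

    // 32-deep delay line with no reset on the data path -- the shape
    // that maps to SRL32 LUTs (roughly one LUT per bit) instead of
    // 32 flip-flops per bit.
    module delay32 #(
        parameter WIDTH = 8
    )(
        input  wire             clk,
        input  wire [WIDTH-1:0] din,
        output wire [WIDTH-1:0] dout
    );
        reg [WIDTH-1:0] sr [0:31];
        integer i;

        always @(posedge clk) begin
            sr[0] <= din;
            for (i = 1; i < 32; i = i + 1)
                sr[i] <= sr[i-1];
        end

        assign dout = sr[31];
    endmodule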
If you target multiple FPGAs, you want to put a wrapper around anything that is sensitive to the FPGA architecture, isolating it from the rest of the design and allowing you to replace it as needed - but that isn't a beginner's issue.
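Something like this is what I mean by a wrapper - the module name and generics are purely illustrative:

    // Architecture-neutral wrapper: the rest of the design only ever
    // talks to this interface; only the body changes per FPGA family.
    module ram_wrap #(
        parameter AW = 10,
        parameter DW = 18
    )(
        input  wire          clk,
        input  wire          we,
        input  wire [AW-1:0] waddr,
        input  wire [DW-1:0] wdata,
        input  wire [AW-1:0] raddr,
        output reg  [DW-1:0] rdata
    );
        // Portable inference body; swap in a vendor primitive or IP
        // instance here if a particular family needs it.
        reg [DW-1:0] mem [0:(1<<AW)-1];

        always @(posedge clk) begin
            if (we)
                mem[waddr] <= wdata;
            rdata <= mem[raddr];
        end
    endmodule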