There is not much difference [between hardware and software], just a different set of rules. With (current) software you have some kind of linear processing unit with limited parallel capabilities, plus temporary (RAM) and permanent (flash/hard disk) storage, each with its own rules for how it works and interacts. When you write software, you design it on top of those rules, and you can test on real computers whether it runs.
A key difference is the order of the complexity, and of the emergent behaviour (classic example: which "rule" governing individual grains of sand leads to sand piles being conical with a half-angle of 35 degrees?)
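As a toy illustration of that emergent-behaviour point (a sketch of my own, not anything from the original discussion): the Bak-Tang-Wiesenfeld sandpile model won't give you the 35-degree cone, which needs real granular physics, but it shows how a rule as simple as "a cell holding four grains topples, giving one grain to each neighbour" produces global behaviour (avalanches of wildly varying size) that you cannot read off the local rule itself.

```python
import random

N = 15                                    # grid is N x N, initially empty
grid = [[0] * N for _ in range(N)]

def topple(grid):
    """Relax the grid until stable; return the number of topplings (avalanche size)."""
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for y in range(N):
            for x in range(N):
                if grid[y][x] >= 4:       # the entire "physics": 4 grains -> topple
                    grid[y][x] -= 4
                    avalanche += 1
                    unstable = True
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < N and 0 <= ny < N:
                            grid[ny][nx] += 1   # grains pushed off the edge are lost
    return avalanche

random.seed(1)
sizes = []
for _ in range(3000):                     # drop grains one at a time at random spots
    x, y = random.randrange(N), random.randrange(N)
    grid[y][x] += 1
    sizes.append(topple(grid))

print("largest avalanche triggered by a single grain:", max(sizes))
```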
1) In the kind of software system you outline, complexity rises roughly in proportion to the number of components. There are exceptions, of course, especially in a large system built from many components produced by multiple manufacturers, e.g. banking or telecoms (i.e. more like a typical hardware system).
2) With a hardware-on-a-PCB system, or inside an FPGA, however, the complexity rises as a power (at least 2!) of the number of components, simply because there is, in general, a much higher degree of "random" interconnectivity between components. There are exceptions, of course, especially where the architecture has a uni-directional dataflow (i.e. more like a simple software system); a rough counting sketch of the difference follows just below.
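To make points 1) and 2) concrete, here is a back-of-the-envelope sketch of my own, under the simplifying assumption that "complexity" roughly tracks the number of component-to-component interactions you have to reason about: a pipeline-style system has about n-1 interfaces, while a freely interconnected board or FPGA has up to n(n-1)/2 potential pairs.

```python
def pipeline_interactions(n: int) -> int:
    """Components chained one after another: roughly n - 1 interfaces to reason about."""
    return n - 1

def dense_interactions(n: int) -> int:
    """Any component may interact with any other: n * (n - 1) / 2 potential pairs."""
    return n * (n - 1) // 2

# Toy counts only, not a real complexity metric; the point is the growth rate.
for n in (10, 100, 1000):
    print(f"{n:>5} components: pipeline ~{pipeline_interactions(n):>6}, "
          f"free interconnect ~{dense_interactions(n):>6} interactions")
```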
(For simplicity, I've ignored the important "abstraction leakages" which make the issues even less tractable. Software examples: cache capacity, page thrashing, NUMA, resource contention. Hardware examples: high clock fanout switching problems, speed of light, analogue effects in nominally digital circuits, component parasitics, non-lumped component behaviour)
But don't worry: other people with lots of money to spend have noticed the same routing problems. So, if it is a soluble problem, new tools are just around the corner.
But note that caveat!