I don't know that speed is actually the issue.
Speed is usually an issue. Poorly constrained implementations lead to "trivial" changes drastically changing the maximum speed.
I don't find the tools to be the problem with speed. Speed problems usually come from either poorly written code, meaning it expands into large amounts of logic, or an overly crowded chip that prevents good placement and efficient routing.
You misunderstand. I've seen a design with, IIRC, around 40% occupancy, mostly datapath with a relatively small amount of control logic. A trivial change in the control logic caused a factor of 4 change in the maximum clock rate. Nailing down the data path reduced the clock rate change to essentially zero.
What does "nailing down the data path" mean exactly?
Do you understand what caused the change in clock rate?
Go figure.
In some cases I've had to resort to manually nailing down blocks so that the placement can give remotely useful results.
I always consider floorplanning when starting a design. It goes with the territory of defining clock domains before starting implementation.
I'm not sure what that means. Floorplanning is something you do to constrain placement. It has no meaning before you have a design. If your designs are so hard to close timing on that you have to plan the placement before you do the design, then you are in the 1% of designs that are pushing the chips to their limits.
I disagree. Perhaps my designs are pushing the limits harder than yours.
Ok, so you have more designs in the 1% then.
I don't normally have designs like that.
"Thinking in procedural languages" when designing software systems leads to fragile distributed/networked software applications. People ignore that their design could fail because of a computer they didn't know existed and which is under the control of another company.
Thinking in procedural languages when designing FPGAs leads to fragile hardware implementations where I/O has multiple different clocks. People ignore that their design could fail due to metastability problems, even in plesiochronous systems where the clock rates are nominally the same.
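(For reference, the usual defense against single-bit metastability at a clock domain crossing is a two-stage synchronizer. A minimal VHDL sketch, with illustrative names; the ASYNC_REG attribute is the Xilinx spelling, and other vendors use different attributes:)

```vhdl
-- Two-stage synchronizer for a single-bit, level-type signal
-- crossing into clk_dst's domain. Entity/signal names are illustrative.
library ieee;
use ieee.std_logic_1164.all;

entity sync_2ff is
  port (
    clk_dst  : in  std_logic;  -- destination-domain clock
    async_in : in  std_logic;  -- signal arriving from another clock domain
    sync_out : out std_logic   -- safe to use in clk_dst's domain
  );
end entity;

architecture rtl of sync_2ff is
  signal meta, stable : std_logic := '0';
  -- Tell the tool these form a synchronizer chain (vendor-specific;
  -- ASYNC_REG is the Xilinx attribute).
  attribute async_reg : string;
  attribute async_reg of meta, stable : signal is "TRUE";
begin
  process (clk_dst)
  begin
    if rising_edge(clk_dst) then
      meta   <= async_in;  -- may go metastable; never used directly
      stable <= meta;      -- gets a full clock period to resolve
    end if;
  end process;
  sync_out <= stable;
end architecture;
```

This only covers single-bit level signals; buses and pulses need handshakes or async FIFOs.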
I think you are carrying that water too far and your bucket has a leak.
Unhelpful analogy which gives no insight. I've seen the effects I've described.
Ok, then can you provide some examples of "thinking in procedural languages"? I've never seen anyone code an entire design in sequential code such as processes, although that's perfectly reasonable too. I don't know of any designs that can't be done in sequential code. So I'm thinking I'm not following what you mean by the term.
You typically can't infer special-purpose blocks in the HDL. Clock multiplexers are a good example; I've never seen an HDL inference that would work in place of an instantiation. SERDES is just one of those blocks of logic that can't be inferred, no?
Yes, as I said.
They have to be instantiated, and that doesn't fit well into the mentality of using a procedural language.
Whatever that means. My point is that there are times when instantiation is required. Otherwise, I use inference, which works just fine. I typically have some idea of what logic, and in particular what registers, will be inferred. It's not my goal to dominate the tool and use inference as a substitute for instantiation, meaning controlling the inference so tightly that I get exactly what I want. The point is to let the tool do the walking!
I have no idea what you mean by "dominate the tool".
Inference and instantiation are orthogonal. You can (and do) have inference with and without instantiation.
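(A sketch of that orthogonality: one architecture can mix an instantiated vendor primitive with ordinary inferred logic. Assuming a Xilinx part here, where BUFGMUX is the glitch-free clock mux primitive from the unisim library; port names taken from the vendor documentation:)

```vhdl
-- Inference and instantiation side by side (Xilinx names assumed).
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
library unisim;
use unisim.vcomponents.all;

entity mixed_style is
  port (
    clk_a, clk_b : in  std_logic;
    sel          : in  std_logic;
    count        : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of mixed_style is
  signal clk_mux : std_logic;
  signal cnt     : unsigned(7 downto 0) := (others => '0');
begin
  -- Instantiated: a glitch-free clock mux has no inference template,
  -- so the primitive is named explicitly.
  u_bufgmux : BUFGMUX
    port map (I0 => clk_a, I1 => clk_b, S => sel, O => clk_mux);

  -- Inferred: an ordinary clocked process; the tool chooses the registers.
  process (clk_mux)
  begin
    if rising_edge(clk_mux) then
      cnt <= cnt + 1;
    end if;
  end process;
  count <= std_logic_vector(cnt);
end architecture;
```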
Software programmers, particularly when using C and higher optimisation levels, often have to look at the asm code to see if there are any "unexpected" effects (that's why the Linux kernel uses -O2 not -O3). Similarly it is often beneficial (and sometimes necessary) to look at what the synthesis and placement has done with your design.
I've not seen software people dig into the asm code routinely unless they HAD a problem to solve. No one codes in C caring about the asm, mostly because it is hard to understand after the tool is done optimizing it. Often it is much more efficient than what is hand coded.
I look at the synthesis result on small blocks that I wish to optimize or when learning to use a new tool. If this is done routinely, as with writing C code, it wastes a lot of time. I use the same coding styles over and over, so once I've learned a tool, I don't need to keep looking over my shoulder to check that the result is effective.
Here's an analogy. I got a kayak and proceeded to kayak for some weeks or even months before learning from others. I found I was gripping the paddle too tightly, trying to control every aspect of its motion. I learned that the paddle finds its own center and a relaxed grip is much less tiring. I code that way as well, relaxed, using the style that works for the tool I'm using. I don't need to keep checking it unless there's a problem.
Like with the kayak paddle, if you are getting poor results, you probably have a poor style for that tool. Rather than fix things with instantiation (gripping the paddle tightly) it is better to learn how to code properly for the tool (a relaxed grip with an understanding of how the paddle should be used). I find it much less tiring to work with the tools, rather than to beat them into submission with excessive use of instantiation.
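(To make the paddle analogy concrete: synthesis tools document inference templates, and writing to the template, rather than instantiating a memory primitive, is the "relaxed grip". A generic VHDL sketch of the single-port RAM pattern most synthesizers recognize and map to block RAM; the widths and names are illustrative:)

```vhdl
-- Standard single-port RAM template: no primitive instantiation,
-- the tool recognizes the pattern and infers a block RAM.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram_sp is
  port (
    clk  : in  std_logic;
    we   : in  std_logic;
    addr : in  unsigned(9 downto 0);
    din  : in  std_logic_vector(15 downto 0);
    dout : out std_logic_vector(15 downto 0)
  );
end entity;

architecture rtl of ram_sp is
  type ram_t is array (0 to 1023) of std_logic_vector(15 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(addr)) <= din;
      end if;
      dout <= ram(to_integer(addr));  -- registered read port
    end if;
  end process;
end architecture;
```

Deviate from the documented template (say, an asynchronous read) and the tool quietly falls back to distributed registers or LUT RAM, which is exactly the "poor style for that tool" problem.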
Note that those benefits are the same as the benefits of wiring ICs together.
As opposed to writing HDL and using programmable logic?
I was discussing principle. Using pre-existing blocks, whether IPs or ICs, is conceptually very similar. The size/complexity hidden within the blocks changes over time, but the principle remains unchanged.
So you are talking about modularity? Yes, but even with that, my designs use most modules exactly once. It's more a way of decomposing the design to give it structure and a means of testing units rather than the whole. Instantiation is seldom about component reuse, because most components don't get reused. Not much like wiring ICs together, where you use the same ICs over and over.
We are discussing several effects, in this case modularity.
Whatever the modularity, whether or not you are instantiating one or more instances of a component is irrelevant.
I'm not sure what you are discussing. Modularity in HDL is done with instantiation mostly. Some people use procedures, but I find that limiting.
Modularity is to facilitate understanding the design, as well as testing and decision hiding. I don't use it to control layout, mostly because I seldom need to control layout. Controlling layout to achieve timing falls under the heading of optimization, which is to be avoided if possible. I certainly don't do it routinely.