Heavily configured FPGAs

AussieBruce:
Can anyone give me any information on what happens when an FPGA is ‘heavily populated’, i.e. as the number of elements used approaches the total number on the device? Will the IDE be any help, e.g. by providing warnings? Are there any guidelines on things like the % utilisation above which trouble might emerge? And are there design procedures that can help extend the loading? (I appreciate that this last question might require an entire book – or an entire career – to answer.)

ataradov:
The design will either fit into the device while meeting all the constraints, or it will not. There is no other option.

It depends on the vendor and the strategy you set, but often the tools will optimize just enough to meet the requirements and will not try to do the best possible job. As utilization goes up, the tools simply have to try harder and harder.

This may create a huge issue: one day, a minor change may make it impossible to place the design, even if there are seemingly resources left. For example, there may not be enough routing resources even though plenty of registers remain free.

There is no direct advance warning of that as far as I know. An indirect warning is that the design suddenly takes much longer to place.

And the thing that becomes more important as utilization goes up is that your constraints are actually set up correctly.
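
For example, a minimal generic SDC-style sketch (the port names and the 100 MHz period are purely illustrative, not from any real design):

    # Define the main clock: 100 MHz on port 'clk' (hypothetical name and period)
    create_clock -name sys_clk -period 10.000 [get_ports clk]

    # Constrain I/O timing relative to that clock so PAR knows the real requirements
    set_input_delay  -clock sys_clk 2.0 [get_ports {data_in[*]}]
    set_output_delay -clock sys_clk 2.0 [get_ports {data_out[*]}]

    # Mark a genuinely asynchronous input so the tools don't waste effort on it
    set_false_path -from [get_ports async_rst_n]

With loose or missing constraints the tools may stop optimizing paths that actually matter, and the problem only shows up once utilization makes every path tight.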

The general rule of thumb we used is that no more than 80% of any resource (except for pins) should be used. After that, a design is either optimized or moved to a higher-capacity FPGA. But that is for generic logic. Some designs for DSP were big but heavily pipelined and explicitly optimized for the FPGA architecture (down to manual placement of blocks). In those cases 95%+ utilization was possible with no issues.
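
To illustrate the kind of pipelining that makes that possible, here is a minimal Verilog sketch (made up for illustration, not from any of those designs):

    // Pipelined multiply-accumulate: a register between every stage gives the
    // tools slack to pack logic densely while still meeting timing.
    module pipelined_mac #(parameter W = 16) (
        input  wire             clk,
        input  wire [W-1:0]     a,
        input  wire [W-1:0]     b,
        output reg  [2*W-1:0]   acc
    );
        reg [W-1:0]   a_r, b_r;   // stage 1: register the inputs
        reg [2*W-1:0] prod_r;     // stage 2: register the product
                                  // (maps onto a DSP block on most FPGAs)
        always @(posedge clk) begin
            a_r    <= a;
            b_r    <= b;
            prod_r <= a_r * b_r;
            acc    <= acc + prod_r;  // stage 3: accumulate
        end
    endmodule

Because each stage is short, the placer has far more freedom in where it puts each register, which is what keeps very dense designs routable.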

james_s:
The most obvious effect I've observed is that compiling (or fitting, or whatever you want to call the process) takes a lot longer when the device is very full. IIRC it can take several times as long, as the tools grind away trying to optimize the design enough to fit.

hamster_nz:
For higher performance designs, the Fmax will usually also fall due to routing congestion.

Also note that some FPGAs come in different marketing sizes but have the same physical die, differentiated only by device-ID fusing.

So sometimes moving up to the next size in the same range doesn't give you more physical resources and will have zero impact on performance or build times.

SiliconWizard:
Yep. As utilization grows, the PAR (place-and-route) execution time will tend to grow a lot. And yes, Fmax will tend to decrease as well. For a high-utilization design you may need to take steps to direct the PAR process through a number of constraints, which is never trivial.

A recent example is a "small" RISC-V SoC I've implemented on a Lattice ECP5-25 (25k LUTs). Depending on how I configure it, it takes between 60% and 85% of the total LUTs. At around 60% utilization, PAR time is about 4-5 minutes (yeah, it's already a moderately complex design). At around 85%, it can take over 30 minutes.
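
When PAR starts struggling like that, one flow-dependent trick is to sweep placer seeds and keep the best run. A sketch assuming the open-source Yosys/nextpnr ECP5 flow (the file names and package are hypothetical, and I'm not claiming that's the flow used here):

    # Try several placement seeds; a different seed can meet timing
    # where the default run fails on a congested design.
    for seed in 1 2 3 4 5; do
        nextpnr-ecp5 --25k --package CABGA381 \
                     --json soc.json --lpf pins.lpf \
                     --textcfg soc_seed${seed}.config --seed ${seed}
    done

Each run is independent, so the sweep also parallelizes trivially across cores or machines.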

As a rule of thumb, experience has shown me that starting at around 75-80% LUT utilization, on most FPGAs, things start to get hairy.
