Author Topic: Hitting a breakpoint after n times!  (Read 4389 times)


Offline ali_asadzadehTopic starter

  • Super Contributor
  • ***
  • Posts: 1904
  • Country: ca
Hitting a breakpoint after n times!
« on: July 12, 2021, 10:00:47 am »
Hi,
I want to know how I can set a breakpoint in ModelSim that triggers only after, say, 100 hits. If I set breakpoints now, the simulation stops every time the code reaches them, which is the expected behaviour, but not what I want here.
Thanks
ASiDesigner, Stands for Application specific intelligent devices
I'm a Digital Expert from 8-bits to 64-bits
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Re: Hitting a breakpoint after n times!
« Reply #1 on: July 12, 2021, 03:49:55 pm »
Something like the opposite of this:

Code: [Select]
// Setup a simulation inactivity watchdog countdown timer.
always @(posedge CLK_IN) WDT_COUNTER = (SEQ_BUSY_t != SEQ_CMD_ENA_t) ? WDT_RESET_TIME : (WDT_COUNTER - 1'b1);

// Automatically stop the simulation if the inactivity timer reaches 0.
always @(posedge CLK_IN) if (WDT_COUNTER == 0) begin
                             Script_CMD = "*** WDT_STOP ***";
                             $stop;
                         end

Once stopped, you may 'run -all' in the transcript to continue.
Also, in the second if, you may reset the WDT_*** to a preset value so it will stop again automatically after that new number of iterations.
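For the "stop after the Nth hit" version you asked about, a minimal sketch (not from my own testbench; BREAK_EVENT and CLK_IN are just placeholder names for your clock and the condition you want to count) could look like:

Code: [Select]
// Minimal sketch: count hits of some condition and stop the simulation on the Nth one.
// BREAK_EVENT and CLK_IN are placeholders for your own clock and trigger condition.
localparam HIT_LIMIT = 100;
integer    hit_count = 0;

always @(posedge CLK_IN) begin
    if (BREAK_EVENT) begin
        hit_count = hit_count + 1;
        if (hit_count == HIT_LIMIT) begin
            $display("*** Breakpoint condition hit %0d times at %0t ***", hit_count, $time);
            $stop;   // hand control back to the ModelSim transcript
        end
    end
end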
« Last Edit: July 12, 2021, 03:52:45 pm by BrianHG »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14465
  • Country: fr
Re: Hitting a breakpoint after n times!
« Reply #2 on: July 12, 2021, 06:50:26 pm »
I would implement this in the test bench HDL as Brian suggested.

That said, the idea of setting "breakpoints" for HDL simulation sounds pretty odd to me. Everything can be done much better, more efficiently, and in a more reproducible way using HDL.

 

Offline ali_asadzadehTopic starter

  • Super Contributor
  • ***
  • Posts: 1904
  • Country: ca
Re: Hitting a breakpoint after n times!
« Reply #3 on: July 13, 2021, 05:56:48 am »
Thanks BrianHG. I have designed a DSP engine; it needs quite a few clock cycles to produce its results, but some of them are wrong, so I needed a way to break after the loop has run n times. It's odd that they haven't provided an easy way of doing this in ModelSim, since it can easily be done in ARM toolchains like Keil :palm:
ASiDesigner, Stands for Application specific intelligent devices
I'm a Digital Expert from 8-bits to 64-bits
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4951
  • Country: si
Re: Hitting a breakpoint after n times!
« Reply #4 on: July 13, 2021, 06:19:30 am »
Thanks BrianHG. I have designed a DSP engine; it needs quite a few clock cycles to produce its results, but some of them are wrong, so I needed a way to break after the loop has run n times. It's odd that they haven't provided an easy way of doing this in ModelSim, since it can easily be done in ARM toolchains like Keil :palm:

An FPGA is not an ARM CPU. They are entirely different things, so they need entirely different debugging methods.

This is an excellent use case for testbenches. Generating the test input signals for the DUT is not the only job a testbench can do. The same testbench can also verify the results and stop the simulation when it finds a wrong result. This way you can just let it run for hours on end, pumping test cases through the DUT, yet it will halt on a problem to let you examine what happened and note down the inputs that caused it, letting you rerun the simulation with those inputs to reliably reproduce it.
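As a minimal sketch of such a check (clk, dut_valid, dut_result and expected_result are placeholder names, not from any particular design):

Code: [Select]
// Sketch of a self-checking testbench fragment: compare the DUT output against
// a reference value whenever a result is flagged as valid, and halt on mismatch.
// clk, dut_valid, dut_result and expected_result are placeholder signal names.
always @(posedge clk) begin
    if (dut_valid) begin
        if (dut_result !== expected_result) begin
            $display("MISMATCH at %0t: got %h, expected %h", $time, dut_result, expected_result);
            $stop;   // leave the simulation paused so the waveforms can be inspected
        end
    end
end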

If your point is to just send a single operation cycle through the DSP and then check the results, then you can just write a testbench that monitors the output for some ready condition and prints the results to the console.

For example, I was working on a CAN controller in an FPGA, and to make testing easier I wrote a simple CAN packet parser inside the testbench that would decode the packets on the bus and print them out to the console. This essentially gave it the functionality of the serial decode feature on a scope, yet it could be programmed for any protocol, not just some list of supported protocols.

Testbenches are there to let you build your own tools for exercising and verifying the DUT. This is why simulators don't provide such tools out of the box: they can't possibly cover all use cases with prebuilt tools, so instead they give you a tool for building your own tools that debug your code exactly the way you like.
 
The following users thanked this post: Someone, Bassman59

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14465
  • Country: fr
Re: Hitting a breakpoint after n times!
« Reply #5 on: July 13, 2021, 04:10:54 pm »
An FPGA is not an ARM CPU. They are entirely different things, so they need entirely different debugging methods.
This is an excellent use case for testbenches.

Yep, that was my point too.

I'll add something though. One feature that can be very useful - and that not all simulators have - is the possibility to start saving simulation data only after a certain 'trigger' - that ideally you can define in the testbench HDL. Many simulators will just save simulation data from start to end, and all you can do is define an end time (and if you're lucky, a start time to start saving data.) But if you don't know in advance when the event of interest will happen, and your design is large, it can yield gigantic files.

Haven't used Modelsim enough to know, but if you can have it save data using 'triggers', that would be very useful. Ideally, it would work a bit like a logic analyzer: the simulator would keep the past simulation data for a given amount of time in a circular buffer, and then start saving this buffer, and everything after it, to file when a certain event is triggered. Hope I'm clear. If this is possible with Modelsim, that'd be nice, and if someone has a link where it's explained, even better!
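For what it's worth, the "start saving on an event" half of this can be approximated from a plain Verilog testbench with the standard VCD dump tasks; it gives no pre-trigger history, though. A rough sketch (tb_top and trigger_condition are placeholder names):

Code: [Select]
// Crude sketch: suppress VCD dumping until a trigger condition is seen.
// This only covers the "start saving on an event" half; it cannot recover
// pre-trigger history the way a logic analyzer does.  tb_top and
// trigger_condition are placeholder names.
initial begin
    $dumpfile("capture.vcd");
    $dumpvars(0, tb_top);   // register the signals to dump...
    $dumpoff;               // ...but keep dumping disabled for now
    @(posedge trigger_condition);
    $dumpon;                // start writing simulation data from this point on
end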
« Last Edit: July 13, 2021, 04:12:33 pm by SiliconWizard »
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Re: Hitting a breakpoint after n times!
« Reply #6 on: July 13, 2021, 04:16:35 pm »
You can do pretty much anything in a ModelSim testbench source file that you can do in a C program.

I've even made ModelSim generate .BMP picture files, read and save ASCII database files to and from IO ports, as well as print data to and read data from the transcript console.

A lot of this can be done in one or two lines of code, though I like to do things the extended, hard way.
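For example, reading ASCII test vectors from a file and echoing them to the transcript takes only a few lines with the standard system tasks; a rough sketch (the file name and vector sizes are made up):

Code: [Select]
// Sketch: load ASCII hex test vectors from a file and echo a few of them to the
// transcript.  The file name and the vector width/depth are made-up examples.
reg [31:0] vectors [0:255];
integer    i;
initial begin
    $readmemh("test_vectors.txt", vectors);          // read ASCII hex data from disk
    for (i = 0; i < 4; i = i + 1)
        $display("vector[%0d] = %h", i, vectors[i]); // print to the transcript console
end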
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14465
  • Country: fr
Re: Hitting a breakpoint after n times!
« Reply #7 on: July 13, 2021, 04:31:52 pm »
You can do pretty much anything in a ModelSim testbench source file that you can do in a C program.

Of course. My point was specifically about *simulation data* (all the signals of your design), which can yield extremely large files. Not files your testbench would itself create.

If you don't know when a certain event will happen - but it may take millions of clock cycles - and everything is saved to file while simulating, which is what happens by default, that can make pinpointing an issue completely impractical. So being able to save only around a given event, just like a logic analyzer does, would be pretty useful in some cases.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4951
  • Country: si
Re: Hitting a breakpoint after n times!
« Reply #8 on: July 14, 2021, 05:40:16 am »
I've never used such a feature to start saving data at a given point.

The simulator only saves signals that are monitored by something anyway, and I suppose I never worked on designs complex enough to need such long simulations. But I think you can do this in a lot of simulators using TCL script automation, where you might tell the simulator to run for X number of milliseconds, then hook up the signal monitoring through TCL, then run again for X number of microseconds.

I hate TCL as a language, but this programmatic setting-up of the simulation was something that Aldec Active HDL liked to do. It had no way of saving a waveform timing diagram setup as a file; instead it had a button that generates code for setting up the waveform window you currently have open. It feels a bit clunky, but it probably helps with large, complex simulations, since running the script can dynamically set everything up.

Nonetheless, all these FPGA tools feel like they are still stuck in the 80s and 90s, especially compared to IDEs for ARM. Though I suppose the Xilinx IDE is fairly modern-looking.
 

Offline ali_asadzadehTopic starter

  • Super Contributor
  • ***
  • Posts: 1904
  • Country: ca
Re: Hitting a breakpoint after n times!
« Reply #9 on: July 14, 2021, 06:53:30 am »
Sure, I have written a testbench to test the DSP engine, though the problem with this approach is that the testbench may have bugs in it too :palm:
So having more tools in your pocket helps a lot.
ASiDesigner, Stands for Application specific intelligent devices
I'm a Digital Expert from 8-bits to 64-bits
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4951
  • Country: si
Re: Hitting a breakpoint after n times!
« Reply #10 on: July 14, 2021, 07:46:02 am »
Yes, this same approach also helps you catch and debug testbench bugs.

If you set up your testbench to get upset when the results don't match (pause the simulation, print errors, raise an error signal, etc.) then it will catch both testbench bugs and DUT bugs. It's just your job to unwind the simulation logs to see where it went wrong.

If you need extra breakpoints to then step through the operation of your DUT, you can just create "trigger generator" circuits inside the testbench that do something to stop the simulation. These trigger circuits can have parameters stored inside registers, so you can place a value in a register using the debugger, then let it count that value down to 0 and halt the simulation.

If you are debugging something that is a softcore, then the simulator also can't reasonably provide you with debug tools, because it has no idea of your particular softcore's architecture; it could be anything. In that case you would probably build something like a live disassembler in the testbench, letting you see program execution in a neat, readable way. Perhaps also add some debug logic that lets you halt the simulation when the softcore hits a certain address.
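A minimal sketch of that last idea (clk, cpu_pc and break_addr are placeholder names, not from any particular core):

Code: [Select]
// Sketch of an address breakpoint for a softcore, placed in the testbench.
// clk, cpu_pc and break_addr are placeholder names; break_addr is a plain
// register so it can be changed from the simulator (e.g. with 'force')
// without recompiling.
reg [31:0] break_addr = 32'h0000_1234;

always @(posedge clk) begin
    if (cpu_pc == break_addr) begin
        $display("Softcore hit breakpoint address %h at time %0t", cpu_pc, $time);
        $stop;
    end
end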

In any case, FPGA development is not easy; no magical tool will make it as simple as writing C code for an ARM core.
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Re: Hitting a breakpoint after n times!
« Reply #11 on: July 14, 2021, 07:52:40 am »
I believe the joke is that the Vivado ILA has at least two different ways to actually do this, so it's not some massive missing feature of the tools, but a user with no imagination.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14465
  • Country: fr
Re: Hitting a breakpoint after n times!
« Reply #12 on: July 14, 2021, 05:16:40 pm »
I've never used such a feature to start saving data at a given point.

The simulator only saves signals that are monitored by something anyway, and I suppose I never worked on designs complex enough to need such long simulations.

You have not, but some people have. :)
If you're working on a moderately complex soft core, for instance, and have to debug an issue that only manifests itself after millions of cycles executing particular code, it can be a real PITA. Given simulators' limitations, you usually have to either try to localize when the issue happens in time and record data only around that point (if you manage to at least approximately locate it), or make guesses about the potential issue from what happens and "instrument" your HDL accordingly until you can pinpoint it... it can be extremely time-consuming.

But I think you can do this in a lot of simulators using TCL script automation, where you might tell the simulator to run for X number of milliseconds, then hook up the signal monitoring through TCL, then run again for X number of microseconds.

Simulators that support automation, which not all of them do...
But even so. (And I'm sure you can, as I said, and as you mentioned, define start and end times for saving data in Modelsim.) The problem is, if you don't know even approximately WHEN the event of interest happens, it won't be effective. This situation definitely happens.

So as I suggested, a nice feature would be being able to 'trigger' data saving on some signal change (for instance) in your design. Said signal could be built from logic as complex as you want for detecting a particular condition, so the simulator wouldn't itself need to implement a complex triggering system like those found in good logic analyzers.

Maybe Modelsim can actually be automated this way, using a signal change as a trigger, and if so, that would be what I'm after... well, almost, but not quite. As I mentioned, the ideal would be a mechanism similar to the triggers in scopes and logic analyzers: being able to save data for a certain amount of time *before* and *after* a given event. If all you can do is start saving data upon a certain event, then you lose what happened just before the event occurred. Granted, in that case you can always resimulate starting a bit earlier, in several passes. Not as efficient. So, all in all, what I'd like is something similar to what a scope can do with triggering.

If this is at all possible with Modelsim (which I haven't used in ages), then all good - and I'd be curious to know how to set this up. If not, then I think it's dearly missing.
« Last Edit: July 14, 2021, 05:18:50 pm by SiliconWizard »
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4951
  • Country: si
Re: Hitting a breakpoint after n times!
« Reply #13 on: July 15, 2021, 05:45:53 am »
That is a good idea, having scope-like pre/post-trigger waveform capture.

But indeed I have never seen any simulator do that sort of thing (or I never found how to set it up). All of the soft, on-fabric logic analyzers used for quickly debugging FPGAs over JTAG work this way: they wait for an edge on any of the inputs and record a number of pre- and post-trigger samples. Having something similar in simulation would be a nice feature.

I guess the people making the simulation software just shrugged it off, since PCs have lots of memory and disk space anyway. Then again, for a large softcore it's possible that most of the CPU time goes towards simulating such a large pile of logic rather than towards saving to disk (especially with modern SSDs), so not recording the boring part might not actually speed it up all that much. Most of the simple stuff I tended to work on would simulate without any significant waiting for the run to complete on a modern machine. (Unlike the damn compile times for a full FPGA bitstream. Those are so slow they suck the life out of you in terms of test cycle time.)
« Last Edit: July 15, 2021, 05:47:41 am by Berni »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14465
  • Country: fr
Re: Hitting a breakpoint after n times!
« Reply #14 on: July 15, 2021, 06:36:19 pm »
I have little hope that commercial simulators will get that any time soon - it would have been there already. But I do use GHDL quite a bit, and I'm going to suggest this feature.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4951
  • Country: si
Re: Hitting a breakpoint after n times!
« Reply #15 on: July 16, 2021, 05:18:30 am »
I have little hope that commercial simulators will get that any time soon - it would have been there already. But I do use GHDL quite a bit, and I'm going to suggest this feature.

Interesting simulator, this GHDL. They seem to be pretty proud of how fast it is. Any idea how much faster it actually is compared to the usual commercial tools (the ones that come bundled with the vendors' IDEs and such)?
 

Offline emece67

  • Frequent Contributor
  • **
  • !
  • Posts: 614
  • Country: 00
Re: Hitting a breakpoint after n times!
« Reply #16 on: July 16, 2021, 12:29:15 pm »
.
« Last Edit: August 19, 2022, 04:32:16 pm by emece67 »
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: Hitting a breakpoint after n times!
« Reply #17 on: July 16, 2021, 06:24:12 pm »
I have little hope that commercial simulators will get that any time soon - it would have been there already. But I do use GHDL quite a bit, and I'm going to suggest this feature.

Interesting simulator, this GHDL. They seem to be pretty proud of how fast it is. Any idea how much faster it actually is compared to the usual commercial tools (the ones that come bundled with the vendors' IDEs and such)?

I honestly cannot tell whether it's "faster" than the free ModelSim ME as provided by MicroSemi. I suppose that if a design was large enough to trigger the intentional slow-down in the free ModelSim then yes, ghdl would be obviously faster.

There are four reasons why I don't use ghdl:

1. It doesn't support mixed-language simulation. For whatever reason, many vendors do not provide VHDL models of their parts, only Verilog, so without mixed-language support I can't verify a design against a vendor model.

2. FPGA primitive models are still mostly written in ancient dialects of VHDL -- mostly VHDL 93 -- and ghdl doesn't allow mixing code analyzed as VHDL-2008 with previous versions. Synplify and Vivado have good support for VHDL-2008, and the features added to the language in that revision are quite useful, so I write all of my code to VHDL-2008 standards. But this means I can't use the vendor-supplied primitives models with ghdl.

3. ghdl would really benefit from a nice GUI-based project manager. Makefiles "work," but that's a lot of manual fiddling. ModelSim's, er, uh, model (or paradigm) works well: sources are listed, "simulation configurations" are listed, and it just works. I actually started to write a macOS program to do this sort of management but I got bogged down in all of that and I realized I had better things to do.

4. The ghdl maintainers have tagged a release 1.0.0 but do not provide builds, and for the life of me I can't get the thing to build.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Re: Hitting a breakpoint after n times!
« Reply #18 on: July 16, 2021, 07:59:24 pm »

I honestly cannot tell whether it's "faster" than the free ModelSim ME as provided by MicroSemi. I suppose that if a design was large enough to trigger the intentional slow-down in the free ModelSim then yes, ghdl would be obviously faster.

Funny, ModelSim's intentional slow-down only seems to be triggered when I launch a sim from Quartus.

There are 2 easy bypasses.
1.  After I launch a sim from Quartus into ModelSim, I click stop.  Then 'restart -all' and 'run -all' in the transcript (or the onscreen buttons) and ModelSim will run my sim at full speed.  Something like 5x faster, though I guess this 5x figure may depend on design complexity.

2.  I usually just don't bother doing my ModelSim work through Quartus.  In fact, I do all my development work in ModelSim from scratch.  I learned the transcript commands and have my own 'do xxx.do' script files to compile and run my sims.  This method never seems to trigger ModelSim's slow-down, and re-compiles are usually near instant so long as you just 'vlog' when necessary vs. the occasional need to re-run 'vsim', which is all usually set up in my script files anyway.  As for the Altera libs, after running a sim once from Quartus, I just look at the -L library names added on the 'vsim' command line and add those to my compile script's 'vsim' line, and I get access to Altera's megafunctions.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14465
  • Country: fr
Re: Hitting a breakpoint after n times!
« Reply #19 on: July 16, 2021, 08:52:59 pm »
I have little hope that commercial simulators will get that any time soon - it would have been there already. But I do use GHDL quite a bit, and I'm going to suggest this feature.

Interesting simulator, this GHDL. They seem to be pretty proud of how fast it is. Any idea how much faster it actually is compared to the usual commercial tools (the ones that come bundled with the vendors' IDEs and such)?

I honestly cannot tell whether it's "faster" than the free ModelSim ME as provided by MicroSemi. I suppose that if a design was large enough to trigger the intentional slow-down in the free ModelSim then yes, ghdl would be obviously faster.

GHDL supports two modes. When it's said to be faster than most other simulators, it's for one of its modes: it can build an executable from a given VHDL design, and simulate it by running the executable. As it creates optimized code for a given simulation, it's much faster than conventional simulation. Another benefit of this mode is that, as it creates an executable using an ad-hoc compiler based on GCC, you can also perform code coverage analysis using 'gcov'.

GHDL also supports another mode: mcode, which simulates without building an executable first. It's significantly slower, and usually even slower than many commercial simulators which are multi-threaded. GHDL isn't.

I still mostly use the mcode mode because it's the most convenient one. But if I ever need to simulate very large designs for a very large number of cycles, then I can use the other mode.
The mcode mode is also much easier and faster to build. (If you want to build GHDL yourself.)

There are four reasons why I don't use ghdl:

1. It doesn't support mixed-language simulation. For whatever reason, many vendors do not provide VHDL models of their parts, only Verilog, so without mixed-language support I can't verify a design against a vendor model.

2. FPGA primitive models are still mostly written in ancient dialects of VHDL -- mostly VHDL 93 -- and ghdl doesn't allow mixing code analyzed as VHDL-2008 with previous versions. Synplify and Vivado have good support for VHDL-2008, and the features added to the language in that revision are quite useful, so I write all of my code to VHDL-2008 standards. But this means I can't use the vendor-supplied primitives models with ghdl.

Obviously, if you need Verilog or SV support, GHDL is not for you.

As to mixing VHDL standards, I don't think it's much of a problem. True that you can't mix standard versions, but, normally, if you enable VHDL-2008 support in GHDL, it shouldn't have issues compiling code written for older VHDL revisions. I have never run into an issue with this. If you have, could you give me an example of, for instance, VHDL-93 code that wouldn't pass using VHDL-2008 support?

3. ghdl would really benefit from a nice GUI-based project manager. Makefiles "work," but that's a lot of manual fiddling. ModelSim's, er, uh, model (or paradigm) works well: sources are listed, "simulation configurations" are listed, and it just works. I actually started to write a macOS program to do this sort of management but I got bogged down in all of that and I realized I had better things to do.

I personally don't care about having a GUI for this. But, if I'm not mistaken, Modelsim is also command-line based under the hood, and some people only use it on the command-line?
Point is, writing a GUI for GHDL shouldn't be much of a problem. Not sure there is enough interest for that out there, though. Once you've written a couple of Makefiles (or scripts; you don't even need a full-blown Makefile for GHDL), setting up a new simulation is just a matter of copying one and making a few modifications. I can usually do that faster than manually adding files in a GUI. YMMV of course.

4. The ghdl maintainers have tagged a release 1.0.0 but do not provide builds, and for the life of me I can't get the thing to build.

I'm not sure why. Version 0.37 was released with binaries for most supported platforms. (0.37 is still perfectly usable, btw.) No binaries for v. 1.0, and some binaries for the nightly version, but no Windows binaries so far.

Building it is no issue though on Linux, or using MSYS2 on Windows. I routinely build the latest revision on both.
If that can help, here is the configuration I use for building it on MSYS2:

Code: [Select]
./configure --prefix=<directory where you want it installed> LDFLAGS=-static --enable-libghdl --enable-synth

and then:

Code: [Select]
make GNATMAKE="gnatmake -j2"
make install

On MSYS2, you of course need the dev tools installed, plus GCC and Ada support for GCC.
« Last Edit: July 16, 2021, 08:57:12 pm by SiliconWizard »
 

Offline ali_asadzadehTopic starter

  • Super Contributor
  • ***
  • Posts: 1904
  • Country: ca
Re: Hitting a breakpoint after n times!
« Reply #20 on: July 17, 2021, 07:14:02 am »
Quote
Hi,

QuestaSim (and I suppose that ModelSim too) allows you to use conditional breakpoints. In the condition you can use signals in your design (not variables, though). In any case this is something I very rarely used.

On complex designs what I've used are highly instrumented testbenches capable of detecting when something goes wrong. The testbench is able to write to a file the time of the offending event(s) and the kind of problem(s) detected. Subsequent runs of the simulation (maybe aided with some TCL automation) uses such report to launch the adequate testbench vector sets to get more insight on such issues. e.g.: your testbench sequentially applies many vector sets to the DUT, and when applying the "exercise_cache" vector set, detects that at time t0 from vector set start there's some memory inconsistency; thus the 2nd run of the simulation reads such issue report and launches a simulation of the DUT saving all waveforms related to the memory system and using only the same "exercise_cache" vector set that fired the error on the 1st run. In fact many times the TCL automation is replaced by visual inspection of the issue report and manual invocation of the appropriate simulation vector set.

The main goal of this approach is saving simulation time. The 1st simulation, not saving waveforms, or a small number of waveforms (for example, saving only the signals that the testbench uses to indicate when each vector set starts/ends/passes/fails) to disk uses to run (much) faster than the other simulations saving lots of signals. When the simulation time is measured in hours or days this is important. Also, when this kind of simulation is also used by designers to debug or improve the design (not only as a compulsory stage of a certification process), a faster simulation that ends with message "Aggregated results (checks/passed/failed): X/X/0" from the testbench is really useful, preferable and even satisfying instead of a longer simulation that, at completion, requires you to navigate through a high number of waveforms over millions of clock cycles.

Regards.


p.s. Is it me, or is TCL a really buggy and detestable language/environment?
Nice tips! Do you have a very simple open-source design that demonstrates these ideas?
ASiDesigner, Stands for Application specific intelligent devices
I'm a Digital Expert from 8-bits to 64-bits
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Re: Hitting a breakpoint after n times!
« Reply #21 on: July 17, 2021, 11:15:17 am »
You do realize that you can run sims without any waveforms in the waveform window.  And in your testbench code, you can log to disk only the important results and the key moments you care about.
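Something along these lines, for example; a rough sketch where the file name and the logged signals (clk, result_valid, result) are placeholders:

Code: [Select]
// Sketch: write only the key results to a log file instead of recording waveforms.
// The file name and the signal names are placeholders.
integer log_fd;
initial  log_fd = $fopen("results.log", "w");

always @(posedge clk)
    if (result_valid)
        $fdisplay(log_fd, "%0t : result = %h", $time, result);  // one line per key event

// call $fclose(log_fd) at the end of the run if you want the file closed explicitly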

I also think you can turn waveform logging on and off, or log your own waveforms, at will.

I've done this in my ellipse testbench, logging only the output coordinates as an example.
It also generates a .bmp picture so you can see the results.
Basically, you would just be running something like my testbench code, but without any waveform traces enabled.
 

Offline emece67

  • Frequent Contributor
  • **
  • !
  • Posts: 614
  • Country: 00
Re: Hitting a breakpoint after n times!
« Reply #22 on: July 18, 2021, 01:38:47 pm »
.
« Last Edit: August 19, 2022, 04:32:26 pm by emece67 »
 
The following users thanked this post: ali_asadzadeh

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: Hitting a breakpoint after n times!
« Reply #23 on: July 19, 2021, 05:10:07 am »
2.  I usually just don't bother doing my ModelSim work through Quartus.  In fact, I do all my development work in ModelSim from scratch.

I started using ModelSim well before the FPGA vendors started integrating it into their design-tool flows. The standalone environment is so much easier and more flexible. The IDEs seem to want to create useless testbenches, and I could never figure out how to just have them point to mine. Also, there's no way to have the IDE start a simulation of a lower-level entity's testbench, at least not in any reasonable way.

And once I figured out how to bypass MicroSemi's Libero, I use Synplify Pro and ModelSim and the fitter (designer) all standalone. It's almost like the FPGA vendors don't know how professional engineers work ...

Quote
I learned the transcript commands and have my own 'do xxx.do' script files to compile and run my sims.  This method never seems to trigger ModelSim's slow-down, and re-compiles are usually near instant so long as you just 'vlog' when necessary vs. the occasional need to re-run 'vsim', which is all usually set up in my script files anyway.  As for the Altera libs, after running a sim once from Quartus, I just look at the -L library names added on the 'vsim' command line and add those to my compile script's 'vsim' line, and I get access to Altera's megafunctions.

All of the Altera libraries should already be pulled into your project because they're listed in the default modelsim.ini file. As for scripts, I find it easier to simply set up "simulation configurations" in ModelSim. I'm too lazy to work out the command line.
« Last Edit: July 19, 2021, 05:15:24 am by Bassman59 »
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: Hitting a breakpoint after n times!
« Reply #24 on: July 19, 2021, 05:31:14 am »
I honestly cannot tell whether it's "faster" than the free ModelSim ME as provided by MicroSemi. I suppose that if a design was large enough to trigger the intentional slow-down in the free ModelSim then yes, ghdl would be obviously faster.

GHDL supports two modes. When it's said to be faster than most other simulators, it's for one of its modes: it can build an executable from a given VHDL design, and simulate it by running the executable. As it creates optimized code for a given simulation, it's much faster than conventional simulation. Another benefit of this mode is that, as it creates an executable using an ad-hoc compiler based on GCC, you can also perform code coverage analysis using 'gcov'.

GHDL also supports another mode: mcode, which simulates without building an executable first. It's significantly slower, and usually even slower than many commercial simulators which are multi-threaded. GHDL isn't.

I still mostly use the mcode mode because it's the most convenient one. But if I ever need to simulate very large designs for a very large number of cycles, then I can use the other mode.
The mcode mode is also much easier and faster to build. (If you want to build GHDL yourself.)

For version 0.36, they published macOS binaries for both mcode and llvm. But for 0.37 they provided only mcode. So that's what I've been using.

Oh, they also only had those versions (0.36 and 0.37) as llvm builds for 64-bit Windows; for 32-bit Windows, only mcode. I don't understand; surely it's a dependency thing.

Quote
As to mixing VHDL standards, I don't think it's much of a problem. True that you can't mix standard versions, but, normally, if you enable VHDL-2008 support in GHDL, it shouldn't have issues compiling code written for older VHDL revisions. I have never run into an issue with this. If you have, could you give me an example of, for instance, VHDL-93 code that wouldn't pass using VHDL-2008 support?

Oh, just try building the Xilinx ISE libraries with ghdl in -2008 mode. What makes no sense to me is that analysis is really the language-version-dependent part, not elaboration, unless there are obscure linkage/binding things that are so different in 2008 from previous that they can't manage it.

3. ghdl would really benefit from a nice GUI-based project manager. Makefiles "work," but that's a lot of manual fiddling. ModelSim's, er, uh, model (or paradigm) works well: sources are listed, "simulation configurations" are listed, and it just works. I actually started to write a macOS program to do this sort of management but I got bogged down in all of that and I realized I had better things to do.

I personally don't care about having a GUI for this. But, if I'm not mistaken, Modelsim is also command-line based under the hood, and some people only use it on the command-line?
Point is, writing a GUI for GHDL shouldn't be much of a problem. Not sure there is enough interest for that out there, though. Once you've written a couple of Makefiles (or scripts; you don't even need a full-blown Makefile for GHDL), setting up a new simulation is just a matter of copying one and making a few modifications. I can usually do that faster than manually adding files in a GUI. YMMV of course.

Being a daily driver of ModelSim since, oh, good god, the Clinton administration, I know all about it, especially that it's command-line "under the hood." And I take advantage of that. But when your projects have a hundred source files, and the projects include testbenches and simulation configurations for all of the sub-entities, then having a GUI that shows compile status, compilation order and all of that is, you know, really nice. And the simulation configurations are quite handy. An entity or a configuration has different top-level generics? Cool, just set up different simulation configurations with the different generics and off you go.

All that said, the ModelSim GUI hasn't really changed since the Clinton Administration (ok, they finally added "print waveform" at some point) and it's really due for a refresh.

Quote
4. The ghdl maintainers have tagged a release 1.0.0 but do not provide builds, and for the life of me I can't get the thing to build.

I'm not sure why. Version 0.37 was released with binaries for most supported plaforms. (0.37 is still perfectly usable, btw.) No binaries for v. 1.0. And some binaries for the nightly version, but no Windows binaries so far for it.

Building it is no issue though on Linux, or using MSYS2 on Windows. I routinely build the latest revision on both.
If that can help, here is the configuration I use for building it on MSYS2:

Code: [Select]
./configure --prefix=<directory where you want it installed> LDFLAGS=-static --enable-libghdl --enable-synth

and then:

Code: [Select]
make GNATMAKE="gnatmake -j2"
make install

On MSYS2, you need to have of course dev tools installed, GCC and ADA support for GCC.

I should say that I haven't tried to build it on macOS in quite some time. Maybe it's easier now. But, if it was easy, why don't they do it?
 

