### Author Topic: How is it decided what to "bin" in a design for functional verification?


#### matrixofdynamism

• Regular Contributor
• Posts: 194
##### How is it decided what to "bin" in a design for functional verification?
« on: June 23, 2024, 12:07:26 am »
I am trying to learn about a new topic called "functional verification". I have been writing directed tests for many years and have never had to use constrained-random, stimulus-based testbenches. However, I am trying to familiarize myself with this topic.

So what I understand right now is that there is a concept of a "bin", which is tied to something of interest in the design. If that signal takes certain values that fall into the bin, a counter intrinsic to that bin is incremented, and this event is called the bin being "hit". At the end of the test, we can see which bins were not hit and work out whether the stimulus is not being generated correctly or something else is preventing them from being hit, e.g. a design bug or a bug in the testbench.
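To check my understanding, here is a toy Python model of the bin/hit mechanics (hypothetical names, not real SystemVerilog covergroup syntax — just the counting idea):

```python
from collections import Counter

# Hypothetical illustration: a coverpoint with value bins for an
# 8-bit signal, counting how often each bin is hit during a run.
class Coverpoint:
    def __init__(self, bins):
        # bins: name -> predicate deciding whether a sampled value falls in it
        self.bins = bins
        self.hits = Counter()

    def sample(self, value):
        for name, predicate in self.bins.items():
            if predicate(value):
                self.hits[name] += 1

    def unhit(self):
        # Bins never hit point at missing stimulus or a blocked path
        return [name for name in self.bins if self.hits[name] == 0]

cp = Coverpoint({
    "zero": lambda v: v == 0,
    "max":  lambda v: v == 255,
    "mid":  lambda v: 0 < v < 255,
})

for v in [0, 17, 200, 255, 0]:
    cp.sample(v)

print(cp.hits["zero"])  # 2
print(cp.unhit())       # []
```

If `unhit()` came back non-empty at the end of a run, that would be the cue to look at the stimulus or for a bug.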

What I just described is called point coverage. There is another concept called cross coverage. Here, we are trying to hit more than one point at once, i.e. one thing of interest happens at the same time as another thing of interest.

Now I will give an example of what I understand. A counter has enable and clear signals. Point coverage would cover that the enable signal and the clear signal each took the required values '0' and '1' during the simulation. Point coverage for the output could be that the output took all possible values, or maybe just the extreme values, min and max. Cross coverage would then be asserting clear and enable at the same time, or asserting clear and enable for every possible counter output, or for specific values of the counter output?
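The cross-coverage part of my counter example, again as a toy Python model rather than real covergroup `cross` syntax: every (enable, clear) combination is a cross bin, and a stimulus that never drives a combination leaves its bin unhit.

```python
from itertools import product
from collections import Counter

# Hypothetical sketch of cross coverage for a counter with enable and
# clear inputs: every (enable, clear) combination must be observed,
# i.e. a 2x2 cross of the two coverpoints.
cross_hits = Counter()

def sample(enable, clear):
    cross_hits[(enable, clear)] += 1

# Stimulus that never asserts clear while enable is high:
for enable, clear in [(0, 0), (1, 0), (0, 1), (1, 0)]:
    sample(enable, clear)

all_cross_bins = set(product([0, 1], repeat=2))
missing = all_cross_bins - set(cross_hits)
print(sorted(missing))  # [(1, 1)]
```

The unhit (1, 1) bin is exactly the "clear while enabled" case the stimulus never produced.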

Is my basic understanding correct?

#### Rainwater

• Regular Contributor
• Posts: 57
##### Re: How is it decided what to "bin" in a design for functional verification?
« Reply #1 on: June 23, 2024, 01:43:07 am »
Is that anything like formal verification, where you assume some inputs and assert some output conditions?
That's what I've started learning. It sounds like what you are trying, but without writing test benches.

You write the assumptions and assertions in your HDL, then run a formal solver, which tries very hard to break your assertions. If it can, you're given a trace file showing how it did it.
It's great for when you optimize a design and want to check whether it got broken.
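For intuition, here is a brute-force Python version of what the solver is doing: exhaustively exploring every reachable state of a hypothetical 4-bit counter under all inputs and checking an assertion. (A real tool works symbolically; this walk is just the idea, and the names are made up.)

```python
# Toy model of exhaustive state exploration: instead of simulating
# selected stimulus, visit every reachable state of a 4-bit counter
# under every input combination, and check an assertion on all of them.

def next_state(count, enable, clear):
    if clear:                # synchronous clear wins over enable
        return 0
    if enable:
        return (count + 1) % 16
    return count

reachable = {0}              # reset state
frontier = [0]
trace_to = {0: []}           # input sequence reaching each state,
                             # analogous to the solver's trace file

while frontier:
    count = frontier.pop()
    for enable in (0, 1):
        for clear in (0, 1):
            nxt = next_state(count, enable, clear)
            if nxt not in reachable:
                reachable.add(nxt)
                trace_to[nxt] = trace_to[count] + [(enable, clear)]
                frontier.append(nxt)

# the "assert": the counter never leaves its 4-bit range
assert all(0 <= s < 16 for s in reachable)
print(len(reachable))  # 16
```

If the assertion failed for some state, `trace_to` would give the input sequence that reaches it, which is what the trace file from the solver shows you.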
"You can't do that" - challenge accepted

#### matrixofdynamism

• Regular Contributor
• Posts: 194
##### Re: How is it decided what to "bin" in a design for functional verification?
« Reply #2 on: June 23, 2024, 02:41:23 am »
Formal verification is different. With (what we usually call) formal verification, we are basically describing mathematical relations between different signals, and events related to them, over several clock cycles. Then we let the tool compare these properties, assumptions, and assertions with the RTL code to see under what circumstances the RTL code cannot satisfy them. In this way, we do not need to write tests or create testbenches.

Formal cannot be used with all designs. Also, the tools that can do this type of formal verification are expensive.

#### Rainwater

• Regular Contributor
• Posts: 57
##### Re: How is it decided what to "bin" in a design for functional verification?
« Reply #3 on: June 24, 2024, 02:28:27 am »
After reading up, I think this is called "branch coverage" in C/C++.
In C the idea is to set up a test bench which exercises each line of code, but not every possible condition the software could be in. At the lower levels this is easy, but once you have a few nested functions the exercise gets much more difficult, as triggering low-level errors and tracking their path back up to the higher functions can reveal "dead stop" situations from which the application cannot proceed, either because the error isn't handled or because the app doesn't know what to do, so the procedure fails. Branch coverage lets you measure how much of your code is being tested by the test bench.
I have used this style of debugging to vet important functions and methods; it gets super tricky when dealing with virtual inheritance, as a class can take multiple and recursive paths.
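The mechanics can be sketched by hand (what gcc's `--coverage`/gcov instrumentation does for you automatically) — here in Python with a made-up `branch()` helper that records which side of each branch a test actually took:

```python
# Hand-rolled sketch of branch coverage: register both outcomes of
# every branch, then count which outcome each test run actually took.
branch_hits = {}

def branch(branch_id, condition):
    # Make sure both sides exist, so untaken sides show up afterwards
    branch_hits.setdefault((branch_id, True), 0)
    branch_hits.setdefault((branch_id, False), 0)
    branch_hits[(branch_id, bool(condition))] += 1
    return condition

def clamp(x, lo, hi):
    if branch("x<lo", x < lo):
        return lo
    if branch("x>hi", x > hi):
        return hi
    return x

# A test that never exercises the upper clamp:
for x in [-5, 0, 3]:
    clamp(x, 0, 10)

untaken = [b for b, n in branch_hits.items() if n == 0]
print(untaken)  # [('x>hi', True)]
```

The untaken branch side is the measurement: the test bench never drove `x` above `hi`, the same way an unhit bin flags missing stimulus in HDL coverage.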

I think the same concept would be an invaluable tool in HDL, but after a lot of reading, it seems that functional verification has started being superseded by formal verification.
The article here goes into some detail about when each is better than the other, but concludes that formal is best when properly implemented, and functional verification is best when you don't have the resources to use formal:

https://theartofverification.com/formal-verification-vs-functional-verification-a-tale-of-two-approaches/
> the tools that can do this type of formal verification are expensive.
Yosys is free.

Simple things are easy, until they're not. I've been verifying an ALU for academic purposes, using it to exercise every part of the Verilog language, and sby.
Then using formal verification to double-check common methods and solutions to timing problems, to ensure I don't break stuff without realizing it.
Then swapping assumptions to assertions and verifying that I am using my own modules correctly.
It's been a very involved experience.
« Last Edit: June 24, 2024, 02:35:11 am by Rainwater »
"You can't do that" - challenge accepted

#### laugensalm

• Regular Contributor
• Posts: 123
##### Re: How is it decided what to "bin" in a design for functional verification?
« Reply #4 on: June 25, 2024, 12:42:58 pm »
Formal verification via SymbiYosys is indeed free; the training maybe not so much.

The bin concept is somewhat trivial for static coverage, where a number of expected states must be reached, and in many deterministic cases this doesn't turn out too complex for most FSMs. However, it can get complex once states are passed down pipelines, such as in a CPU core; then static analysis will not suffice, the coverage tests take on more of a dynamic character, and tools will not necessarily detect the cross dependencies correctly.
I'm not really convinced that it will be possible to go 'formal' with all sorts of designs (take a complex LFSR-feedback RNG), but much can be done at a higher level using special 'decorators' for pipelines, once you've already decided whom you trust (modelling, synthesis, simulator, ...).

The question is always a bit about what level you want to do your coverage assertions at. Some folks use classical profiling to coverage-test their HDL simulation executable (to spot unreached code); some insert the histogram logic into their HDL source or intermediate representation, which could be regarded as too intrusive...
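The "static" FSM case can be sketched in a few lines of Python (a hypothetical 3-state handshake FSM, purely for illustration): every state and every transition is a bin, and a random walk records which were reached.

```python
import random

# Toy FSM coverage: states and transitions are the bins; a random
# walk over the FSM records which bins a run actually reached.
STATES = {"IDLE", "BUSY", "DONE"}
TRANSITIONS = {
    ("IDLE", "start"):  "BUSY",
    ("BUSY", "finish"): "DONE",
    ("BUSY", "abort"):  "IDLE",
    ("DONE", "ack"):    "IDLE",
}

random.seed(0)
visited_states = set()
visited_edges = set()
state = "IDLE"
for _ in range(100):
    visited_states.add(state)
    choices = [e for (s, e) in TRANSITIONS if s == state]
    event = random.choice(choices)
    visited_edges.add((state, event))
    state = TRANSITIONS[(state, event)]

print(sorted(STATES - visited_states))           # states never reached
print(sorted(set(TRANSITIONS) - visited_edges))  # transitions never taken
```

The pipeline case is exactly where this breaks down: the interesting "state" is spread across several stages, so the bins of interest are no longer single FSM states but dynamic combinations of them.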
