I've looked at and toyed with OSVVM; it's a well-thought-out framework, but ultimately I've failed to see any benefit compared to writing testbenches in plain VHDL. I can't speak for the other two.
The main pain with all these kludgy methodologies is indeed that most developers still draw a sharp line between hardware and software and are not taught the bridging concepts that already exist.
The same goes for verification: if modern tools such as Python can create self-verifying testbenches, the distinction between developer and verification team is kinda void (ok, kinda not, in the ASIC domain). Not everyone likes management to hear about that :-)
Even though it is Turing complete, VHDL lacks some procedural features to keep complex simulations maintainable.
Could you give some examples of the procedural features that you had in mind?
Ok, here's a few. I admit that I'm spoiled by Python, and you might argue this is not needed/can be done differently:
* Lack of `yield`-like mechanisms to create readable coroutines
* `generate` being limited to the architecture level
* Lack of built-in language constructs to separate synthesis-specific from simulation-specific statements (I'm aware of the vendor-specific pragmas). This is also related to the generator concept above, i.e. the pythonic `yield` combined with decorators or factory classes.
* No way to procedurally create interfaces inline
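To make the first bullet concrete, here's a minimal sketch in plain Python (no framework; the scheduler loop and the `bus`/`log` names are made up for the example) of how `yield`-based coroutines keep a stimulus and a checker readable while a tiny loop interleaves them cycle by cycle:

```python
# Minimal sketch of the `yield` coroutine style: each "process" hands
# control back to a trivial scheduler once per simulated cycle.

def stimulus(bus):
    """Drive three values onto a shared 'bus' dict, one per cycle."""
    for value in (0x10, 0x20, 0x30):
        bus["data"] = value
        yield  # hand control back until the next cycle

def checker(bus, log):
    """Record what appears on the bus each cycle."""
    while True:
        log.append(bus["data"])
        yield

def run(cycles):
    """Interleave all processes for a fixed number of cycles."""
    bus, log = {"data": None}, []
    procs = [stimulus(bus), checker(bus, log)]
    for _ in range(cycles):
        for p in list(procs):
            try:
                next(p)
            except StopIteration:
                procs.remove(p)  # process finished; drop it
    return log

print(run(3))  # the checker sees each driven value, in order
```

Both processes read as straight-line sequential code, yet they run "in parallel"; that's the whole point of the mechanism.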
I really don't want to ignite any fires, but my personal dogma is just: minimal coding and keep things in places where they belong.
And this is all based on the rule of thumb that code should be maximally portable, i.e. *one description in one place* translates to several targets (be that some LLVM IR, yosys RTL, V*), where one target counts as the golden reference to co-simulate against.
Isn't there a risk that some of those are the moral equivalent of using SQL plus a database where a synchronous RPC is what's needed? (Yes, I've seen something very close to that, remarkably. Clearly that developer only had a hammer in his toolbox!)
That analogy is a bit far-fetched, honestly, and doesn't apply, because those mechanisms make code (c)leaner rather than introducing overhead, once you've wrapped your mind around them. But yes, one has to get familiar with a few modern language paradigms that are more common in the SW world but can be applied to HW generation as well. I guess I'd have to elaborate on those, but I don't want to hijack the thread any longer with too much pythonism; there's a load of examples up already (https://github.com/hackfin/cyrite.howto) for those not hating Jupyter notebooks.
The SQL/RPC analogy is stretching it, but isn't that also true of your "lack of `yield`-like mechanisms to create readable coroutines"?
* When it comes to variants of DUTs, whether it's interfaces or something else, test(bench) parameterization is the way to go. VUnit advocates that generics/parameters are passed to the top-level testbench such that relevant combinations can be generated and applied by the Python test runner. Every such parameterization of a test(bench) is called a VUnit configuration. This is not to be confused with VHDL configurations. They can also be used but they don't scale well as the number of combinations grows (see https://vunit.github.io/blog/2023_08_26_vhdl_configurations.html).
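To illustrate what such a configuration expansion amounts to (this is not the actual VUnit API, just the enumeration a Python test runner performs; `run_testbench` is a hypothetical stand-in for launching the simulator with top-level generics):

```python
# Sketch: expand candidate generic values into one named configuration
# per combination, the way a Python runner parameterizes a testbench.

from itertools import product

def make_configs(generics):
    """Expand {generic_name: [values]} into (label, params) pairs,
    one per combination."""
    names = sorted(generics)
    configs = []
    for values in product(*(generics[n] for n in names)):
        params = dict(zip(names, values))
        label = ",".join(f"{k}={v}" for k, v in sorted(params.items()))
        configs.append((label, params))
    return configs

def run_testbench(params):
    # Placeholder only: a real runner (e.g. VUnit) would pass these
    # as generics to the top-level testbench and launch the simulator.
    return f"PASS width={params['width']} depth={params['depth']}"

for label, params in make_configs({"width": [8, 16], "depth": [32, 64]}):
    print(label, "->", run_testbench(params))
```

Four combinations in, four named test runs out; the scaling argument against VHDL configurations is that each of these would otherwise be a hand-written configuration declaration.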
I don't think so. It's a rather fundamental language construct. In fact, it's the key to a number of elegant solutions with the advantage of not having to invent (yet) another verification language or extension.
For example, it enables one to build thin co-simulation and verification layers at various levels, be it a co-sim that speaks VPI to a state-of-the-art simulator to do lock-step verification, or a high-level pipeline inference that makes sure the inferred logic calculates the same as the Python 'native' model written by a pure SW mind. This is heading towards 'inline verification', but without the `#ifdef`-like mess-ups. Last but not least, you can get your own code coverage-proven, i.e. show that there is no unvisited code left in your design. You won't get there that easily with transaction-layer models or HDL enhanced by RPC mechanisms.
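A stripped-down illustration of the lock-step idea: step a Python "golden" model and the DUT one transaction at a time and compare on every step. Here `dut_step` is a hypothetical stand-in for what a VPI-based co-sim layer would read back from the real simulator:

```python
# Sketch of lock-step verification: compare reference model and DUT
# output transaction by transaction, recording any divergence.

def golden_model(samples):
    """Reference: a simple running accumulator, written as a generator."""
    acc = 0
    for s in samples:
        acc += s
        yield acc

def dut_step(samples):
    """Stand-in for the simulated DUT; in a real flow each next()
    would advance the simulator and read the output port."""
    acc = 0
    for s in samples:
        acc += s
        yield acc

def lockstep(samples):
    """Run both sides in lock-step; return a list of mismatches."""
    mismatches = []
    pairs = zip(golden_model(samples), dut_step(samples))
    for i, (ref, dut) in enumerate(pairs):
        if ref != dut:
            mismatches.append((i, ref, dut))
    return mismatches

print(lockstep([1, 2, 3, 4]))  # empty list: models agree on every step
```

The point is that both sides are just generators, so the comparison loop neither knows nor cares whether a side is a Python model or a live simulator behind VPI.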
There is one specific thing about VHDL that bothers me the most since I have used C and SV: it does not have a `#if` that can be used to compile code based on conditions.
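For comparison, this is how Python-based generation flows sidestep the missing `#if`: an ordinary `if` at elaboration time decides what ends up in the emitted design (a sketch only; the generated VHDL strings and the `build_counter` name are purely illustrative):

```python
# Sketch: conditional inclusion of simulation-only statements at
# generation time, instead of a preprocessor `#if` in the HDL source.

def build_counter(width, with_sim_checks):
    """Return the lines of a (toy) generated design description."""
    lines = [f"signal count : unsigned({width - 1} downto 0);"]
    lines.append("count <= count + 1;")
    if with_sim_checks:
        # Only emitted for simulation builds; the synthesis output
        # never contains this statement, so no pragmas are needed.
        lines.append('assert count < 255 report "about to wrap";')
    return lines

print(len(build_counter(8, with_sim_checks=True)))   # 3 lines
print(len(build_counter(8, with_sim_checks=False)))  # 2 lines
```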
..
That sounds close to wanting to write Fortran in an HDL. Too close for my comfort.
Parallelism is inherent in any HDL, and can be used when writing tests. Use what the language provides, to its fullest extent. Only when it can be demonstrated that the language is insufficient should such things be considered.
Thus I don't see the need to introduce another mechanism from a very different domain.
Can you describe the salient features of such a "state of the art simulator", and why the HDL is insufficient?
"For my convenience because I think that way" is an insufficient justification
A "pure SW mind" is a bad thing to have anywhere near hardware.
C-style #ifdefs are a revolting concept that had some value in 1980, when compilers and memory were very limited. Few modern languages have them, for many very sound reasons.
Anyway, I really wish that some academically minded person would compare VUnit, OSVVM, and UVVM. The UVVM forum does not seem to be very active; I posted two questions and never got a reply at all, even after many days.
I really wish that instead of VUnit, UVVM, OSVVM, cocotb, etc. there were one great solution. But such a solution would have to come from industry, which it hasn't, so we are all left to create open-source solutions.
Again, no.
Fortran has absolutely no relation to this. You might want to read up on the `yield` concept of modern languages (Python, C++). Then you will understand that this is the way to build simulators. It's not a different domain either, because the yield mechanisms allow you to model parallelism, or to clearly denote sequential processes, depending on context.
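To show what I mean, here's a toy cooperative scheduler (a sketch, not a real simulator kernel; the `(delay, label)` yield protocol is invented for the example) where `yield` is all that's needed to interleave "parallel" processes in time order:

```python
# Sketch: an event-queue scheduler built on plain generators. Each
# process yields (delay_until_next_activation, event_label); the
# scheduler pops whichever process is due next.

import heapq

def scheduler(processes, until):
    """Run processes in simulated-time order up to time `until`."""
    trace = []
    queue = [(0, i, p) for i, p in enumerate(processes)]
    heapq.heapify(queue)  # ordered by (time, tie-break index)
    while queue:
        now, i, proc = heapq.heappop(queue)
        if now > until:
            break
        try:
            delay, event = next(proc)
        except StopIteration:
            continue  # process finished
        trace.append((now, event))
        heapq.heappush(queue, (now + delay, i, proc))
    return trace

def clock(period):
    """A free-running clock process."""
    while True:
        yield period, "tick"

def pulse():
    """A one-shot process: fire once now, once again 5 units later."""
    yield 5, "armed"
    yield 1, "fired"

print(scheduler([clock(3), pulse()], until=7))
```

Each process is written as plain sequential code, yet the trace interleaves them by simulated time; that is exactly the delta-cycle/event-wheel pattern a simulator kernel is built on.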
What is insufficient about the concepts and constructs that already exist in every HDL for parallelism?
Wanting to add "alien" constructs to any language just because they are "neat" is not a valid reason. That way leads to C++.
I am not saying that there are issues WRT parallelism in an HDL hardware *description*. You just can't write a VHDL simulator (or any sort of procedural inline verification) in VHDL itself. That's why all these different external frameworks are needed.
Please, again, read up on the `yield` mechanisms before you call them 'alien'. They're as basic as a function return, and no, I have absolutely no desire for functional iterator concepts to ever be added to a VHDL draft at all.
But again, if you want to discuss the advantages of Python as an HDL and verification setup, I'd suggest opening another thread.