Attached is an archive of a small test design I hacked up to try out the two suggested methods of implementing a command parser that avoids firing the case selectors for commands we'd like to disable. As a side effect of disabling a command, a signal assigned only in that selector should also get optimized away if it is not referenced in higher-up code.
For purposes of discussion, the design has three variants: DEFAULT, A and B. In the default, all commands are handled. In A, some commands are disabled. In B, the other commands are disabled. The variant is set with a generic at the top level.
A command sets or clears an output, depending on the bit in the argument. If the command is valid, the signal cmd_ok is strobed. If the command is invalid (because it is disabled), the signal cmd_nok is strobed.
The two methods of disabling commands are:
1. A function is used to "filter" out invalid commands, based on the variant chosen. Commands not handled are "invalid." In the parser's case statement, all possible commands are listed as selectors, but the commands not handled in a given variant have been "remapped" to the invalid command before the case expression is evaluated, so their selectors are never invoked.
2. In each case selector, if the chosen variant allows the command, it is handled: the output bit is assigned and cmd_ok is asserted. Otherwise the output signal is left untouched and cmd_nok is asserted.
The source code is three files.
cmd_pkg.vhdl defines a few useful types and two functions. One function handles that remapping and outputs an enumerated type. The other simply converts the std_logic_vector command to that same enumerated type.
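A minimal sketch of what such a package might look like (all names, command encodings, and the per-variant choices here are hypothetical; the real cmd_pkg.vhdl surely differs):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package cmd_pkg is
  -- Hypothetical types: the design's variants and its command set.
  type variant_t is (DEFAULT, A, B);
  type cmd_t     is (CMD_SET_E, CMD_SET_F, CMD_SET_G, CMD_INVALID);

  -- Convert the raw std_logic_vector command field to the enumerated type.
  function to_cmd(slv : std_logic_vector(1 downto 0)) return cmd_t;

  -- Remap commands disabled in the chosen variant to CMD_INVALID.
  function filter_cmd(cmd : cmd_t; variant : variant_t) return cmd_t;
end package;

package body cmd_pkg is
  function to_cmd(slv : std_logic_vector(1 downto 0)) return cmd_t is
  begin
    case slv is
      when "00"   => return CMD_SET_E;
      when "01"   => return CMD_SET_F;
      when "10"   => return CMD_SET_G;
      when others => return CMD_INVALID;
    end case;
  end function;

  function filter_cmd(cmd : cmd_t; variant : variant_t) return cmd_t is
  begin
    case variant is
      when A =>  -- assumption: variant A drops CMD_SET_G
        if cmd = CMD_SET_G then return CMD_INVALID; end if;
      when B =>  -- assumption: variant B drops the others
        if cmd = CMD_SET_E or cmd = CMD_SET_F then return CMD_INVALID; end if;
      when DEFAULT => null;  -- everything allowed
    end case;
    return cmd;
  end function;
end package body;
```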
parser.vhdl is the parser. It has two architectures: one, called "filter," uses the filter method described above to form the case expression. The other, "nofilter," implements method 2 above, testing in each case selector whether the command should be handled in the chosen variant.
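To make the contrast concrete, here is a hedged sketch of the two styles (entity ports, helper-function names, and which variant disables what are all assumptions, not the real design):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use work.cmd_pkg.all;  -- assumed to provide variant_t, cmd_t, to_cmd, filter_cmd

entity parser is
  generic (VARIANT : variant_t := A);
  port (
    clk     : in  std_logic;
    cmd_in  : in  std_logic_vector(1 downto 0);
    arg     : in  std_logic;
    e_flag  : out std_logic;
    cmd_ok  : out std_logic;
    cmd_nok : out std_logic);
end entity;

-- Method 1: remap disabled commands before the case expression.
architecture filter of parser is
begin
  process (clk) begin
    if rising_edge(clk) then
      cmd_ok <= '0'; cmd_nok <= '0';
      case filter_cmd(to_cmd(cmd_in), VARIANT) is
        when CMD_SET_E   => e_flag <= arg; cmd_ok <= '1';
        when CMD_INVALID => cmd_nok <= '1';
        when others      => cmd_ok <= '1';  -- remaining commands handled similarly
      end case;
    end if;
  end process;
end architecture;

-- Method 2: test the variant inside each case selector.
architecture nofilter of parser is
begin
  process (clk) begin
    if rising_edge(clk) then
      cmd_ok <= '0'; cmd_nok <= '0';
      case to_cmd(cmd_in) is
        when CMD_SET_E =>
          if VARIANT /= B then  -- assumption: E is disabled only in variant B
            e_flag <= arg; cmd_ok <= '1';
          else
            cmd_nok <= '1';
          end if;
        when others => cmd_nok <= '1';  -- remaining commands handled similarly
      end case;
    end if;
  end process;
end architecture;
```

Either way, the synthesizer sees that a disabled command's selector can never assign its output, which is what lets it prune the flop and its decode logic.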
top.vhdl holds six different top-level entities. Three are for the "filter" parser, with variants default, A and B, and three are for the "no-filter" parser. This could have been boiled down to just two entities, filter and no-filter, but the synthesizer doesn't allow setting an enumerated-type generic from the command line. Please note: the top-level entities do not include the signals assigned in the "unused" command case selectors. My real design is like that -- those signals simply don't exist.
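For illustration, one of those six wrappers might look like the sketch below (hypothetical names and ports; the point is that the variant is fixed by a generic map, which is why one wrapper per variant/parser combination is needed):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use work.cmd_pkg.all;

-- One of six wrappers: variant A, no-filter parser (names assumed).
entity top_A_nf is
  port (
    clk     : in  std_logic;
    cmd_in  : in  std_logic_vector(1 downto 0);
    arg     : in  std_logic;
    e_flag  : out std_logic;   -- outputs for disabled commands simply aren't here
    cmd_ok  : out std_logic;
    cmd_nok : out std_logic);
end entity;

architecture rtl of top_A_nf is
begin
  -- Direct entity instantiation selects the "nofilter" architecture
  -- and pins the variant down at elaboration time.
  u_parser : entity work.parser(nofilter)
    generic map (VARIANT => A)
    port map (
      clk     => clk,
      cmd_in  => cmd_in,
      arg     => arg,
      e_flag  => e_flag,
      cmd_ok  => cmd_ok,
      cmd_nok => cmd_nok);
end architecture;
```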
In the synthesis directory is a Synplify Pro project file. I used the version of Synplify that ships with Lattice Diamond 3.12, which is Q-2020.03L-SP1.
(Interestingly, this version of Synplify claims to support VHDL-2019's conditional analysis feature -- the preprocessor! But I didn't test it, because the point was to test how a synthesizer would handle these cases.)
The project includes the three source files and has two implementations: "top_A_nf" and "top_A_f," for variant A with no filter function and with the filter function. The target is a MachXO2-1200, chosen for no good reason. The correct top level entities are chosen for both.
And the results:
In the filter version, the only notable synthesis warning is that some input bits are unused. These bits would be assigned to output bits if the command weren't filtered out. In the no-filter version, the output bits that go unused because their commands don't exist are never assigned, so we're told that. (The same unused-input warning appears here, too.)
Let's look at the synthesis outputs.
Here is the RTL schematic of the "filtered command" parser. (The top level schematic is nothing more than ports connected to the parser.) It's down to three flip-flops for the outputs that are enabled (E, F, G) and two more for the OK and NOK flags. The command was turned into a one-hot vector, so three valid commands mean three command bits which are ANDed and drive the flops' enables. Clearly the filter did its job, there is no logic at all for the disabled commands.
Here is the RTL schematic for the "no filter" parser. It's all just ... more complicated.
Now the fun ... the optimized schematics. The top level schematics are the same for both versions, so I'll post only one (the "filter" version):
The parser schematics are essentially the same for both versions. Both have one LUT per output (five outputs: the three flag bits E, F and G, plus the OK and NOK bits). How the LUTs are configured differs slightly, but they boil down to exactly the same resource usage, and of course they are functionally identical. Here is the "filter" parser schematic:
CONCLUSION! Both styles result in the same resource use for this example. The signals associated with disabled commands are optimized away, as are the decoders for those commands. This is ultimately what I wanted.
So the question is, which should I (we, you) use? Possibly a matter of preference.
I will give an edge to the "no filter" version, where the test for whether the command is enabled is in the decoder selector. Why? Because in my design, the number of commands that are common to all variants is much greater than the number of variant-specific commands. It's a lot more obvious what's going on if you put the "if VARIANT = THISONE" test in the handful of case selectors that care, rather than burying the command disable/enable in a function in a package in a different source file. We all like to write code that is very obvious to the reader, right? Of course we do!
I hope this was enlightening, and that it shows we don't need no stinkin' preprocessor! All we need to do is ignore a few synthesis warnings about signals not being used or being optimized away.