> param2 <= (param2 * 10) + unsigned(dispByteLatch(3 downto 0)); -- just use the low bits
I get the error "expression has 14 elements, but must have 7 elements" on this line.
WHY are there two different methods/types? "integer range 0 to 127" vs "unsigned(6 downto 0)"? Or what about "std_logic_vector(6 downto 0)"? Is there a reason to use one type over another? Does each have advantages? Is there a penalty when you have to convert one type to another?
Now, let's get into the meat of the problem.
INTEGERs are conceptually numbers - you can't ask for the 2nd binary digit of an integer directly. By default they support the range of a signed 32-bit number, and you can constrain them to a smaller range.
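As a quick sketch (the signal names here are made up for illustration): a constrained integer lets the tools infer minimal storage, but you still can't touch its bits.

```vhdl
-- Hypothetical declarations for illustration only.
signal count : integer range 0 to 127;  -- tools infer 7 bits of storage

-- count <= count + 1;     -- arithmetic reads naturally
-- bit3  <= count(3);      -- ILLEGAL: an integer has no bit indexing
```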
An individual STD_LOGIC signal must have one of nine different values:
'U': Uninitialized. This signal hasn't been set yet.
'X': Unknown. Impossible to determine this value/result.
'0': Logic 0.
'1': Logic 1.
'Z': High impedance.
'W': Weak signal; can't tell if it should be 0 or 1.
'L': Weak signal that should probably go to 0.
'H': Weak signal that should probably go to 1.
'-': Don't care.
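To give one place where the extra values matter, here is a minimal (hypothetical) tri-state driver using 'Z' - releasing the line lets another driver on the same net take over:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity tristate_demo is
  port (
    enable : in  std_logic;
    data   : in  std_logic;
    bus_io : out std_logic
  );
end entity;

architecture rtl of tristate_demo is
begin
  -- Drive the line only when enabled; otherwise release it to 'Z'.
  bus_io <= data when enable = '1' else 'Z';
end architecture;
```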
A STD_LOGIC_VECTOR is an array of STD_LOGIC. This allows you to address individual 'slices' of the array, or manipulate individual bits with logical functions. However, you can't perform math on them - they are just a list of STD_LOGIC values. And as an array, they have a 'length attribute (and others) that an INTEGER doesn't have.
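A sketch of what that looks like in practice (signal names are hypothetical):

```vhdl
-- Assumes: use ieee.std_logic_1164.all;
signal v   : std_logic_vector(6 downto 0);
signal nib : std_logic_vector(3 downto 0);

-- nib <= v(3 downto 0);        -- slice out the low four bits
-- v   <= v and "1010101";      -- bitwise logic is fine
-- v   <= v + 1;                -- ILLEGAL: no arithmetic on std_logic_vector
-- v'length                     -- attribute: evaluates to 7
```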
UNSIGNED and SIGNED are extensions of STD_LOGIC_VECTOR, the difference being that you can do math with them. When you do math, bit i of the vector takes on a place value of 2^i (where i is zero for the rightmost, least significant bit); for SIGNED, the leftmost bit of an N-bit vector instead carries a weight of -2^(N-1).
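The same bit pattern therefore means different numbers depending on the type - for example:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- "1100011" as UNSIGNED: 64 + 32 + 2 + 1 = 99
-- "1100011" as SIGNED:  -64 + 32 + 2 + 1 = -29 (leftmost bit weighs -2^6)
signal u : unsigned(6 downto 0) := "1100011";
signal s : signed(6 downto 0)   := "1100011";
```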
There is no penalty when you convert from one type to another (at least when running in hardware - the conversions are just a reinterpretation of the same wires, no gates). The more explicit you are about the data type you want, the better the chance that you will get what you actually intended and not what the tools are forced to choose for you.
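For reference, the standard numeric_std conversions between these types look like this (signal names are hypothetical):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

signal slv : std_logic_vector(6 downto 0);
signal u   : unsigned(6 downto 0);
signal i   : integer range 0 to 127;

-- u   <= unsigned(slv);        -- type cast: same bits, zero cost
-- slv <= std_logic_vector(u);  -- type cast: same bits, zero cost
-- i   <= to_integer(u);        -- numeric_std conversion function
-- u   <= to_unsigned(i, 7);    -- target width must be given explicitly
```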
The error you are getting is because you are throwing away the most significant bits when you store the result back into param2. In numeric_std, a multiply produces a result as wide as both operand widths combined: a 7-bit unsigned times an integer (which is converted to another 7 bits) yields 14 bits - the "14 elements" in the error message. And the extra width is genuinely needed: multiplying by 10 adds up to 4 more bits of data (for example, 127*10 = 1270, which needs 11 bits).
It is the language being helpful and telling you that you are throwing away information. It doesn't know you are limiting the input to two digits of ASCII, so that only values "0000000" to "1100011" (0 to 99) are expected.
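One way to make the original assignment legal - assuming param2 really is unsigned(6 downto 0) and the two-ASCII-digit bound holds, so the high bits can never be set - is to make the truncation explicit with resize():

```vhdl
-- resize() on an unsigned keeps the low bits and discards the high
-- ones, which we have argued are always zero for values 0 to 99.
param2 <= resize((param2 * 10) + unsigned(dispByteLatch(3 downto 0)),
                 param2'length);
```

This keeps the same arithmetic but tells the tools, in so many words, that dropping the upper 7 bits is deliberate.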