That doesn't make sense from a design perspective. In an output word the least significant bits affect the resolution while the most significant bits affect the magnitude. If you masked the most significant two bits of a 12-bit word to zero you would completely change the output value, whereas if you masked the least significant two bits to zero you would keep essentially the same value but lose two bits of precision. If you wanted the different chips in the family to be plug-compatible, you would want to mask off the lower two bits of the 12-bit word to leave a lower-precision 10-bit word.
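To put rough numbers on that (a quick illustration in C, nothing chip-specific; 0xABF is just an arbitrary example value):

#include <stdio.h>

int main(void)
{
    unsigned int value = 0xABF;               /* example 12-bit value, 2751 decimal      */

    unsigned int msb_masked = value & 0x3FF;  /* top two bits cleared    -> 0x2BF = 703  */
    unsigned int lsb_masked = value & 0xFFC;  /* bottom two bits cleared -> 0xABC = 2748 */

    /* Masking the MSBs changes the magnitude completely (2751 -> 703);
       masking the LSBs keeps the magnitude and only rounds off two
       bits of precision (2751 -> 2748). */
    printf("original %u, MSBs masked %u, LSBs masked %u\n",
           value, msb_masked, lsb_masked);
    return 0;
}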
Although I have not had a chance to read the datasheet, I would be surprised if the chip designers dropped such a clanger.
Yeah, I see my error. Corrected code for a 12-bit D/A using only 10 bits should have been:
SPI_DATA <= 4 config bits          // command/config nibble goes out first
SPI_DATA <= (temp >> 9) & 1        // value bit 9 (MSB of the 10-bit value)
SPI_DATA <= (temp >> 8) & 1        // value bit 8
...                                // value bits 7 down to 2, same pattern
SPI_DATA <= (temp >> 1) & 1        // value bit 1
SPI_DATA <= temp & 1               // value bit 0 (LSB)
SPI_DATA <= 0                      // two zero bits pad the don't-care
SPI_DATA <= 0                      // LSB positions of the 12-bit data field
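For what it's worth, here's a minimal bit-banged sketch of that sequence in C. The send_bit() routine is just a stand-in stub that prints each bit so the frame can be eyeballed, not anyone's actual SPI driver, and the 4-bit config value is arbitrary:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for a real one-bit-at-a-time SPI routine (hypothetical);
   here it just prints the bit so the frame can be inspected. */
static void send_bit(uint8_t bit)
{
    printf("%u", bit);
}

/* Clock out a frame for a 12-bit D/A when only a 10-bit value is
   available: 4 config bits, the 10 value bits MSB first, then two
   zero padding bits in the don't-care LSB positions. */
static void dac_write_10bit(uint8_t config, uint16_t temp)
{
    int i;

    for (i = 3; i >= 0; i--)        /* 4 config bits, MSB first  */
        send_bit((config >> i) & 1);

    for (i = 9; i >= 0; i--)        /* 10 value bits, MSB first  */
        send_bit((temp >> i) & 1);

    send_bit(0);                    /* two zero bits pad the     */
    send_bit(0);                    /* field's don't-care LSBs   */
}

int main(void)
{
    dac_write_10bit(0x3, 0x2AB);    /* arbitrary config and 10-bit value */
    printf("\n");
    return 0;
}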
EDIT: Now that I really think about it, the previous code snippet I posted would work, depending on how Dave worked out the arithmetic of value before storing it into temp for the shift operation (which isn't shown). I'm confusing the crap out of myself.
EDIT EDIT: Ok, head straight now. The 12-bit and 10-bit D/As are not directly interchangeable; some code modification is needed either way.
If the 2-bit zero mask is in the MSBs, then swapping from the 12-bit to the 10-bit part requires a code change, because the two don't-cares are the last two bits sent on the 10-bit part. For the 12-bit part there's no scaling required on value, but on the 10-bit part the zero mask will give only 8-bit ranging unless the value is scaled by a x4 multiplier.
If the 2-bit zero mask is in the LSBs, then interchanging also requires a code change. The 12-bit D/A would have to /4 scale the value for the desired mapping; popping in a 10-bit part without getting rid of the /4 scalar would limit the output to 255.
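Working the numbers both ways, under the assumption that the 10-bit part takes its data from the ten most significant positions of the 12-bit field and ignores the two don't-care LSBs (a rough C sanity check, not any particular part's frame format), the x4 and 255 figures fall out like this:

#include <stdio.h>
#include <stdint.h>

/* Assumption: the 10-bit part uses the top ten bits of the 12-bit
   field and ignores the two don't-care LSBs; the 12-bit part uses
   the whole field. */
static unsigned seen_by_12bit(uint16_t field) { return field; }
static unsigned seen_by_10bit(uint16_t field) { return field >> 2; }

int main(void)
{
    uint16_t value = 1023;                        /* full-scale 10-bit value */

    /* Zero mask in the MSBs: value right-justified. Fine on the 12-bit
       part, but the 10-bit part only ranges 0..255 unless the value is
       pre-multiplied by 4. */
    uint16_t field_msb_mask = value & 0x3FF;
    printf("MSB mask: 12-bit sees %u, 10-bit sees %u\n",
           seen_by_12bit(field_msb_mask), seen_by_10bit(field_msb_mask));

    /* Zero mask in the LSBs: value left-justified. The 10-bit part gets
       the full 0..1023 range back, but a leftover /4 on a 10-bit value
       would cap the code at 255. */
    uint16_t field_lsb_mask = (uint16_t)((value & 0x3FF) << 2);
    printf("LSB mask: 12-bit sees %u, 10-bit sees %u\n",
           seen_by_12bit(field_lsb_mask), seen_by_10bit(field_lsb_mask));
    return 0;
}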
Was the D/A even supposed to be interchangeable? Why bother if it can be shown that one performs better than the other?