I figured out a way to get the full 13-bit resolution of the PWM, and now I get the full value range. My timer counter of 8192 is just 2^13, so I can avoid overflow by not shifting up 13 and back down 10, but rather doing the entire calculation with a single net shift of 3.
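Roughly like this, as a minimal sketch of the net-shift idea (my own illustration, not the actual code; hypothetical names, assuming a 10-bit input scaled to a 13-bit timer):

#include <stdint.h>

// Scaling a 10-bit value (0..1023) to a 13-bit timer (TOP = 8191).
// Shifting up 13 first overflows 16 bits and needs a 32-bit intermediate:
uint16_t duty_two_shifts(uint16_t v) {
    return (uint16_t)(((uint32_t)v << 13) >> 10);
}

// Same result with the net shift of 3; the maximum 1023 << 3 = 8184 fits in 16 bits:
uint16_t duty_net_shift(uint16_t v) {
    return (uint16_t)(v << 3);
}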
The ADC, on the face of it, is a lost cause: it is only 10-bit, so I only get a change every 8 values of the PWM. Rick, I'd be interested to hear about your ADC averaging; I started to think about it and wonder if it's worth spending time on.
...
...
Hey, I am actually glad you asked. This may be re-inventing the wheel, but I was so pleased with myself, as a newbie, figuring out a good way to do this...
First, here is a percent error graph so you can determine if this is worth the effort - and it will be a good amount of work!!!
The graph measures a capacitor discharging from 5V, first with an UNO's ATMEGA uncompensated, then again using my board with my compensation at ranges of 5V and below. %Error is relative to the UT61E reading of the voltage. The black plot is the uncompensated ATMEGA; the different colored plots are my different ranges (each a different op-amp multiplier or resistor divider). By the way, I call it my "DinoMeter" because I was using a dinosaur-era laptop for data logging.
This was first posted a year ago, when I was even more of a newbie than today:
https://www.eevblog.com/forum/microcontrollers/an-implementation-of-atmega328-volt-logger-the-dinometer-a-learning-project/msg316233/#msg316233
Since the STM has only 8K flash, I am not sure it has enough RAM to implement this method.
In a nutshell, the method is to store the real-world (DMM-measured) voltage and use the ADC count as an index into a lookup table. So, apart from LSB jitter, once the program gets the ADC reading, it looks up the stored DMM-measured voltage for that ADC count. In theory, as long as the ADC is consistent, you get back the right voltage to within math accuracy.
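To make the idea concrete, here is the direct form of the table as a bare-bones sketch (hypothetical names; as described below, the real implementation stores correction factors rather than voltages to save memory):

#include <avr/pgmspace.h>

// Hypothetical direct lookup: one DMM-calibrated millivolt value per ADC count.
const uint16_t calMillivolts[1024] PROGMEM = { /* filled in during calibration */ };

uint16_t measured_mv(uint16_t adc) {
    return pgm_read_word(&calMillivolts[adc]);  // the voltage the DMM saw at this count
}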
Implementation is of course limited by RAM. To reduce the RAM usage, rather than storing 1024 readings for the 10-bit ADC, I store about 400 readings at 16 bits each.
At low ADC counts the response is least linear, so compensation entries have to be frequent; at higher counts the response is more linear, so entries can be less frequent. The table is continuous (1 entry per ADC count) up to 250, then 1 entry per 5 ADC counts above that, totaling 250+156 entries. At ADC>1020, I use 2 ADC counts per entry to make it easier to deal with overflow. Reducing the ~400 entries to about 250 is feasible and (if memory serves) still gives rather decent accuracy.
That totals 406 entries per range, each range being a different voltage divider and/or op-amp multiplier. In my implementation I had 5 divider/op-amp multiplier choices, so my "DinoMeter" has 5 voltage ranges for each ADC input.
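In code, fetching the stored entry for an integer ADC count from that split layout looks roughly like this (a sketch with pin/range indices matching the table declarations further down; the exact low/high boundary handling is per the comments in the data):

#include <avr/pgmspace.h>

// Hypothetical: return the stored 16-bit factor for one ADC count.
uint16_t cal_factor(uint8_t pin, uint8_t range, uint16_t adc) {
    if (adc <= 248)
        return pgm_read_word(&CalDataLow[pin][range][adc]);   // dense part: 1 entry per count
    uint16_t i = (adc < 250) ? 0 : (adc - 250) / 5;           // sparse part: 1 entry per 5 counts
    return pgm_read_word(&CalDataHigh[pin][range][i]);
}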
With that many ranges, I have to reduce the RAM need further. I use 16-bit entries instead of a long or a float, giving me ~4 digits of math accuracy. I make use of the fact that (say for the 5V range)
ADC_translated voltage = ADC * 5V / 1023
should not be magnitudes off from the DMM reading; the factor between them should be near 1.
If DMM reading is twice the ADC_translated voltage, the factor is 2.
If DMM reading is half the ADC_translated voltage, the factor is 0.5.
Now I have a much smaller-magnitude number to deal with than if I stored the exact DMM reading, which could range from mV to 30V across my ranges. I use a 16-bit unsigned int with an implied decimal:
65535 => 6.5535, DMM reading is 6.5535 times the translated voltage
20000 => 2.0000, DMM reading is twice ADC_translated voltage
05000 => 0.5000, DMM reading is half the ADC_translated voltage
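As a sketch of that fixed-point step (hypothetical helper name; the 10000 scale for the 4 implied decimals is from the examples above):

// Apply a stored calibration factor to a raw ADC count (5V range assumed).
// factor is the 16-bit table entry with 4 implied decimals (10000 == 1.0000).
float corrected_volts(uint16_t adc, uint16_t factor) {
    float translated = adc * 5.0f / 1023.0f;   // naive ADC-to-volts conversion
    return translated * (factor / 10000.0f);   // scale to what the DMM actually read
}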
So, if the ADC is perfectly consistent, I get back exactly the DMM reading I stored when I calibrated that ADC count, within math accuracy. This arithmetic gives me 4 digits of math accuracy with 16-bit numbers, and any op-amp/voltage-divider non-linearity issues are already taken into account, since the correction reproduces exactly what the DMM was reading.
To account for ±1 LSB jitter, I collect multiple readings and average them. If they average to ADC=123.4, I interpolate between the readings for ADC=123 and ADC=124. I was using a time-based average (average over 250ms) versus a count-based average. I don't recall how many ADC conversions the ATMEGA can do in 250ms; I think it was around 70 for most of my readings.
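Something like this minimal sketch of a time-based average (assuming a blocking analogRead() loop; the helper name is mine):

// Sample for 250 ms and return the mean in tenths of an ADC count,
// so a result of 1234 means ADC = 123.4 (keeps the fraction for interpolation).
uint16_t adc_average_x10(uint8_t pin) {
    uint32_t sum = 0;
    uint16_t n = 0;
    uint32_t start = millis();
    while (millis() - start < 250) {  // 250 ms window
        sum += analogRead(pin);
        n++;
    }
    return (uint16_t)((sum * 10 + n / 2) / n);  // rounded fixed-point mean
}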
Collecting the calibration data is not difficult once I had written the data logger. I even made sure it skips a reading when the UT61E changes range, as it has a habit of giving wild readings during a range change.
I use a capacitor discharge and let it sweep the entire range slowly. The discharge must be slow enough that I get at least 20 readings for each ADC count. A Java program reads both the ATMEGA and the UT61E. I let it run overnight (and longer), and the Java program fills my [1024] Java array for that range, so I know the average UT61E reading for each ADC count. An Excel spreadsheet takes the text export of the arrays and automatically converts the five arrays of data into code, which I insert after the PROGMEM:
// the low is for the low adc reading from 0 to 248
extern PROGMEM prog_uint16_t CalDataLow[4][5][249] = {
{ // Start of A0
// Start of A0 x-24
// Dataset Series number A5C1 (-24X check sum 1015.67648024476 TableSum 4009641)
{ 0,// A0@-24:1 (691231.190000)
0,// A0@-24:2 (691231.190000)
0,// A0@-24:3 (691231.190000)
5612,// A0@-24:4 (131123.094922)
9513,// A0@-24:5 (131123.094922)
…
9962,// A0@-24:246 (131123.042149)
9961,// A0@-24:247 (131123.042040)
9961,// A0@-24:248 (131123.041933)
9961},// A0@-24:249 (131123.041828)
…
…
And so on. You can see that for my Arduino A0 with the op-amp/voltage divider at -24x, when ADC0=246 the correction factor is 9962.
So, the real-world voltage from the DMM was (246 * (-24V) / 1023) * (9962 / 10000).
//
// the high is for ADC readings 250 and up, 1 entry per 5 ADC counts
extern PROGMEM prog_uint16_t CalDataHigh[4][5][156] = {
{ // Start of A0
// Start of A0 x-24
// A5C1 (-24X check sum 1015.67648024476 TableSum 4009641)
{ 9961,// A0@-24:250 (131123.041721)
9961,// A0
…
9222},// A3@-1x:1022 (131125.015115)
// Start of A3 x1
// A5C1 (with 5*675K discharge) (1X check sum 1036.06042214359 TableSum 4173638)
{ 10065,// A3@1x :250 (131128.140130)
10064,// A3@1x :255 (131128.135438)
10063,// A3@1x :260 (131128.134753)
10060,// A3@1x :265 (131128.134113)
10058,// A3@1x :270 (131128.133444)
10059,// A3@1x :275 (131128.132815)
…
For Arduino A3 with the op-amp/voltage divider at 1x and an average ADC3 = 272.15, my DMM voltage would be:
DMM_volt at 270 = (270 * 5V / 1023) * (10058 / 10000), call this V270
DMM_volt at 275 = (275 * 5V / 1023) * (10059 / 10000), call this V275
ADC 272.15 is 2.15 above 270 and below the next entry at 275, so:
V272.15 = V270 + (V275 - V270) * 2.15 / 5
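Putting the lookup and the interpolation together, a minimal sketch for the sparse part of the table (hypothetical helper; the real tables live in PROGMEM as shown above):

#include <avr/pgmspace.h>

// Interpolate a DMM-calibrated voltage from one row of the high table
// (1 entry per 5 ADC counts, starting at ADC 250). adcAvg is the averaged
// reading, e.g. 272.15; fullScale is the range, e.g. 5.0 for the 5V range.
float interp_volts(float adcAvg, const uint16_t *highRow, float fullScale) {
    uint16_t i  = ((uint16_t)adcAvg - 250) / 5;   // index of the entry at/below adcAvg
    uint16_t a0 = 250 + i * 5;                    // that entry's ADC count
    float v0 = a0 * fullScale / 1023.0f * pgm_read_word(&highRow[i]) / 10000.0f;
    float v1 = (a0 + 5) * fullScale / 1023.0f * pgm_read_word(&highRow[i + 1]) / 10000.0f;
    return v0 + (v1 - v0) * (adcAvg - a0) / 5.0f; // linear interpolation, per the example
}

Called with the A3 1x row and adcAvg = 272.15, it lands on the 10058 entry at ADC 270 and reproduces the V272.15 arithmetic above.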
As to how well (or not) the compensation does, you can see the data I collected about a year ago. I actually doubt you have enough RAM to do it at the same level. I can dig up my Excel sheet and analyze what happens if you cut the table down to half: say, every other ADC entry up to 248, then one entry per 10 ADC counts after that.
If I recall, going down to 256 total entries per range still gave good numbers in the Excel analysis. I did that analysis by taking the real value for each skipped entry and comparing it to the value interpolated from its neighbors; the delta is the error penalty for skipping. But when I finished the program I still had RAM left, so I expanded the table to use up as much of the remaining flash space as possible.
Rick