The absolute value may have a large tolerance, but the voltage is quite stable. So after you have measured the voltage for each device and stored the value into EEPROM, it will be sufficiently accurate for most measurements using the internal 10-bit ADC.
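Something like this is what I mean (a minimal sketch assuming an ATmega328P with avr-gcc/avr-libc; the EEPROM address and the idea that a one-time calibration step has already written the measured reference voltage there, in millivolts, are just my choices for illustration):

```c
/* Minimal sketch: ADC on the internal 1.1 V reference, scaled by a
 * per-device calibration value stored in EEPROM (hypothetical slot). */
#include <avr/io.h>
#include <avr/eeprom.h>

#define CAL_ADDR ((uint16_t *)0)   /* hypothetical EEPROM slot: measured Vref in mV */

static void adc_init(void)
{
    ADMUX  = (1 << REFS1) | (1 << REFS0);                /* internal 1.1 V reference */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1);  /* enable ADC, clk/64 */
}

static uint16_t adc_read(uint8_t channel)
{
    ADMUX = (ADMUX & 0xF0) | (channel & 0x0F);  /* select channel, keep REFS bits */
    ADCSRA |= (1 << ADSC);                      /* start conversion */
    while (ADCSRA & (1 << ADSC))                /* wait until done */
        ;
    return ADC;                                 /* 10-bit result */
}

/* Convert a raw reading to millivolts using the stored per-device Vref. */
static uint16_t adc_to_mv(uint16_t raw)
{
    uint16_t vref_mv = eeprom_read_word(CAL_ADDR);  /* e.g. 1078 instead of nominal 1100 */
    return (uint32_t)raw * vref_mv / 1024;
}
```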
Why 1.1V and not 2.56V: the reference voltage can only be lower than the supply voltage. Many AVRs work at 1.8-5.5V, so a 2.56V reference simply does not work.
So my question: is Atmel's internal (bandgap) reference really that crappy, or may I safely trust it?
I wonder why 1.1 instead of the standard 1.2 bandgap reference?
It's especially puzzling given that all AVR [voltage] references are *derived* from the standard 1.2[5]V bandgap reference.
I wonder why 1.1 instead of the standard 1.2 bandgap reference?
That seems to be quite different from the initial question you asked.
I was also wondering why 1.1V? Other Atmel parts have a 2.56V reference, which makes much more sense to me. So what is the use of a 1.1V reference — BTW, a bandgap reference is about 1.2V, so I also wonder why build an... amplifier to output a lower voltage, potentially with less accuracy
It's especially puzzling given that all AVR [voltage] references are *derived* from the standard 1.2[5]V bandgap reference.
I don't think that's true, at least not per Atmel.
The internal 1.1V reference is generated from the internal bandgap reference (V_BG) through an internal amplifier.
The internal 1.1V reference is generated from the internal bandgap reference (V_BG) through an internal amplifier.
The internal 2.56V reference is generated from the internal bandgap reference (V_BG) through an internal amplifier.
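A quick back-of-the-envelope check, assuming a nominal ~1.25V bandgap behind that amplifier: 1.1V / 1.25V ≈ 0.88× gain for the 1.1V option, and 2.56V / 1.25V ≈ 2.05× for the 2.56V option, so both would simply be different gain settings on the same bandgap.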
If you have to buy and test a chip *before* you can estimate what the resistor values will be, what's the point of such variability? It only increases design time and forces designers to recalibrate their projects on a unit-by-unit basis. Well, not exactly, because instead they'll choose an external voltage reference that they trust *is* stable across the manufacturer's production.
If you have to buy and test a chip *before* you can estimate what the resistor values will be, what's the point of such variability? It only increases design time and forces designers to recalibrate their projects on a unit-by-unit basis. Well, not exactly, because instead they'll choose an external voltage reference that they trust *is* stable across the manufacturer's production.
Many voltage references need to be trimmed if you want a fixed reference voltage. If you buy a voltage reference, the manufacturer has already done that step for you during production by adjusting a voltage divider at the output amplifier.
Atmel skipped this step because it adds extra cost.
I'm not quite sure you've got the point here, though. Sure, I guess the reference voltage is *stable* on any given chip. The thing is, there seems to be so much variability across chips that the reference seems impractical for determining component values — such as resistor divider networks — *before* you buy the chip.
So, given that every hassle must have a well-founded reason, why 1.1V? Does *that* value have some unquestionable practical reason for being selected by the designers?
I'm not quite sure you've got the point here, though. Sure, I guess the reference voltage is *stable* on any given chip. The thing is, there seems to be so much variability across chips that the reference seems impractical for determining component values — such as resistor divider networks — *before* you buy the chip.
The other option is to compensate/calibrate in software. Store calibration in EEPROM.
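To show what I mean by calibrate-in-software (a minimal sketch assuming an ATmega328P with avr-gcc/avr-libc; the EEPROM slot and its contents are my own conventions): the classic trick of sampling the internal bandgap against AVcc even lets you recover Vcc itself, and using a per-device bandgap value from EEPROM makes it noticeably better than assuming the nominal 1.1V.

```c
/* Measure Vcc by sampling the internal 1.1 V bandgap against AVcc,
 * using a per-device bandgap value stored in EEPROM (hypothetical slot). */
#include <avr/io.h>
#include <avr/eeprom.h>

#define BG_MV_ADDR ((uint16_t *)2)   /* hypothetical slot: measured V_BG in mV */

static uint16_t read_vcc_mv(void)
{
    uint16_t bg_mv = eeprom_read_word(BG_MV_ADDR);  /* e.g. 1086 mV from calibration */

    /* AVcc as reference; MUX[3:0] = 1110 selects the 1.1 V bandgap as the input. */
    ADMUX  = (1 << REFS0) | (1 << MUX3) | (1 << MUX2) | (1 << MUX1);
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);

    for (uint8_t i = 0; i < 2; i++) {   /* throw away the first conversion to let things settle */
        ADCSRA |= (1 << ADSC);
        while (ADCSRA & (1 << ADSC))
            ;
    }
    uint16_t raw = ADC;                 /* raw = 1024 * V_BG / Vcc */

    return (uint32_t)bg_mv * 1024UL / raw;
}
```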
So, given that every hassle must have a well-founded reason, why 1.1V? Does *that* value have some unquestionable practical reason for being selected by the designers?
Why is that worse than any other value?
Sometimes I think we just lose all context when discussing topics like this. An Atmel uC uses its various optional reference voltages (1.1V, 2.56V, Vcc, external) to support its on-chip ADC subsystem. Then we would have to know which Atmel series specifically. If, say, a standard 328P is being discussed, then the reference option selected only needs to be able to support 10-bit ADC resolution.
The ADCs on such uCs are very handy and useful, to be sure, but should not be considered instrumentation-quality.
Better to include no reference circuitry at all in such cases and let designers whack in an external reference, which proves way more consistent even across suppliers. I still don't see any valid, logical justification behind this choice.
Better to include no reference circuitry at all in such cases and let designers whack in an external reference, which proves way more consistent even across suppliers. I still don't see any valid, logical justification behind this choice.
ATmega48PA/88PA/168PA/328P features an internal bandgap reference. This reference is used for Brown-out Detection, and it can be used as an input to the Analog Comparator or the ADC.
You can add one if you like, but the internal reference that comes for free uses only 10uA.
But just in case: a bandgap reference *requires* the manufacturer to tweak the internal resistor values anyway. Bandgap voltage is about 1.25V by definition. So why *add* components that a) would need to be at least as accurate and stable as the bandgap reference itself, and b) if they do add components, why stop halfway and leave an impractical voltage reference that potentially varies with each chip they release?
I wonder why 1.1 instead of the standard 1.2 bandgap reference?
Atmel's voltage references seem stable across temperature ranges, right?
Because the analog designers Atmel had working on the ATmega series were idiots.
The ADC is terrible, and that's at 10 bits!
[...] your characterisation of the reference is going overboard; the data sheet excerpts attached below tell a different story.
Atmel's voltage references seem stable across temperature ranges, right?
Not very. I can't remember the numbers, but the temperature error was way too much, so I had to implement temperature compensation along with my software calibration, which is rather easy on chips that have the internal temperature sensor.
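Reduced to a sketch, the compensation is roughly this (the constants are hypothetical per-device values you would determine during calibration and store in EEPROM next to the bandgap value; on a 328P-style part the temperature reading comes from ADC channel 8 measured against the 1.1V reference):

```c
/* Linear temperature compensation of the reference value (sketch). */
#include <stdint.h>

static const uint16_t vref_mv_at_cal  = 1082; /* measured V_ref at calibration temperature (hypothetical) */
static const uint16_t temp_raw_at_cal = 335;  /* raw temp-sensor reading at that temperature (hypothetical) */
static const int16_t  vref_uv_per_lsb = -40;  /* hypothetical reference drift, uV per temp-sensor LSB */

/* Reference voltage in mV, corrected for the current temperature reading. */
static uint16_t vref_mv_now(uint16_t temp_raw)
{
    int16_t delta   = (int16_t)temp_raw - (int16_t)temp_raw_at_cal;
    int32_t corr_uv = (int32_t)delta * vref_uv_per_lsb;   /* first-order (linear) correction */
    return (uint16_t)((int32_t)vref_mv_at_cal + corr_uv / 1000);
}
```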
I think the reason for the inaccuracy might be that they originally designed the reference for the brown-out detector. That could also have something to do with the voltage selection. But primarily, they designed the reference for low current consumption; accuracy is secondary, as it is not important at all for the BOD.
Even if it were one order of magnitude more accurate, you would still want to use software calibration for anything accurate.
I don't understand your idea of requiring different external resistor values; why not do it in software like everybody else? It's not so inaccurate that you would lose too much usable range this way.
use software calibration for anything accurate.
I can understand that, but it might not be quite easy. Take the ATtiny1634, for instance, which I've selected for one of my projects. It has no DAC...
You have overlooked something quite fundamental: a bandgap reference uses the difference between the forward voltages of two PN junctions, i.e. it requires bipolar structures, and an AVR uses a CMOS process. A CMOS process can implement bipolar transistors only as parasitic devices, and their performance doesn't tend to be very good.
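To spell out the textbook first-order bandgap idea (only a sketch, since Atmel's actual circuit isn't published): V_out = V_BE + M * (kT/q) * ln(N). The V_BE term falls with temperature at roughly -2 mV/°C, while the (kT/q) * ln(N) term rises at about +0.086 mV/°C per unit of M * ln(N), so picking M * ln(N) ≈ 23 cancels the drift to first order and lands the output near the extrapolated silicon bandgap of ~1.2 V.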
This would normally be my first assumption, but if the other Atmel processors use a 1.25 volt bandgap reference, then how likely is it that they have a different process which produces a 1.1 volt bandgap reference?
Take the ATtiny1634, for instance, which I've selected for one of my projects. It has no DAC and I can't see a way to determine the voltage reference other than by using an external, variable, precision voltage that you feed through the comparator.
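For what it's worth, a rough sketch of how that comparator measurement could look (register and bit names here are the ATmega328P's ACSR/ACBG/ACO; parts like the ATtiny1634 name theirs slightly differently, so check its datasheet; the adjustable precision source is assumed to drive AIN1):

```c
/* Comparator-based bandgap measurement (sketch). The bandgap goes to the
 * comparator's positive input, the adjustable precision voltage to AIN1:
 * sweep the external voltage until ACO flips, and the external voltage at
 * the flip point is the actual bandgap value for that chip. */
#include <avr/io.h>
#include <stdint.h>

static void comparator_init(void)
{
    ACSR = (1 << ACBG);   /* select the internal bandgap as the comparator's + input */
    /* AIN1 (the - input) is driven by the external precision source. */
}

/* Returns 1 while the bandgap is still above the external voltage. */
static uint8_t bandgap_above_ext(void)
{
    return (ACSR & (1 << ACO)) ? 1 : 0;
}
```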