I'm not quite sure you've got the point here though. Sure, I guess the reference voltage is *stable* on any given chip. The thing is, there seems to be so much variability across chips that the reference is impractical for determining component values (such as resistor divider networks) *before* you buy the chip.
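To put a rough number on it (using the 1.0–1.2V window the datasheet gives around the 1.1V nominal): if I size a divider so my maximum input lands exactly at 1.1V on the pin, a chip whose reference actually sits at 1.2V tops out around

$$\frac{1.1\ \mathrm{V}}{1.2\ \mathrm{V}} \times 1023 \approx 938\ \text{counts}$$

i.e. roughly 8% of the ADC range wasted, while a chip whose reference sits at 1.0V clips the top ~9% of my input range entirely.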
> The other option is to compensate/calibrate in software. Store calibration in EEPROM.
I know. But between whacking in a known voltage reference that I'm sure always provides the same voltage level and bothering with software calibration that I'd need to perform on every single board... well, it doesn't take me two seconds to choose. Know what I mean? ;-)
Besides, software calibration would require knowing how the reference drifts under different temperature conditions, and Atmel doesn't provide any such information; you're on your own. So if you need any kind of precision, even software calibration is a hassle you neither want nor need.
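Just to illustrate the kind of per-board hassle I mean, here's roughly what that calibration boils down to (a minimal sketch assuming avr-gcc/avr-libc; the stored value and the conversion are of course hypothetical, and it still says nothing about temperature drift):

```c
#include <stdint.h>
#include <avr/eeprom.h>

/* Hypothetical per-board calibration value: the *actual* internal reference
 * in millivolts, measured once against a known external voltage during
 * production and written to EEPROM. Nominal 1.1 V, but it varies per chip. */
uint16_t EEMEM vref_mv_cal = 1100;

/* Convert a raw 10-bit ADC reading (taken against the internal reference)
 * to millivolts, using the stored per-board value instead of the nominal. */
static uint16_t adc_to_mv(uint16_t raw)
{
    uint16_t vref_mv = eeprom_read_word(&vref_mv_cal);
    return (uint16_t)(((uint32_t)raw * vref_mv) / 1023UL);
}
```

And that's only the easy part: actually measuring a known voltage on every single board during production is where the real hassle is.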
So, given that every hassle ought to have a well-founded reason, why 1.1V? Does *that* value have some unquestionable practical reason for being selected by the designers?
> Why is that worse than any other value?
Huh? I'm sorry, I'm not sure I understand your question.
But just in case: a bandgap reference *requires* the manufacturer to trim the internal resistor values anyway. Bandgap voltage is about 1.25V by definition. So why a) *add* components that would need to be at least as accurate and stable as the bandgap reference itself, and b) if they do add them, stop half-way and leave an impractical voltage reference that potentially varies with each chip they release?
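(And that ~1.25V isn't an arbitrary number either. Roughly speaking, and glossing over curvature, the cell adds a base-emitter voltage with a negative tempco to a PTAT term with a positive one,

$$V_{\mathrm{ref}} \;=\; V_{BE} \;+\; m\,\frac{kT}{q}\,\ln N$$

where N is the emitter-area ratio and m is set by a resistor ratio; trimming that ratio so the two tempcos cancel is precisely what pins the output near silicon's extrapolated bandgap voltage, hence the ~1.2–1.25V.)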
So if Atmel did bother with a 1.1V reference instead of just keeping the (what I'd call) rock-solid bandgap concept as it is, why 1.1V? What does *that* very value have that's so special that Atmel selected it? And then [presumably] stopped short of tuning it to make it consistent between chip releases?
In short: why didn't they just go with the 1.25V bandgap reference alone, since it's already implemented in *all* of their chips?
Goodness, isn't my question clear?
EDIT: Okay, in a desperate effort to be 100% clear about what I'm asking.
Here's how a bandgap reference is implemented, as per Paul Brokaw, the inventor:
In that scheme, R2 *must* be tuned for optimal stability. So if you implement a bandgap voltage reference you *must* undergo that tuning process. Always. Here's an example, still per Paul Brokaw, of how to compensate the curvature:
Here's a candidate if you want a different voltage than the standard 1.23V, again from the author:
In this example (which, I presume, is what bktemp meant and how Atmel implemented their 1.1V reference) you still must tweak R2, but *also* R5/R6!
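Put differently (my notation, and I'm inferring the topology, since the output here is no longer the bare bandgap voltage): the output becomes the intrinsic ~1.23V scaled by the divider, something like

$$V_{\mathrm{out}} \;\approx\; \left(1 + \frac{R_5}{R_6}\right) V_{\mathrm{bg}}$$

(or the inverse arrangement if you scale downwards), so R5/R6 now has to be matched and trimmed to the same grade as the core cell itself.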
So why add R5/R6 if they didn't intend to tweak them? Why not simply stick with the basic concept, which *requires* tweaking only R2 (as in every case anyway)? It would have been simpler and more consistent. Besides, any schematic based on an Atmel µC would then only require *fixed* components (e.g. resistors) whose values are known as early as the schematic design stage. No software calibration required. No guessing. Nothing! KISS!
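For instance, with a guaranteed ~1.23V reference and a 0–5V signal to measure, I could put fixed values on the schematic from day one (hypothetical E96 picks, just to illustrate):

$$\frac{R_{\mathrm{bot}}}{R_{\mathrm{top}} + R_{\mathrm{bot}}} \;=\; \frac{9.76\ \mathrm{k\Omega}}{30.1\ \mathrm{k\Omega} + 9.76\ \mathrm{k\Omega}} \;\approx\; 0.245, \qquad 5\ \mathrm{V} \times 0.245 \approx 1.22\ \mathrm{V}$$

i.e. just under full scale, on every board, with no per-board fiddling.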
Also note that the compensation results in variations about two orders of magnitude smaller than Atmel's 20% (i.e. a few mV vs. 200mV)!
So for what *good* reasons did they decide to go for 1.1V, which they knew would screw up "everything", instead of the 1.23V? What do those 100mV less have that's so special that they had to go that way? How practical is that 1.1V?
It's all the same question.