Average & eRes Acquisition Modes: These are the mystery modes. They both do something, and they do different things, but eRes in particular ('Eres' in the menu, a spelling I somehow cannot get used to) does not meet my expectations at all.
Common oddities for both average and eRes acquisition modes:
Memory depth is limited to 7/14k when either Average or eRes is selected, and the original setting is not restored when the acquisition mode is switched back to normal or peak detect. At the same time, fast acquisition becomes ineffective: we can still toggle between 'Slow' and 'Fast', but the waveform update rate is limited to <10 per second either way, and the timebase setting doesn't have any noticeable effect either.
The big question here is: WHY?
I’ll discuss this later, as I want to summarize my findings first.
Average mode does not reduce the bandwidth, but it significantly slows down the response to signal variations, as the first two screenshots demonstrate.
First (Normal_BW_50ns_2) shows the test signals in normal acquisition mode.
Ch. 2 is fed with a steady squarewave, 2Vpp, 10MHz.
Ch. 4 is a 500mVrms / 10MHz sinewave, 100% amplitude modulated with 1kHz sine.
The display is in dot mode.

Second (Avg_BW_50ns_2) shows the same signals in averaging mode, with the number of averages set to just 4.

What is immediately striking is the strong noise-reduction effect, which becomes even more evident in dot mode, as shown here.
Other than that, the squarewave isn't affected at all, whereas the sinewave on Ch. 4 has a random amplitude somewhere between zero and twice the unmodulated magnitude, depending on the moment of the screen capture. In other words, the display can't follow the modulation anymore. Instead, we get random amplitudes within the modulation range at a screen update rate of about 8.9 per second (though visual changes appear much slower).
Even though this test was done at only 10MHz, just believe me when I say that the bandwidth is not affected by the averaging: a steady-state 300MHz sinewave has the same amplitude on screen no matter whether we use normal, peak-detect or average acquisition mode.
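For what it's worth, the random-amplitude effect can be reproduced with a little pure-Python simulation (my own sketch of acquisition averaging, not Siglent's actual algorithm; only the signal parameters are taken from the test above). The carrier is coherent with the trigger, but the 1kHz modulation phase is random at each trigger event, so averaging a few acquisitions yields an arbitrary envelope:

```python
import math
import random

random.seed(1)
FS = 2e9        # sample rate
F_C = 10e6      # carrier frequency; the trigger is locked to this
N_AVG = 4       # number of averaged acquisitions
N_PTS = 700     # samples per acquisition (arbitrary choice)

def acquisition(mod_phase):
    # 100 % AM: envelope (1 + sin(mod_phase)), carrier coherent with trigger
    return [(1 + math.sin(mod_phase)) * math.sin(2 * math.pi * F_C * n / FS)
            for n in range(N_PTS)]

def averaged_pkpk():
    # the modulation phase is essentially random at each trigger event
    acqs = [acquisition(random.uniform(0, 2 * math.pi)) for _ in range(N_AVG)]
    avg = [sum(col) / N_AVG for col in zip(*acqs)]
    return max(avg) - min(avg)

# an unmodulated carrier would give pk-pk = 2; the averaged trace instead
# wanders anywhere between 0 and 4 from one update to the next:
print([round(averaged_pkpk(), 2) for _ in range(5)])
```

That is exactly the behaviour seen on screen: the averaged display cannot follow the modulation and just shows a random amplitude within the modulation range.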
The third attachment (Eres_BW_50ns_2) shows a very similar picture for eRes. The main difference is that the displayed variation of the modulated signal is considerably faster, even though a high resolution enhancement of 2.5 bits was set and the screen update rate was still only 8.9 per second.

In contrast to average mode, eRes does affect bandwidth and, as expected, the effect depends on the number of bits set for the resolution enhancement and on the sample rate. The maximum number of bits of resolution enhancement available depends on the timebase setting, according to the following list:
Timebase   Max. number of bits for resolution enhancement
<5ns       1.0
<20ns      1.5
<50ns      2.0
<500ns     2.5
≥500ns     3.0

For 2GSa/s, the bandwidth is affected by the resolution enhancement in the following way:
eRes bits   -1dB frequency   -3dB frequency
0.5         300 MHz          –
1.0         115 MHz          212 MHz
1.5          45 MHz          105 MHz
2.0          28 MHz           52 MHz
2.5          15 MHz           26 MHz
3.0           8 MHz           13.5 MHz
CAUTION: The table shown above is only valid for a sample rate of 2GSa/s! Because the memory is limited to just 7/14k, the sample rate drops very quickly at slower timebase settings; 2GSa/s can only be achieved in interleaved mode at timebases of 500ns/div and faster.
There is a linear relationship between sample rate and eRes bandwidth. Based on the table above, half the sample rate (1GSa/s) would mean half the bandwidth, i.e. just 4MHz for -1dB error at 3 bits of resolution enhancement. In the same scenario it would be only 800kHz for a sample rate of 200MSa/s.
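The measured corner frequencies fit a simple model rather well. If we assume (and this is purely my assumption) that eRes is a boxcar moving average over n = 2^(2·bits) samples, the -3dB corner of such a filter can be computed and compared with the table. A pure-Python sketch:

```python
import math

def boxcar_gain(f, fs, n):
    """Magnitude response of an n-point moving average at frequency f."""
    x = math.pi * f / fs
    if x == 0.0:
        return 1.0
    return abs(math.sin(n * x) / (n * math.sin(x)))

def f_minus3db(fs, bits):
    """-3dB corner, assuming eRes averages n = 2^(2*bits) samples (a guess)."""
    n = round(2 ** (2 * bits))
    lo, hi = 0.0, fs / (2 * n)      # corner lies below half the first null fs/n
    for _ in range(60):             # simple bisection on the mainlobe
        mid = (lo + hi) / 2
        if boxcar_gain(mid, fs, n) > 1 / math.sqrt(2):
            lo = mid
        else:
            hi = mid
    return lo

for bits in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"{bits} bits @ 2GSa/s: -3dB at {f_minus3db(2e9, bits) / 1e6:.1f} MHz")
# At 3.0 bits this yields ~13.8 MHz, close to the measured 13.5 MHz, and
# halving the sample rate halves the corner, matching the linear relationship:
print(f"{f_minus3db(1e9, 3.0) / 1e6:.1f} MHz @ 1GSa/s")
```

The predicted corners land within a few percent of the measured table above, which makes the moving-average hypothesis at least plausible.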
Resolution enhancement. Both Average and eRes use some form of averaging, so a resolution enhancement could be expected from both modes. To verify this, I've applied just a triangle wave, see attachment 'Resolution_TestSig'.

It is a 100kHz ramp with a peak-to-peak amplitude slightly larger than the screen. I then switched to the AMUT (Acquisition Mode Under Test), hit the Stop button and zoomed in vertically ten times (by turning the vertical gain down to 10mV/div). In dot mode (as most of the time), I thought I could just count the dots per division in order to determine the resolution.
As we already know, in normal mode we have 25 dots/division, so when zoomed in 10 times we expect to see 5 dots per 2 divisions; see attachment 'Norm_Zoom_x10'.

Yes – that’s exactly what we see. So we know there are 200 points per screen height in normal (and peak detect) acquisition modes.
Now let’s take a look at attachment ‘Avg_Zoom_x10’.

Please ignore the blue residual trace that I’ve already covered in a previous post.
Sadly, nothing has changed at all. Although 64 averages had been set, which could give a theoretical resolution enhancement of 6 bits *), we see absolutely no improvement. Quite obviously, average mode is just meant for noise reduction and the extra resolution is deliberately thrown away.
But now let's take a look at eRes mode, which already carries the words 'enhanced' and 'resolution' in its name – surely we would see the promised resolution increase of 2.5 bits that I set for this test, wouldn't we? Unfortunately, attachment 'Eres_Zoom_x10' only shows some residual trace, but not the slightest improvement in resolution.
Discussion. There are a number of oddities with these two modes, and they don't quite meet expectations.
I understand that average mode simply averages the specified number of subsequent acquisitions. I don't know what the idea behind eRes is, but it appears obvious to me that it is some form of averaging within a single acquisition, which in turn acts as a lowpass filter. If I were to implement such an acquisition mode, I would make it a moving average over at least 64 subsequent samples (for 3 bits of resolution enhancement).
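Such a moving average is trivial to implement; here is a minimal sketch of what I have in mind (my speculation, not the actual firmware – the window size n = 2^(2·bits) follows from the N² rule in EDIT3 at the end of this post):

```python
def eres(samples, bits):
    """Hypothetical eRes: boxcar moving average over n = 2**(2*bits) samples.
    A guess at the kind of filter involved, not Siglent's actual code."""
    n = round(2 ** (2 * bits))
    out, acc = [], 0
    for i, s in enumerate(samples):
        acc += s
        if i >= n:
            acc -= samples[i - n]   # slide the window by one sample
            out.append(acc / n)     # the division produces the sub-LSB steps
    return out

print(eres([100] * 20, 1.0)[:3])    # constant input -> [100.0, 100.0, 100.0]
```

One accumulator and one subtraction per sample; nothing about this would require giving up memory depth or hardware acceleration.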
Question #1: Why is memory limited to just 7/14kpts? In average mode, if it's a segment-wise averaging, we just need an accumulator for each sample, and since the maximum number of averages is 1024 (who the hell will ever use that many?), we need 18 bits per sample. In practice this would be 24 or even 32 bits. Okay, this would eat up 3 to 4 times the original sample memory, so the record length would decrease to 1/5 at worst – but that is still several Mpts.
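Spelling out that arithmetic (the 8-bit sample width and the 28Mpts normal-mode depth are my assumptions, based on the figures elsewhere in this post):

```python
import math

SAMPLE_BITS = 8                 # native ADC resolution (assumed)
MAX_AVERAGES = 1024
NORMAL_DEPTH = 28_000_000       # normal-mode record length in points (assumed)

# worst-case accumulator width: sample width plus log2 of the average count
acc_bits = SAMPLE_BITS + math.ceil(math.log2(MAX_AVERAGES))
print(acc_bits)                 # -> 18 bits per accumulator

# padded to a 32-bit accumulator plus the 8-bit raw sample = 5 bytes/point
print(NORMAL_DEPTH // 5)        # -> 5600000, i.e. still several Mpts
```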
If average mode uses a moving average over several acquisitions – well, for an average over 1024 acquisitions this would divide our memory by that number, but we'd still get 14k/28k, and the maximum record length should increase accordingly when we select a lower number of averages.
In eRes mode, we are working within the normal acquisition buffer, so virtually no additional memory should be necessary at all.
All in all, it is no fun to work with just 7/14k, as we run into aliasing problems all the time and almost lose the ability to zoom into a waveform – in short, we are reminded of all the barely usable scopes of the past (and even of today) where manufacturers held the view that users should generally make do with tiny sample memories.
Question #2: Why is no hardware acceleration available? Well, there are lots of calculations on lots of data, and if the hardware isn't designed to support these operations right from the start, it might indeed not be possible to add that as an afterthought. And yes, I think we really can live without fast waveform update rates in these special cases where we need extremely low noise and/or high resolution. It's just the combination of no hardware support and no memory that leaves the uneasy feeling of a firmware leftover from some ancient entry-level scope that had neither in the first place.
Question #3: Why does eRes not enhance the resolution? As stated before, average mode could have shown a resolution enhancement already, but okay, since it doesn't promise anything of the sort, we have to accept that it is just a means of noise reduction.
eRes, on the other hand, already carries the promise in its name, and the corresponding parameter is the resolution enhancement in bits. What is the use of an acquisition mode that takes away hardware support (hence speed), takes away all the memory, takes away bandwidth, with no benefit other than noise reduction for non-periodic waveforms? A DSP lowpass filter with configurable roll-off for the input channels would do a better job, as we would immediately know the bandwidth limit instead of having to calculate it from the sample rate and the number of bits of alleged resolution enhancement.
I'm sure the extra resolution is originally there, but somehow gets lost on its way to the screen.
Question #4: Why does memory depth stay at 7/14k when we switch back to normal mode? If (sadly) the memory is limited to 7/14k for the average and eRes modes, it would be very much appreciated if it returned to its original value when we switch back to normal or peak detect. As it is now, we might have set the memory to, say, 28Mpts and then switch to average mode for a short time; when we go back, the memory size stays at 14k and we need to enter the memory size menu once again to wind it back up, which is rather annoying.
Conclusion: As it is now, average mode can be useful when we want to remove noise or calm down an unstable signal. It would be much more useful if a little more memory were available, though.
As far as eRes mode is concerned, I'm sorry to say that, as implemented right now, I cannot think of many useful applications for it other than noise reduction for non-periodic waveforms, where the sample-rate-dependent roll-off has to be estimated separately for each combination of 'resolution enhancement' and timebase setting – the latter once again due to the lack of memory.
EDIT: Clarification and additional info on eRes bandwidth limiting.
EDIT2: Typo correction, 800kHz instead of 500kHz for 200MSa/s and 3 bits eRes.
EDIT3:
*) In theory, even two averages would double the resolution – but only under very special conditions: with a ramp signal of well-defined amplitude, its frequency synchronous to the sample rate, superimposed on the input signal.
In practice, we only have random noise available, so in order to get a sufficient statistical probability that the averaging result is actually close to the true intermediate value, a resolution enhancement by a factor of N requires N² averages for boxcar and moving-average filters – and possibly even more for FIR filters optimized for a particular response characteristic.
We also need this many averages to raise the signal-to-noise ratio to a point where the additional resolution can actually be exploited.
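The dither argument can be checked with a quick pure-Python experiment (levels in LSB units; the 0.3 LSB offset and the 1 LSB noise level are arbitrary illustrative choices of mine):

```python
import random
import statistics

random.seed(0)
TRUE_LEVEL = 100.3              # input level in units of one 8-bit LSB

def adc(noise_rms):
    # ideal rounding quantizer with additive Gaussian noise ahead of it
    return round(TRUE_LEVEL + random.gauss(0, noise_rms))

def averaged(n, noise_rms):
    return statistics.fmean(adc(noise_rms) for _ in range(n))

print(averaged(64, 0.0))        # -> 100.0: without noise, the 0.3 LSB is lost
print(averaged(64, 1.0))        # ~100.3: with ~1 LSB of dither, averaging works
```

Without noise, every sample rounds to the same code, so no amount of averaging recovers the fractional level – which is exactly why N averages only buy a reliable factor-√N enhancement on real, noisy signals.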