A High-Performance Open Source Oscilloscope: development log & future ideas
2N3055:
A digital scope with a screen (as distinct from a digitizer that samples data inside a data acquisition system) needs to serve two functions:
- emulate the behaviour of a CRT oscilloscope on the screen
- function as a digitizer in the background, so that everything it captures is sampled properly and contains no mathematical nonsense.
The first point is well served by decimating to the screen with peak detect.
The second is well served by a large buffer that keeps the sample rate at its maximum most of the time, and by downsampling with filtering so that data sampled at a lower rate contains no aliasing artefacts.
In that case there must be an obvious warning that at this timebase you are working with limited bandwidth, and also a way to disable the filtering and fall back to simple decimation by discarding samples, because sometimes people expect raw data.
There is no simple, single solution for all.
For instance, RMS measurements should be performed on full-speed sample data, to take all the high-energy content into account.
Risetime needs the fastest edge information it can get, etc.
Current scopes make all kinds of compromises to suit their optimization targets...
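To make the two reduction paths concrete, here is a minimal C sketch (illustrative only; the function names are hypothetical and not from the project's codebase) contrasting peak-detect decimation for the display with filtered decimation for the stored record. A real data path would use a proper decimation filter rather than a boxcar average.
--- Code: ---
#include <stddef.h>
#include <stdint.h>

/* Display path: keep min/max of each bucket so narrow glitches stay visible. */
static void decimate_peak_detect(const int16_t *in, size_t n, size_t factor,
                                 int16_t *out_min, int16_t *out_max)
{
    for (size_t i = 0; i < n / factor; i++) {
        int16_t lo = in[i * factor], hi = lo;
        for (size_t j = 1; j < factor; j++) {
            int16_t s = in[i * factor + j];
            if (s < lo) lo = s;
            if (s > hi) hi = s;
        }
        out_min[i] = lo;
        out_max[i] = hi;
    }
}

/* Data path: average each bucket (a crude low-pass) before discarding samples,
 * so the stored record is free of aliasing artefacts. */
static void decimate_filtered(const int16_t *in, size_t n, size_t factor,
                              int16_t *out)
{
    for (size_t i = 0; i < n / factor; i++) {
        int32_t acc = 0;
        for (size_t j = 0; j < factor; j++)
            acc += in[i * factor + j];
        out[i] = (int16_t)(acc / (int32_t)factor);
    }
}
--- End code ---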
JohnG:
--- Quote from: tom66 on December 18, 2020, 08:18:15 am ---Makes sense - but, in that case, why not omit the input filter altogether and allow the user to cautiously use their instrument up to Nyquist? All filters risk eliminating signals that you intend to look at - part of operating a scope is understanding approximately what you expect to appear on the screen before you even probe it.
--- End quote ---
Because most of the time, for general purpose, you will want the antialias filter in place. The ability to bypass it might be nice, though.
Cheers,
John
Marco:
Simply rendering all samples with intensity shading is much better than decimation for showing the shape of a modulated signal.
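One way to read this (a sketch under assumed screen dimensions, not the project's actual renderer): accumulate every raw sample into a per-pixel hit count and let brightness follow the count, so a modulated envelope shows up as a density gradient rather than a single decimated trace.
--- Code: ---
#include <stddef.h>
#include <stdint.h>

#define SCREEN_W 800   /* assumed display size, purely illustrative */
#define SCREEN_H 480

static uint32_t hist[SCREEN_H][SCREEN_W];

/* Map each sample to a (column, row) bin; brightness is proportional to hits. */
static void accumulate(const int16_t *samples, size_t n, size_t samples_per_col)
{
    for (size_t i = 0; i < n; i++) {
        size_t x = i / samples_per_col;                               /* time  -> column */
        size_t y = (size_t)(((int32_t)samples[i] + 32768) * SCREEN_H / 65536); /* level -> row */
        if (x < SCREEN_W && y < SCREEN_H)
            hist[y][x]++;
    }
}
--- End code ---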
tom66:
That makes sense. So, as I understand it, the modes that need to be supported are:
Normal - Applies a downsampling filter* when the sampling rate is below the normal ADC rate; otherwise, does not filter
Decimate - Applies only decimation, i.e. dropping samples, when the sampling rate is below the normal ADC rate; otherwise identical to Normal
ERES - Averages consecutive samples to increase sample resolution up to 16 bits, depending on memory availability
Average - Averages consecutive waveforms to compute one output waveform; otherwise behaves like Normal mode in terms of decimation/downsampling
Peak detect - Records a min and max during decimation and stores these instead of individual samples. Halves available memory.
*Exact design of this filter to be worked out (quite possibly CIC given the simplicity?)
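For what it's worth, a hedged sketch of one possibility for that filter: a 3-stage CIC decimator. Everything here (stage count, names, scaling) is illustrative; in a real design the accumulator width must be at least input bits + stages × log2(R) to avoid overflow, and a compensation filter would normally follow.
--- Code: ---
#include <stddef.h>
#include <stdint.h>

#define CIC_STAGES 3

static void cic_decimate(const int16_t *in, size_t n, size_t r, int32_t *out)
{
    int64_t integ[CIC_STAGES]     = {0};  /* integrator state, full rate      */
    int64_t comb_prev[CIC_STAGES] = {0};  /* comb delay line, decimated rate  */
    size_t  out_idx = 0;

    for (size_t i = 0; i < n; i++) {
        /* Integrator section runs at the input sample rate. */
        int64_t x = in[i];
        for (int s = 0; s < CIC_STAGES; s++) {
            integ[s] += x;
            x = integ[s];
        }
        /* Comb section runs once per R input samples. */
        if ((i + 1) % r == 0) {
            int64_t y = x;
            for (int s = 0; s < CIC_STAGES; s++) {
                int64_t tmp = y;
                y -= comb_prev[s];
                comb_prev[s] = tmp;
            }
            /* DC gain is R^STAGES; scale back down to keep the output bounded. */
            int64_t gain = 1;
            for (int s = 0; s < CIC_STAGES; s++) gain *= (int64_t)r;
            out[out_idx++] = (int32_t)(y / gain);
        }
    }
}
--- End code ---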
Some consideration needs to be given to supporting 12-bit/14-bit modes, but they would require external filters unless aliasing is permitted in these modes.
Note that the downsampling would be needed once the timebase exceeds a total timespan of ~240ms, or about 20ms/div, on the current prototype with ~240Mpts memory available. With 4x the memory, downsampling is still needed once beyond 50ms/div. Hard to get around the tremendous amount of memory that just sampling at 1GSa/s requires.
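A quick back-of-the-envelope check on those numbers (assuming 12 horizontal divisions and a 1-2-5 timebase sequence):
--- Code: ---
#include <stdio.h>

int main(void)
{
    const double sample_rate = 1e9;    /* 1 GSa/s                    */
    const double mem_pts     = 240e6;  /* ~240 Mpts on the prototype */
    const double divisions   = 12.0;

    double span_s  = mem_pts / sample_rate;  /* 0.24 s total span         */
    double per_div = span_s / divisions;     /* 0.02 s, i.e. 20 ms/div    */

    /* With 4x memory the span is 0.96 s; 100 ms/div would need 1.2 s,
     * so 50 ms/div remains the last full-rate step in a 1-2-5 sequence. */
    printf("full-rate span: %.0f ms, i.e. %.0f ms/div\n",
           span_s * 1e3, per_div * 1e3);
    return 0;
}
--- End code ---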
In all modes, certain auto measurements can work without acquiring to memory and therefore can work at the full sample rate. These are:
- The frequency counter
- Vmax, Vmin, Vp-p
- Vrms
- Vavg
though bounding by cycles (e.g. Vrms over a cycle of a wave) does require memory acquisition and therefore would be affected by sampling modes.
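For reference, a minimal sketch of how those full-rate measurements could be accumulated without ever writing samples to acquisition memory. The names are hypothetical, and an FPGA implementation would use wide fixed-point accumulators rather than doubles.
--- Code: ---
#include <stddef.h>
#include <stdint.h>
#include <math.h>

typedef struct {
    int16_t  vmin, vmax;
    double   sum, sum_sq;
    uint64_t count;
} stream_meas_t;

static void meas_init(stream_meas_t *m)
{
    m->vmin = INT16_MAX; m->vmax = INT16_MIN;
    m->sum = 0.0; m->sum_sq = 0.0; m->count = 0;
}

/* Fold a block of raw samples into the running statistics. */
static void meas_update(stream_meas_t *m, const int16_t *samples, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        int16_t s = samples[i];
        if (s < m->vmin) m->vmin = s;
        if (s > m->vmax) m->vmax = s;
        m->sum    += s;
        m->sum_sq += (double)s * s;
    }
    m->count += n;
}

/* Readouts in ADC codes; scale by volts-per-code for display. */
static double meas_vavg(const stream_meas_t *m) { return m->sum / (double)m->count; }
static double meas_vrms(const stream_meas_t *m) { return sqrt(m->sum_sq / (double)m->count); }
--- End code ---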
Have I missed anything?
gf:
--- Quote from: tom66 on December 18, 2020, 05:31:53 pm ---Normal - Applies a downsampling filter* when sampling rate is under the normal ADC rate, otherwise, does not filter
Decimate - Applies only decimation, i.e. dropping samples when sampling rate is under the normal ADC rate, otherwise identical to Normal
ERES - Averages consecutive samples to increase sample resolution up to 16 bits depending on memory availability
Average - Averages consecutive waveforms to compute one output waveform; otherwise behaves like Normal mode in terms of decimation/downsampling
Peak detect - Records a min and max during decimation and stores these instead of individual samples. Halves available memory.
--- End quote ---
I don't see a principal difference between "Normal" and "ERES". Both decimate with prior filtering. The variables are the kind and order of the filter, and the number of bits (>= the number of ADC bits) per sample being stored (which has an impact on memory consumption).
I would consider "Average" not as a separate mode, but rather as an optional step in the acquisition pipeline, which can be combined with Normal, Decimate or ERES (it does not make sense in conjunction with peak detect, of course). Since averaging increases the dynamic range as well, one may also consider storing the data with more bits per sample than delivered by the previous stage in the acquisition pipeline.
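Something along these lines (a sketch of the idea only; the types, names and the 32-bit accumulator width are all illustrative, not a proposed implementation):
--- Code: ---
#include <stddef.h>
#include <stdint.h>

typedef enum { ACQ_NORMAL, ACQ_DECIMATE, ACQ_ERES } acq_mode_t;

typedef struct {
    acq_mode_t mode;
    int        average_enabled;  /* optional stage, off by default          */
    unsigned   average_count;    /* number of waveforms folded in so far    */
    int32_t   *accum;            /* wider than the per-sample storage so    */
    size_t     len;              /* the gained dynamic range is not lost    */
} acq_pipeline_t;

/* Fold one newly acquired record into the running average. */
static void pipeline_accumulate(acq_pipeline_t *p, const int16_t *record)
{
    if (!p->average_enabled)
        return;
    for (size_t i = 0; i < p->len; i++)
        p->accum[i] += record[i];
    p->average_count++;
}

/* Read back the averaged waveform, optionally keeping extra fractional bits. */
static void pipeline_read_average(const acq_pipeline_t *p, int32_t *out)
{
    for (size_t i = 0; i < p->len; i++)
        out[i] = p->accum[i] / (int32_t)p->average_count;
}
--- End code ---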
EDIT:
--- Quote ---Note that the downsampling would be needed once the timebase exceeds a total timespan of ~240ms, or about 20ms/div, on the current prototype with ~240Mpts memory available. With 4x the memory, downsampling is still needed once beyond 50ms/div. Hard to get around the tremendous amount of memory that just sampling at 1GSa/s requires.
--- End quote ---
There needs to be some default, but IMO the user should still be able to control the trade-offs between acquisition mode, sampling rate (of the stored samples), record size, and number of records that can be stored (within the feasible limits).