Author Topic: Compensating bandwidth with software  (Read 6267 times)


Offline jtruc34 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 62
  • Country: ch
Compensating bandwidth with software
« on: April 08, 2019, 09:17:40 pm »
I don't know if oscilloscopes already do this or not.

I'm assuming that the attenuation is completely predictable and frequency dependent.

Why not take the DFT of the signal, multiply it by a known function of frequency (which would be the inverse of the attenuation function), and then take the inverse DFT and display the signal?

As long as the scope records a usable signal, this could completely compensate for the problem, if the situation is as simple as it is in my head. But it probably isn't, so I'm asking why.
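A minimal numpy sketch of the idea as literally stated, assuming the scope's complex response H(f) is known exactly; the single-pole model below is only an illustrative assumption, not any particular instrument's response:

import numpy as np

def compensate(samples, fs, response):
    """Naive frequency-domain compensation: FFT, divide by H(f), inverse FFT.

    response is a callable returning the (assumed known) complex gain H(f) of
    the scope's signal path.  The boost is clamped where |H| is small so noise
    and quantization error are not amplified without bound.
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    h = response(freqs)
    inv = np.zeros_like(h)
    usable = np.abs(h) > 0.1            # limit the correction to ~20 dB of gain
    inv[usable] = 1.0 / h[usable]
    return np.fft.irfft(spectrum * inv, n=len(samples))

# Illustrative model only: a single-pole 100 MHz roll-off front end.
fs, f3db = 1e9, 100e6
one_pole = lambda f: 1.0 / (1.0 + 1j * f / f3db)

t = np.arange(2048) / fs
captured = np.where(t > 200e-9, 1.0, 0.0)     # stand-in for an acquired waveform
corrected = compensate(captured, fs, one_pole)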
 

Offline jtruc34 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 62
  • Country: ch
Re: Compensating bandwidth with software
« Reply #1 on: April 08, 2019, 10:05:46 pm »
Yes, I thought of the resolution issue; it is obviously a problem. But is that something that oscilloscopes do?
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Compensating bandwidth with software
« Reply #2 on: April 08, 2019, 10:15:31 pm »
Some of the higher end oscilloscopes do have DSPs in the signal chains to improve flatness. Keysight has some information about this on their website.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #3 on: April 09, 2019, 03:39:53 am »
High-end DSOs have done equalization digitally for a while, but less expensive DSOs have neither the power nor the digital performance budget for it except at low acquisition rates.
 

Offline loulou31

  • Newbie
  • Posts: 4
  • Country: fr
Re: Compensating bandwidth with software
« Reply #4 on: April 10, 2019, 04:26:50 pm »
Yes, you can compensate a little for the response of the anti-aliasing front-end filter, but you cannot increase the bandwidth in software because you cannot recreate information lost to the sampling limitation. To see it in the time domain: if you want to measure a rise time and you do not have enough samples, you cannot know the real shape of the edge if you only have one sample while the signal is low and then one when it is high.
Jean-Louis
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #5 on: April 10, 2019, 06:38:09 pm »
Yes, you can compensate a little for the response of the anti-aliasing front-end filter

Show me an oscilloscope with an anti-aliasing front end filter and I will show you a piece of junk.
 

Offline jtruc34 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 62
  • Country: ch
Re: Compensating bandwidth with software
« Reply #6 on: April 10, 2019, 07:39:03 pm »
I'm a bit surprised to hear that cheaper oscilloscopes don't do that, because I thought they already did it in the opposite direction, to limit the bandwidth on cheaper models. I thought so because I saw that one can hack the cheapest ones at the software level and increase the bandwidth.

Isn't that what they do?

And yes, I know it is simpler to attenuate than to amplify high frequencies because you don't lose resolution.
 

Offline jtruc34 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 62
  • Country: ch
Re: Compensating bandwidth with software
« Reply #7 on: April 10, 2019, 08:38:12 pm »
Ok, that makes more sense.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: Compensating bandwidth with software
« Reply #8 on: April 10, 2019, 10:54:03 pm »
I'm a bit surprised to hear that cheaper oscilloscopes don't do that, because I thought they already did it in the opposite direction, to limit the bandwidth on cheaper models. I thought so because I saw that one can hack the cheapest ones at the software level and increase the bandwidth.

Isn't that what they do?

And yes, I know it is simpler to attenuate than to amplify high frequencies because you don't lose resolution.
The computational resources required to process an accurate correction filter are large and introduce other limitations; such filters are substantially more complex than the anti-aliasing or bandwidth-limiting filters.

https://download.tek.com/document/55W_17589_2.pdf
http://cdn.teledynelecroy.com/files/whitepapers/technologies-for-very-high-bandwidth-real-time-oscilloscopes.pdf
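As a rough back-of-the-envelope of why those correction filters are costly (the tap count and rate below are illustrative assumptions, not figures from the linked papers):

# Cost of running a correction FIR at the full ADC rate (illustrative numbers).
taps = 64                  # a fairly modest correction filter
fs = 1e9                   # 1 GSa/s ADC
macs_per_second = taps * fs
print(f"{macs_per_second:.1e} multiply-accumulates per second, per channel")
# -> 6.4e+10 MAC/s per channel, which is why this ends up in an ASIC/FPGA on
#    high-end scopes and is simply left out of cheaper ones.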

Yes, you can compensate a little for the response of the anti-aliasing front-end filter

Show me an oscilloscope with an anti-aliasing front end filter and I will show you a piece of junk.
Feeling like being a pedant? Well, they are called alias filters:
https://literature.cdn.keysight.com/litweb/pdf/5992-2259EN.pdf
Or do you want to argue about what constitutes the "front end" ?
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #9 on: April 10, 2019, 11:03:39 pm »
I'm a bit surprised to hear that cheaper oscilloscopes don't do that, because I thought they already did it in the opposite direction, to limit the bandwidth on cheaper models. I thought so because I saw that one can hack the cheapest ones at the software level and increase the bandwidth.

Isn't that what they do?

Oscilloscopes typically have switchable fixed-frequency low-pass filters.  20 and 100 MHz are the most common, but lower values like 10 and 5 MHz will be included to support higher vertical sensitivities.  Rigol's 1000Z series included 50 and 70 MHz sections which, when combined, produced 20 MHz, and a 70 MHz grade to sell.

These are *not* anti-aliasing filters, which would be pretty stupid in an oscilloscope where input bandwidth must remain constant at different sample rates.  Even when aliasing, the input bandwidth remains the same; bandwidth has nothing to do with sample rate.

Quote
And yes, I know it is simpler to attenuate than to amplify high frequencies because you don't lose resolution.

It is actually easier to attenuate or amplify *low* frequencies which Tektronix got a patent on when they developed their 1 GHz analog oscilloscope.  High frequency equalization runs into physical construction limits.

The really cheap ones (<$1k) use a software-controlled PIN diode to control capacitance in the anti-aliasing filter, thus controlling the BW.
No digital magic, just plain old PIN diodes.

I have never seen PIN diodes used for this in an oscilloscope but switching diodes, Schottky diodes, relays, and transistors in at least two configurations work well.  The signal levels are just not amenable to PIN diode use.  The current switched diode bridge is probably the highest performance configuration but the Rigol 1000Z series gets by with bipolar transistors in 2-quadrant operation.
 

Offline JDubU

  • Frequent Contributor
  • **
  • Posts: 441
  • Country: us
Re: Compensating bandwidth with software
« Reply #10 on: April 11, 2019, 02:08:02 am »
The Rigol DS2000 series scopes use a diff amp IC (LMH6518) on each channel input that has both digitally controlled gain and bandwidth.

Datasheet:  http://www.ti.com/lit/ds/symlink/lmh6518.pdf
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #11 on: April 11, 2019, 04:27:05 pm »
Show me an oscilloscope with an anti-aliasing front end filter and I will show you a piece of junk.

Feeling like being a pedant? Well, they are called alias filters:
https://literature.cdn.keysight.com/litweb/pdf/5992-2259EN.pdf
Or do you want to argue about what constitutes the "front end" ?

Keysight's marketing may be calling them that but what are being described are not anti-aliasing filters.

This provides sufficient attenuation above the Nyquist frequency to prevent significant alias products from appearing when running at maximum ADC rates.

Not usually true and absolutely not true for non-real time sampling DSOs (1) which includes most of them.  A lot more attenuation at Nyquist would be necessary than the 8 and 9 dB they show in their example.  Traditional Gaussian roll-off oscilloscopes have a noise bandwidth of 1.6 times their -3 dB bandwidth which is a very reasonable compromise.

If you want to see aliasing, look at a fast edge and watch the sin(x)/x reconstruction wobble because the mixing products between the edge and sampling clock are higher than the Nyquist frequency.  Nobody is going to pay for an oscilloscope with such an absurdly low bandwidth to avoid that.

And then they admit these are not anti-aliasing filters:

However, most digitizers and oscilloscopes support a lower ADC sample rate by using decimation methods (discarding sample points to achieve a lower effective sample rate), which results in a lower Nyquist frequency FN. To prevent aliasing products, the user must ensure there is no signal energy above the effective Nyquist frequency. This is true for both digitizers and oscilloscopes.

So which is it?  Do they "provide sufficient attenuation" or must "the user must ensure there is no signal energy above the effective Nyquist frequency"?

(1) "Real time" in this case refers to DSOs and digitizers here where sample rate is independent of the number of channels in operation and as high as possible simply because that is what Tektronix used to call l them to distinguish them from more general purpose instruments.  These instruments are designed to have the highest possible sample rate for single shot applications at any cost and the exclusion of other features so they are somewhat specialized.  These replaced "transient digitizers" which were even more specialized.  Typical applications include high energy physics and they are very export controlled because of their nuclear weapon applications.

 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #12 on: April 11, 2019, 04:54:46 pm »
The Rigol DS2000 series scopes use a diff amp IC (LMH6518) on each channel input that has both digitally controlled gain and bandwidth.

Datasheet:  http://www.ti.com/lit/ds/symlink/lmh6518.pdf

National (now TI) had these "instant oscilloscope" ASICs for a while (1) and I suspect they came out of work they did for Tektronix.  Steve Roach of Tektronix wrote some things which make me suspect he was either involved or witnessed their development.

I think you can get a pretty good idea of their internal structure by studying the later Tektronix custom ICs which are documented.  The difference now is that National was able to use a fast complementary bipolar process instead of the older NPN only process.  This is also what made integrated current feedback operational amplifiers possible.

(1) LMH is a National prefix and I remember some of these or similar parts in their later databooks.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Compensating bandwidth with software
« Reply #13 on: April 12, 2019, 12:26:51 pm »
However, most digitizers and oscilloscopes support a lower ADC sample rate by using decimation methods (discarding sample points to achieve a lower effective sample rate), which results in a lower Nyquist frequency FN. To prevent aliasing products, the user must ensure there is no signal energy above the effective Nyquist frequency. This is true for both digitizers and oscilloscopes.

So which is it?  Do they "provide sufficient attenuation" or must "the user must ensure there is no signal energy above the effective Nyquist frequency"?
Both. At some point a DSO has to lower the samplerate to get the required memory depth. In 'sample mode' you'll get aliasing. That is a fact of life (and the reason peak-detect is so handy to have). At the maximum samplerate the anti-aliasing filter will keep most of the nasty stuff out. The nice thing about the anti-aliasing filter is that the response wraps around just like the input signal frequencies over Nyquist. Add the bandwidth limit of the source combined with the probing solution to that and you'll see aliasing isn't a problem in most practical situations.
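A tiny sketch of the wrap-around being described, i.e. how an input frequency folds about multiples of fs/2 once the scope has decimated (the rates below are made-up examples):

import numpy as np

def folded_frequency(f, fs):
    """Apparent frequency after sampling at fs: fold f around multiples of fs/2."""
    f = np.mod(f, fs)                       # alias into [0, fs)
    return np.where(f > fs / 2, fs - f, f)

# Made-up example: a scope decimated down to 100 MSa/s in 'sample mode'.
fs_decimated = 100e6
for f_in in (30e6, 70e6, 130e6, 430e6):
    f_out = float(folded_frequency(f_in, fs_decimated))
    print(f"{f_in / 1e6:6.1f} MHz in -> {f_out / 1e6:6.1f} MHz on screen")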
« Last Edit: April 12, 2019, 04:38:36 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #14 on: April 12, 2019, 08:32:59 pm »
However, most digitizers and oscilloscopes support a lower ADC sample rate by using decimation methods (discarding sample points to achieve a lower effective sample rate), which results in a lower Nyquist frequency FN. To prevent aliasing products, the user must ensure there is no signal energy above the effective Nyquist frequency. This is true for both digitizers and oscilloscopes.

So which is it?  Do they "provide sufficient attenuation" or must "the user must ensure there is no signal energy above the effective Nyquist frequency"?

Both. At some point a DSO has to lower the samplerate to get the required memory depth. In 'sample mode' you'll get aliasing. That is a fact of life (and the reason peak-detect is so handy to have). At the maximum samplerate the anti-aliasing filter will keep most of the nasty stuff out. The nice thing about the anti-aliasing filter is that the response wraps around just like the input signal frequencies over Nyquist. Add the bandwidth limit of the source combined with the probing solution to that and you'll see aliasing isn't a problem in most practical situations.

Aliasing was not a problem before so what changed?  What is it about modern DSOs which requires anti-aliasing filtering (other than the lack of ETS)?

Oscilloscopes specifically designed with maximally flat response go back to at least 1973, when it was an option on the Tektronix 7704A mainframe.  At higher frequencies, it is more difficult (expensive or impossible) to maintain a Gaussian response so they design for "maximally flat" instead, but that does not make the response a result of anti-alias filtering. (1)

One of the Tektronix application notes makes a similar argument about averaging the *aliased* results from a fast edge to get results similar to, but not as accurate as, what ETS would have provided in the first place without any aliasing.  This is more marketing at work.

Nothing prevents implementing anti-alias filtering during decimation except the costs.

(1) There are some old oscilloscopes which arguably get the best of both.  Bandwidth limits usually rely on a single pole Bessel roll-off but some old instruments implemented much more aggressive bandwidth limiters.  The 11A33 for example implements switchable 20 and 100 MHz 4 pole Bessel filters for 24dB/octave rolloff instead of 6dB/octave.  Some modern DSOs do the same thing through DSP but after digitizing.  This is not always a boon; 20 MHz noise measurements made with 1 pole and 4 pole filters are not the same.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Compensating bandwidth with software
« Reply #15 on: April 12, 2019, 08:49:33 pm »
However, most digitizers and oscilloscopes support a lower ADC sample rate by using decimation methods (discarding sample points to achieve a lower effective sample rate), which results in a lower Nyquist frequency FN. To prevent aliasing products, the user must ensure there is no signal energy above the effective Nyquist frequency. This is true for both digitizers and oscilloscopes.

So which is it?  Do they "provide sufficient attenuation" or must "the user must ensure there is no signal energy above the effective Nyquist frequency"?
Both. At some point a DSO has to lower the samplerate to get the required memory depth. In 'sample mode' you'll get aliasing. That is a fact of life (and the reason peak-detect is so handy to have). At the maximum samplerate the anti-aliasing filter will keep most of the nasty stuff out. The nice thing about the anti-aliasing filter is that the response wraps around just like the input signal frequencies over Nyquist. Add the bandwidth limit of the source combined with the probing solution to that and you'll see aliasing isn't a problem in most practical situations.

Aliasing was not a problem before so what changed?  What is it about modern DSOs which requires anti-aliasing filtering (other than the lack of ETS)?
In every digital sampling system you'll need anti-aliasing filters. The older DSOs either didn't care (Tek2230) or had such a high sampling rate they could use a Gaussian roll-off frequency response and still had enough headroom at their maximum samplerate (TDS200 series). However if you want to get the maximum bandwidth versus samplerate then the golden spot is to have the bandwidth at fs/2.5 where sin x/x reconstruction still works well. That gives enough headroom to have a reasonable anti-aliasing filter without too much 'funny' side effects. The ENOB on most scopes is likely 7 bits (or even less) so 42dB of attenuation at fs/1.7 (*) is enough.

* Say you have a 1 GS/s scope. fs/2.5 = 400 MHz bandwidth. FNyquist = 500 MHz. Because the anti-aliasing filter's response wraps around the Nyquist frequency you'll need at least 42 dB of attenuation at 600 MHz (in the analog domain). 600 MHz ≈ fs/1.7
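A quick numeric check of those figures, assuming an n-pole filter with the usual asymptotic roll-off of about 6n dB per octave beyond its corner (just the arithmetic, not a real filter design):

import math

f_corner = 400e6        # fs/2.5 for a 1 GSa/s scope
f_reject = 600e6        # folds back onto 400 MHz across the 500 MHz Nyquist point
needed_db = 42          # ~7 ENOB x 6 dB

octaves = math.log2(f_reject / f_corner)           # ~0.58 octave of transition band
poles = math.ceil(needed_db / (6 * octaves))
print(f"{octaves:.2f} octaves of transition -> roughly {poles} poles needed")
# -> about 12 poles, which hints at why such a filter is hard to build with a
#    usable step response (as discussed later in the thread).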
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #16 on: April 12, 2019, 10:10:34 pm »
In every digital sampling system you'll need anti-aliasing filters.

That is completely false and doubly so because there is nothing special about digital versus analog sampling systems.

Sample rate has nothing to do with -3dB bandwidth.  Some old examples of this include both analog and digital sampling oscilloscopes, sampling frequency counters, and especially sampling RF voltmeters which make accurate RMS measurements into the GHz range using sampling rates of 10s of kHz.  These instruments have zero anti-aliasing and input bandwidth is determined by sampling gate width, any circuits before the sampling gate (usually none), and ultimately physical construction.

Digital storage oscilloscopes can make the same measurements these instruments do in the same way within the limits of their input circuits irrespective of their sampling rate.  (1)  This reflects the chief difference between "sampling" and normal instruments; sampling instruments have high bandwidths because they move their sampling process to a point earlier in the signal chain.

Quote
The older DSOs either didn't care (Tek2230) or had such a high sampling rate they could use a Gaussian roll-off frequency response and still had enough headroom at their maximum samplerate (TDS200 series).

Did not care pretty much sums it up.  Anti-aliasing filters are not required for oscilloscope applications and actually conflict with them.  These old instruments did not include anti-aliasing filters because they were not needed and would detract from their performance and intended applications.  Instead of adding anti-aliasing, functions like peak detection, histogram acquisition, and huge record lengths (LeCroy) were added.

Quote
However if you want to get the maximum bandwidth versus samplerate then the golden spot is to have the bandwidth at fs/2.5 where sin x/x reconstruction still works well. That gives enough headroom to have a reasonable anti-aliasing filter without too much 'funny' side effects. The ENOB on most scopes is likely 7 bits (or even less) so 42dB of attenuation at fs/1.7 (*) is enough.

Fsample/2.5 is an arbitrary criterion.  At various times it has been Fsample/4, Fsample/5, or Fsample/10.  Fsample/2.5 does *not* give nearly enough headroom for any anti-aliasing filter with acceptable transient response because the transition band between the -3dB bandwidth and Nyquist frequency is just too small even for only 8-bit resolution.  It can work for audio but not for a time domain instrument and even audio applications now use much higher oversampling ratios with large transition bands.  (2)

Quote
* Say you have a 1 GS/s scope. fs/2.5 = 400 MHz bandwidth. FNyquist = 500 MHz. Because the anti-aliasing filter's response wraps around the Nyquist frequency you'll need at least 42 dB of attenuation at 600 MHz (in the analog domain). 600 MHz ≈ fs/1.7

I wish you good luck making that filter.  The practically unknown example I gave was 24dB/octave and that was only at 20 and 100 MHz.  There is a reason elliptical filters typically used for anti-aliasing are absolutely *not* used in time domain applications.

The original question involved compensating the bandwidth in software and high end DSOs, especially high bandwidth ones, do exactly this.  I think Keysight has discussed this issue in depth.  The frequency and phase response is tested during calibration and the correction factors are used to produce what I assume is a FIR filter to correct the response.
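A hedged sketch of that idea, building a correction FIR from measured calibration data by simple frequency sampling; the names cal_freqs and cal_response are hypothetical, and real instruments use far more careful designs than this:

import numpy as np

def correction_fir(freqs, measured_h, fs, ntaps=63, max_boost=4.0):
    """Turn a measured complex response H(f) into correction filter taps.

    freqs, measured_h -- calibration points: frequencies and measured complex gain
                         (assumed nonzero over the band)
    fs                -- ADC sample rate
    A crude frequency-sampling design: interpolate 1/H onto a grid, limit the
    boost, take an inverse FFT and window it down to ntaps coefficients.
    """
    measured_h = np.asarray(measured_h, dtype=complex)
    grid = np.linspace(0, fs / 2, 512)
    h = np.interp(grid, freqs, measured_h.real) + 1j * np.interp(grid, freqs, measured_h.imag)
    target = 1.0 / h
    mag = np.abs(target)
    target = np.where(mag > max_boost, max_boost * target / mag, target)
    impulse = np.fft.irfft(target)                    # prototype impulse response
    impulse = np.roll(impulse, ntaps // 2)[:ntaps]    # centre and truncate it
    return impulse * np.hamming(ntaps)

# taps = correction_fir(cal_freqs, cal_response, fs=1e9)       # hypothetical data
# corrected = np.convolve(raw_samples, taps, mode="same")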

But no software could correct for the response of the anti-aliasing filter you are suggesting.  The poor transient response would overload the digitizer on peaks and the response would shift too much with time and temperature.  Even DSOs with no extra filtering have this latter problem, which is why bandwidth and rise time specifications are qualified to a specific temperature range.  True anti-alias filters would make this much worse.

(1) DSOs which operate on their processed display record like Rigol cannot make these measurements which is actually a step back from analog oscilloscopes which can.  Unless special care is taken, the processing to produce the display record destroys the histogram of the original signal.

(2) Many disagree that this works or ever worked for audio but the question has become irrelevant since this is not done anymore.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: Compensating bandwidth with software
« Reply #17 on: April 13, 2019, 08:41:31 am »
However, most digitizers and oscilloscopes support a lower ADC sample rate by using decimation methods (discarding sample points to achieve a lower effective sample rate), which results in a lower Nyquist frequency FN. To prevent aliasing products, the user must ensure there is no signal energy above the effective Nyquist frequency. This is true for both digitizers and oscilloscopes.

So which is it?  Do they "provide sufficient attenuation" or must "the user must ensure there is no signal energy above the effective Nyquist frequency"?

(1) "Real time" in this case refers to DSOs and digitizers here where sample rate is independent of the number of channels in operation and as high as possible simply because that is what Tektronix used to call l them to distinguish them from more general purpose instruments.  These instruments are designed to have the highest possible sample rate for single shot applications at any cost and the exclusion of other features so they are somewhat specialized.  These replaced "transient digitizers" which were even more specialized.  Typical applications include high energy physics and they are very export controlled because of their nuclear weapon applications.
You would like to redefine "real time" too? Have fun with that. There sure are a lot of people on this board who "know" how scopes work and will argue all day long about things they actually don't understand and it seems you're determined to join in.

There most certainly is an anti aliasing filter at the front end, before the ADC. Some more details going at length to discuss the relative frequencies and attenuation here:
http://www.measurement.net.au/resources/4D/1101/Other/Waveform%20Ghost%20Busters.pdf

That's the front-end anti-aliasing taken care of, because the ADCs on most modern scopes run at a constant rate. That constant-rate data is then passed through digital processing stages for anti-aliasing and reduction for lower sample storage rates as needed. There are scopes with poor aliasing characteristics, and those that don't appear to exhibit it at all. It just comes down to designing appropriate filters in the right places.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #18 on: April 13, 2019, 07:12:20 pm »
You would like to redefine "real time" too? Have fun with that. There sure are a lot of people on this board who "know" how scopes work and will argue all day long about things they actually don't understand and it seems you're determined to join in.

I was very careful in my post to point out what I meant by "real time".  Tektronix used the term to distinguish DSOs with high and invariant sample rates.  These DSOs took the place of the earlier "transient digitizers".

Quote
There most certainly is an anti aliasing filter at the front end, before the ADC. Some more details going at length to discuss the relative frequencies and attenuation here:
http://www.measurement.net.au/resources/4D/1101/Other/Waveform%20Ghost%20Busters.pdf

That's the front-end anti-aliasing taken care of, because the ADCs on most modern scopes run at a constant rate. That constant-rate data is then passed through digital processing stages for anti-aliasing and reduction for lower sample storage rates as needed. There are scopes with poor aliasing characteristics, and those that don't appear to exhibit it at all. It just comes down to designing appropriate filters in the right places.

The ADCs on most *old* DSOs run at a constant rate also.  Variable sampling rates quickly cause unacceptable performance compromises making decimation a superior alternative since it is trivial to do.  (1) This is especially the case when decimation features like peak detection, high resolution, or histograms are included because the sample rate should be as high as possible and record length is irrelevant for these.

Following the ADC with a real time DSP stage for filtering (including bandwidth enhancement) and decimation beyond trivial peak detection or boxcar averaging for high resolution mode is only found on high end instruments because the cost of operating at the real time sample rate is considerable. (4)

The document you linked gives three examples.  The first one with a Gaussian response close to the Nyquist frequency is by far the most common.  The second example where a Gaussian response is enforced further from the Nyquist frequency is found in DSOs designed for market segmentation and has nothing to do with anti-aliasing although this may represent a real cost because of extra calibration time or poorer yields.  The third example exists because at the highest bandwidths, a Gaussian response is just not economically feasible and if you have to have a non-Gaussian response, it might as well be maximally flat and bandwidth enhancement through DSP to correct the input filter response is not such a large part of the cost when you have to include it anyway.  Calibrating the transient response in the analog domain was difficult enough at 1 GHz to yield new patents; at higher bandwidths, there are better ways. (2)

Even if you want to consider that last case to be an anti-alias filter, it does nothing for the effects from aliasing created by mixing between the signal and sample clock in the ADC itself.  This is a considerable source of error which can only be removed using higher sampling rates and ADCs with better linearity.  HP/Agilent/Keysight have poked fun at Tektronix over this very issue and, given the choice, they stressed higher sample rates and better linearity over a useless anti-alias filter. (3)

(1) Latch the output of the ADC at a fraction of the sample clock.  Things get a little weird with CCD based digitizers but they latch the output of the sampler at a fraction of the sampling clock which is the same thing.

(2) I wonder though which early 2 GHz or higher DSOs did it.  I know the calibration procedure was arduous enough that it had to be automated.  I do not think the early DPO style DSOs could have done it in DSP, which leaves digitally controlled calibration in the analog domain, something that goes back at least to the analog oscilloscopes of 1984 which did this.  Low waveform acquisition rate DSOs could have done it through DSP at any time but which did?

(3) This is not the document I was thinking of but it is closely related.  Keysight has changed their name so much that a lot of their old documents have become unavailable.

(4) Last year I studied exactly what was possible on commonly available hardware and concluded that DPO with short record lengths is feasible and still the way to go with sample rate and record length limited by the FPGA's internal memory; external memory is just too slow.  R&S is doing something like this in a custom ASIC but I do not know if that includes bandwidth enhancement or how they are handling the memory performance requirements.
« Last Edit: April 13, 2019, 07:18:47 pm by David Hess »
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #19 on: April 13, 2019, 10:38:39 pm »
Something else which I forgot to include in my post is that we cannot rely on the published maximally flat response curves to divine the passband response before the digitizer.  On these high performance DSOs, the passband response is corrected through DSP to produce the result shown which is exactly what jtruc34 originally asked about.

Lower frequency common DSOs may be doing that also to a lesser extent but it is difficult to tell.  We know calibration is individual to each unit and controlled by the processor but, like the early example I mentioned, this may simply be done by adjusting equalization before the digitizer.  The Rigol DS1000Z schematics reverse engineered by Dave show exactly this in one and possibly another spot.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: Compensating bandwidth with software
« Reply #20 on: April 14, 2019, 02:42:57 am »
Fake self references and more waffle?
Yes, you can compensate a little for the response of the anti-aliasing front-end filter

Show me an oscilloscope with an anti-aliasing front end filter and I will show you a piece of junk.
Scopes do have anti-alias filters; their effectiveness varies significantly. I'd suggest good (typical use, "normal" day-to-day use) scopes have effective anti-alias filters. You can wander off into specialty equivalent-time sampling and downconversion all you like, but that's clearly not what the OP was asking about.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Compensating bandwidth with software
« Reply #21 on: April 14, 2019, 01:45:12 pm »
I don't want to go over the posts, but the statements made by some about anti-aliasing filters in DSOs are spectacularly incorrect.

A low pass analog filter between the input and the ADC is absolutely essential. You need -6dB per bit of ADC resolution at Nyquist to avoid aliasing.  To avoid having to build high order analog filters, a common practice is to implement a filter that is -3 dB per bit at Nyquist and then to apply a brickwall low pass filter in the DSP chain following the ADC using the same corner frequency.  This is the reason for the wretched step responses one often sees.  My Instek MSO2204EA has 5% overshoot.  The Keysight MSOX3104T I bought and then returned had 7% overshoot. 
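A quick restatement of that rule of thumb as arithmetic; the split between the analog filter and the DSP brickwall is as described in the post, and the numbers are purely illustrative:

# The "-6 dB per bit at Nyquist" rule of thumb from the post above, with the
# analog / brickwall-DSP split rhb describes (all numbers illustrative).
bits = 8
required_db = 6 * bits          # ~48 dB of alias suppression at Nyquist
analog_db = 3 * bits            # ~24 dB done by the analog front-end filter
dsp_db = required_db - analog_db
print(f"{required_db} dB total at Nyquist: ~{analog_db} dB analog + ~{dsp_db} dB digital brickwall")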

I had a Rohde & Schwarz RTM3104 on demo for about a week.  When it arrived it had <3% overshoot on a step, meeting the traditional  standard set by Tektronix.  But it had a completely unusable FFT (I'm in NA).  R&S suggested I update the FW from 1.100 to 1.300.  I did and the overshoot went from 3% to 10%!!!

Sadly, reverting back to 1.100 did not restore the <3% overshoot.  It remained at 10%.

The problems posed by component tolerances make it very desirable to use a Wiener least squares FIR filter implemented in the FPGA to correct for the deviations from design due to tolerance spreads.  R&S obviously do this, as the only explanation for my experience with the RTM3K is that the update overwrote the factory calibration data for the AFE.  I would expect that most DSO makers do the same, but don't destroy their scopes doing FW updates.

The BW limits selectable in a DSO UI are implemented via DSP.  A casual inspection of the AFE of a DSO will show that there aren't any filters being switched.  It would be very desirable if this were the case, but it is not.  My LeCroy DDA-125/LC684DLX will sample at 2-8 GSa/s depending upon the number of channels in use.  It has a claimed 1.5 GHz BW which is substantiated by a <250 ps rise time for a step, but with 20% overshoot.  Note that in 4 channel mode sampling at 2 GSa/s, the Nyquist is 1 GHz.  So everything above 500 MHz is aliased.  You can get away with it so long as you remain in the time domain.  Sampling scopes do this.  My Tek 11801 samples at 100 kSa/s but with an SD-32 sampling head has a 50 GHz BW.  My SD-26 heads only go to 20 GHz.

To return to the OP's question, you can correct the DSO for the anti-alias filter response.  This should already be done in a well made instrument.  However, if you attempt to increase the BW by applying gain above the low pass corner frequency you will lose accuracy and you will distort the time domain response.  If you make the response flat from DC to Fc, the step response will get significant ringing.  It is not possible to have a frequency domain response which is flat across the entire BW and not have severe ringing in the time domain.  This is all treated in any introductory discussion of DSP.
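A small demonstration of that last point: brickwall-filter a step in the frequency domain and the time-domain result rings, while a Gaussian roll-off of the same -3 dB bandwidth does not (purely an illustration, not any scope's actual processing):

import numpy as np

n, fs, fc = 4096, 1e9, 100e6
t = np.arange(n) / fs
step = (t > t[n // 2]).astype(float)            # ideal step in the middle of the record

spec = np.fft.rfft(step)
f = np.fft.rfftfreq(n, 1 / fs)

brickwall = np.fft.irfft(spec * (f <= fc), n=n)                               # flat to Fc, then nothing
gaussian = np.fft.irfft(spec * np.exp(-np.log(2) / 2 * (f / fc) ** 2), n=n)   # -3 dB at Fc

print(f"brickwall overshoot: {100 * (brickwall.max() - 1):5.1f} %")           # ~9 % (Gibbs ringing)
print(f"gaussian  overshoot: {100 * (gaussian.max() - 1):5.1f} %")            # ~0 %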

Have Fun!
Reg
 
The following users thanked this post: Someone, exe

Online gf

  • Super Contributor
  • ***
  • Posts: 1182
  • Country: de
Re: Compensating bandwidth with software
« Reply #22 on: April 14, 2019, 03:22:31 pm »
You need -6dB per bit of ADC resolution at Nyquist to avoid aliasing.

What about living with aliasing instead of avoiding it? For some use cases (e.g. assessment of square wave pulses) I don't see the sense of degrading the original signal (with an AA filter), just in order to make an exact sinc-reconstruction of this degraded signal from the samples possible, while the original signal still cannot be reconstructed unambiguously.

Quote
To avoid having to build high order analog filters, a common practice is to implement a filter that is -3 dB per bit at Nyquist and then to apply a brickwall low pass filter in the DSP chain following the ADC using the same corner frequency.

Can you please explain how a digital low-pass filter can help to reduce aliasing after sampling (unless the signal is oversampled, digitally AA-filtered and then decimated - which is not an option if the sampling rate is already at the max. limit). My understanding is that the frequency bands are already folded down once the signal is sampled?
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Compensating bandwidth with software
« Reply #23 on: April 14, 2019, 03:59:24 pm »
To avoid having to build high order analog filters, a common practice is to implement a filter that is -3 dB per bit at Nyquist and then to apply a brickwall low pass filter in the DSP chain following the ADC using the same corner frequency.

Can you please explain how a digital low-pass filter can help to reduce aliasing after sampling (unless the signal is oversampled, digitally AA-filtered and then decimated - which is not an option if the sampling rate is already at the max. limit). My understanding is that the frequency bands are already folded down once the signal is sampled?
The high frequencies will fold around the fs/2 point. So if you filter digitally between the bandwidth and fs/2 then you'll still filter away the unwanted frequencies. The only requirement is that the unwanted frequencies are attenuated by at least bits*6dB at fs/2 + (fs/2 -BW).

edit: fixed the quote!
« Last Edit: April 14, 2019, 06:50:08 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Compensating bandwidth with software
« Reply #24 on: April 14, 2019, 06:20:06 pm »
@nctnico munged the quote, but what he wrote is correct.  Anti-aliasing is covered in detail in the first chapter of any introduction to DSP.

If you are not too severely aliased you can get away with sampling without an anti-alias filter.  But just like the wagon wheels turning backwards, you can make a waveform backwards.

A sampling scope avoids the aliasing problem by always using the external trigger signal and sampling at j*dT for j = 1 to N.
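A sketch of that sequential sampling scheme on a repetitive signal (the frequencies and step size below are made up):

import numpy as np

# One sample is taken per trigger, delayed by j*dT from that trigger.  Because
# the waveform repeats, the assembled record is equivalent to sampling at 1/dT
# even though the real-time rate is tiny -- and no anti-alias filter is involved.
signal = lambda t: np.sin(2 * np.pi * 100e6 * t) + 0.3 * np.sin(2 * np.pi * 300e6 * t)

dT = 10e-12                               # 10 ps steps -> 100 GSa/s equivalent rate
N = 1000
equiv_time = np.arange(1, N + 1) * dT
samples = signal(equiv_time)              # in reality, one of these per trigger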
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1182
  • Country: de
Re: Compensating bandwidth with software
« Reply #25 on: April 14, 2019, 09:09:56 pm »
The high frequencies will fold around the fs/2 point. So if you filter digitally between the bandwidth and fs/2 then you'll still filter away the unwanted frequencies. The only requirement is that the unwanted frequencies are attenuated by at least bits*6dB at fs/2 + (fs/2 -BW).

Thanks. As long as BW is sufficiently below fs/2, this is indeed an option.
(I was rather assuming that almost the whole range up to fs/2 was supposed to be utilized.)
 

Online gf

  • Super Contributor
  • ***
  • Posts: 1182
  • Country: de
Re: Compensating bandwidth with software
« Reply #26 on: April 14, 2019, 11:30:02 pm »
If you are not too severely aliased you can get away with sampling without an anti-alias filter. But just like the wagon wheels turning backwards, you can make a waveform backwards.

At least I am not puzzled if I feed e.g. a 10.2 MHz signal into my scope and see it showing up as a ~8.5 Hz signal when the timebase is set to a slow value of 100 ms/div (-> 2.5 kSPS).

When it arrived it had <3% overshoot on a step, meeting the traditional  standard set by Tektronix.

In fact I find it pretty annoying if the waveform on the scope suggests the presence of overshoot, although the signal does not suffer from overshoot at all. Without an AA filter I can switch to the points display and check the non-interpolated raw samples. If the overshoot is however introduced by an AA filter prior to sampling, the points display won't help either.

Do you happen to know how much bandwidth below fs/2 needs to be given up (using an optimal filter) in order to achieve < 3% overshoot in the time domain, while still providing sufficient AA cut-off for 8-bit resolution?
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Compensating bandwidth with software
« Reply #27 on: April 15, 2019, 02:16:04 pm »
Fc needs to be Fs/4 to get an optimal step response.  My testing using the filter option on the MSOX3104T showed that if I applied a low pass filter with a 750 MHz corner I got a clean step.  But there was no documentation of what type of filter was being used.  And there are other issues I've not yet resolved to my satisfaction.

The appropriate filter is a maximally flat delay filter.  The proper way to do it is to apply an analog anti-alias filter to suppress the input -6 dB per bit at Fn.  So with Fc = Fs/4 it requires one pole per bit.  Then follow that with a Wiener least squared error FIR filter or a least summed absolute amplitude filter using the FPGA to clean up any errors in the analog filter response and give the optimal rise time and overshoot.
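A plain least-squares sketch of that clean-up equalizer: given a measured impulse response of the analog path, solve for FIR taps that reshape it toward a delayed impulse. The calibration variable name is hypothetical, and this stands in for the Wiener design rhb mentions rather than reproducing any vendor's method:

import numpy as np
from scipy.linalg import toeplitz

def ls_equalizer(measured_impulse, ntaps=32, delay=16):
    """Least-squares FIR that reshapes a measured impulse response toward a
    delayed unit impulse -- a stand-in for the clean-up filter described above."""
    h = np.asarray(measured_impulse, dtype=float)
    n_out = len(h) + ntaps - 1
    col = np.concatenate([h, np.zeros(ntaps - 1)])   # convolution matrix of h
    row = np.zeros(ntaps)
    row[0] = h[0]
    A = toeplitz(col, row)
    desired = np.zeros(n_out)
    desired[delay] = 1.0                              # ideal result: a pure delay
    taps, *_ = np.linalg.lstsq(A, desired, rcond=None)
    return taps

# taps = ls_equalizer(impulse_from_factory_cal)       # hypothetical calibration data
# cleaned = np.convolve(raw_samples, taps, mode="same")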

There is a valid argument for allowing the user to choose between minimum rise time and minimum overshoot with a user threshold for the overshoot.  My LeCroy DDA-125 has a minimum rise time of <250 ps.  But at the cost of a 20% overshoot.  This was acceptable because rise time was the measurement of greatest importance testing disk drives.  Not very good for a general purpose DSO though.

I'm so furious over the step responses that I've taken on a 2-3 year project to fix the problem.  To date I've spent close to $3k getting set up to do the work.
 
The following users thanked this post: gf

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #28 on: April 16, 2019, 01:07:39 am »
Both Tektronix and Keysight have written application notes explaining away the compromised pulse response of their DSOs that have a maximally flat response.  The argument comes down to more accurate results for measurements like rise and fall times, and that ETS-like results can be achieved with averaging to remove the residual aliasing.  Of course if you were going to average, then ETS would have prevented the problems and delivered better results.

Ultimately I think the lesson to take away from the previous discussion is that no practical supposed anti-aliasing filter is going to allow a DSO to operate accurately at close to its Nyquist frequency.  If the pulse response is corrupted, then a faster DSO is required for accurate measurements.  Sampling at 2.5 times the bandwidth?  Just laugh.  Low frequency Gaussian (1) roll-off DSOs have a pulse response which is corrupted in a different way but at least it can be easily modeled.

Do not rely on a 100 MHz instrument for 3.5 nanosecond transition time measurements.

(1) They do not really have a Gaussian roll-off but it is pretty close.  But consider *why* they have an almost Gaussian rolloff instead of a single pole roll-off; the transition band is not the product of a single stage.  This answer also explains why the maximally flat response of a higher bandwidth DSO is not the result of an anti-aliasing filter; it is the result of equalization to clean up the response unless it is designed in for market segmentation based on bandwidth.  Take a really close look at the difference in pulse shape between a Rigol 50 MHz DS1054Z and 100 MHz DS1104Z.  Or compare the pulse shape of your pet DSO at full bandwidth and with the 20 MHz bandwidth limiter active.  If they are different, then one of them is not Gaussian.


 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Compensating bandwidth with software
« Reply #29 on: April 16, 2019, 01:52:04 pm »
The Fourier transform of a boxcar BW filter is a sinc(t) quite independent of the sample rate.  So if you want accurate step responses you must have a gradual filter slope at the upper frequency limit.  We did not see such things before the advent of DSOs because they were too hard to build in analog form. In the limit it requires an infinite number of poles.

For an 8 bit ADC, a single pole filter would require an Fc  8 octaves below Nyquist.  Sampling at 1 GSa/s to provide a 4 MHz BW DSO would not sell very well.  So multipole filters are very necessary for a practical instrument.

Seismic processing is *very* obsessive about pulse shape.  For marine work the boat goes out to deep water and drops off a buoy with a recorder and a hydrophone at sufficient depth below the surface.  Not having been involved in that work, I don't know off the top of my head what "sufficient" is.  As the signature test, as it is called, is done in deep water, there is only a single reflection at the sea surface to contend with.  The boat makes a pass shooting and the response at small angles from vertical is recorded.  The array of guns is designed to optimize the pulse shape around a narrow range of angles near vertical.

From this an inverse (Wiener prediction error) filter which compresses the actual waveform to a symmetric (aka zero phase) band limited impulse is computed.   This filter is typically applied as the first step in processing.  It can be applied at any point if all the processing steps are linear, but many processors are skeptical that is actually the case.

It used to be the case that one also applied a correction for the minimum phase response of the anti-alias filters and amplifiers in the AFE of the recording system.  However, I would expect that correction is now applied in the recording system using an FPGA.  The corrections are instrument specific and it is *very* hard to determine when boats made changes to their recording systems when reprocessing old data.  And finding the instrument impulse tests is often impossible.

As part of my FOSS DSO FW project I am examining the problem of the optimal anti-alias filter in detail.  I am also examining the optimal interpolation filter.  While it is commonly described as being sinc(t),  that is actually not the proper operator.  The proper operator is the minimum phase Fourier transform of the filter transfer function.  It is a sinc(t) if and only if the filter is a zero phase boxcar.  That's not physically realizable in a purely analog form.

All the DSOs (6 OEMs)  I've looked at apply a symmetric, zero phase interpolator.  As a consequence, there is a precursor ripple that precedes a step.  On some instruments you can avoid that by turning off the interpolator or switching to dot mode.  However, some instruments always apply the interpolator even when in dot mode.
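A short illustration of that precursor ripple, using plain zero-phase (Whittaker-Shannon) sinc interpolation of a sampled ideal edge; the numbers are arbitrary and this only shows the effect, not any particular scope's interpolator:

import numpy as np

coarse_t = np.arange(-20, 20)                   # sample instants, unit spacing
step = (coarse_t >= 0).astype(float)            # sampled fast edge

fine_t = np.linspace(-20, 19, 2000)             # ~50x oversampled display axis
# Symmetric, zero-phase sinc reconstruction through the samples.
recon = np.array([np.sum(step * np.sinc(t - coarse_t)) for t in fine_t])

pre_edge = recon[fine_t < -1.0]                 # strictly before the last zero sample
print(f"ripple before the edge: about {np.max(np.abs(pre_edge)):.2f} of the step height")
# -> roughly 0.13, i.e. the trace wiggles visibly *before* the step arrives.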

I'd like to note that the Gaussian is not the only filter shape which is symmetric under a Fourier transform.  A sech(x) form also shares the time-frequency symmetry properties of a Gaussian.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: Compensating bandwidth with software
« Reply #30 on: April 16, 2019, 10:07:47 pm »
For an 8 bit ADC, a single pole filter would require an Fc  8 octaves below Nyquist.  Sampling at 1 GSa/s to provide a 4 MHz BW DSO would not sell very well.  So multipole filters are very necessary for a practical instrument.

Yet most practical DSOs do not have this!  So how can they be practical?

Quote
Seismic processing is *very* obsessive about pulse shape.  For marine work the boat goes out to deep water, drops of a buoy with a recorder and a hydrophone at sufficient depth below the surface.   Not having been involved in that work, I don't know off the top of my head what "sufficient" is.  As the the signature test, as it is called, is done in deep water there is only a single reflection at the sea surface to contend with.  The boat makes a pass shooting and the response at small angles from vertical is recorded.  The array of guns is designed to optimize the pulse shape around a narrow range of angles near vertical.

These are specialized applications operating at low frequencies where anti-aliasing filters in the linear analog, sampling analog, and digital domain are completely practical.  Examples of this are the "better than Bessel" integrated filters made by Linear Technology.  Increasing digital integration has made higher sampling rates and post acquisition DSP with minimal filtering in the analog domain the most economical way these days, although it becomes very expensive albeit still viable at the highest bandwidths and sample rates, which gets back to the subject of this discussion.

Quote
It used to be the case that one also applied a correction for the minimum phase response of the anti-alias filters and amplifiers in the AFE of the recording system.  However, I would expect that correction is now applied in the recording system using an FPGA.  The corrections are instrument specific and it is *very* hard to determine when boats made changes to their recording systems when reprocessing old data.  And finding the instrument impulse tests is often impossible.

These corrections were also applied to analog oscilloscopes to correct things like dribble up in delay lines.  With proper testing, you can actually see the result of this.  These filters can generate preshoot in the transient response if the applied edge is fast enough which is rather disconcerting on an analog instrument.  I have seen it on the Tektronix 2465 series oscilloscopes but *not* the older but slightly faster 7800 and 7900 instruments suggesting a design difference.

Quote
All the DSOs (6 OEMs)  I've looked at apply a symmetric, zero phase interpolator.  As a consequence, there is a precursor ripple that precedes a step.  On some instruments you can avoid that by turning off the interpolator or switching to dot mode.  However, some instruments always apply the interpolator even when in dot mode.

I have seen that before and concluded it was the Gibbs phenomenon.  I did not have a fast enough pulse generator to test it on a Tektronix MDO5000 when I had a chance but the DSP bandwidth filters exhibited it and the hardware analog filters did not as expected.

As part of my FOSS DSO FW project I am examining the problem of the optimal anti-alias filter in detail.  I am also examining the optimal interpolation filter.  While it is commonly described as being sinc(t),  that is actually not the proper operator.  The proper operator is the minimum phase Fourier transform of the filter transfer function.  It is a sinc(t) if and only if the filter is a zero phase boxcar.  That's not physically realizable in a purely analog form.

It seems we have parallel DSO projects going.  I am less worried about anti-aliasing and correcting the response in the digital domain because I want to make something more like an improved DPO style DSO where variable sample rate and variable depth histograms are created during decimation for essentially zero blind time.  The objective is a real time index graded display which is completely faithful to an analog display but with complete digital measurement capability.

Performance of this design is completely dominated by memory bandwidth so large record lengths are useless because the memory never has time to be accessed.  It would be ideal for an ASIC or FPGA using internal memory only.

 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Compensating bandwidth with software
« Reply #31 on: April 17, 2019, 12:16:21 am »
25 is dot mode with the sinc(t) turned off.  26 is vector mode with the sinc(t) turned on.  It's an ignorance issue.  The Gibbs phenomenon is completely different. The input is a 100 ps pulse generator from Leo Bodnar.

The R&S RTB2k has a 2 pole anti-alias filter according to Rich in a response to questions someone asked.  You can also pickup additional low pass filtering by choosing an output buffer amp with the appropriate gain-BW product.  Whatever you choose for the BW, the higher frequencies *must* be suppressed by 6 dB per bit at that frequency and above.

DSP started with seismic in 1952 using paper, pencil and a desk calculator.  Enders Robinson did it for his PhD under Norbert Wiener.   It didn't become possible to do it by machine until the early 60's.   No one else could afford it, so it was strictly limited to oil exploration.  Texas Instruments was a subsidiary of Geophysical Services, Inc.  that built recording gear for GSI.  Mobil Oil manufactured their own geophones.  Most large oil companies did similar things because they didn't want to wait for a commercial product.  They only stopped when commercial products were as good or better.

I'm starting with the AFE to DRAM chain because I want to implement LeCroy style stackable math operations.  There is no need to update the screen faster than 60 fps, so I don't see an issue with writing to DRAM  via the FPGA at high speed and reading the buffers at lower speeds.

I don't think most of the people developing DSO FW have ever used a scope.
 

