Author Topic: downsampling a signal: blockwise average: danger of aliasing?  (Read 1979 times)


Offline hercTopic starter

  • Contributor
  • Posts: 21
  • Country: de
dear forum members, i have a question:
i want to downsample a 90 Hz signal quite heavily, down to 1 Hz. if i use simple blockwise averaging, am i risking aliasing?
i assume that averaging over a block 90 samples wide is already a form of lowpass filtering, which is what is needed to avoid aliasing?
but i simply cannot find any details on how blockwise averaging compares to the more sophisticated signal decimation methods, like the one used here:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.decimate.html#scipy.signal.decimate

i don't have a good feeling about signal.decimate, because this statement in the docs confuses me: "The downsampling factor. When using IIR downsampling, it is recommended to call decimate multiple times for downsampling factors higher than 13." that is vague. how many times am i expected to call decimate? should i rather use FIR filtering? is FIR filtering also problematic with the strong downsampling i want to achieve?

any opinions?
 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11228
  • Country: us
    • Personal site
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #1 on: June 12, 2021, 08:01:28 pm »
With block-wise averaging you will see aliasing. It is not that different from just picking points at fixed intervals. A moving average is a form of FIR filter, so it would prevent aliasing.

Almost all decimation implementations recommend not going more than about 10x in one pass; otherwise your filter size becomes huge. In your case, decimate by 10 and then by 9 and you get an overall decimation factor of 90.
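For the 90 Hz -> 1 Hz case from the question, a minimal Python sketch of that two-stage approach could look like the following (the input here is a made-up placeholder signal; scipy's decimate applies its default Chebyshev IIR filter, zero-phase, before downsampling):

Code: [Select]
import numpy as np
from scipy import signal

fs = 90.0                          # original sample rate from the question
t = np.arange(0, 600, 1 / fs)      # 10 minutes of placeholder data
x = np.sin(2 * np.pi * 0.2 * t)    # slow 0.2 Hz test signal

# split the factor-90 decimation into two stages so each stage's filter stays short
y = signal.decimate(x, 10)         # 90 Hz -> 9 Hz
y = signal.decimate(y, 9)          # 9 Hz -> 1 Hz (600 samples remain)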

As far as efficient implementations go, cascaded integrator-comb (CIC) decimators are unbeatable: integer-only math, an interpolator/decimator with an integrated filter. A CIC is itself a form of moving average, just implemented efficiently.
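For reference, a CIC decimator is only a few lines in software as well. This is an illustrative integer-math sketch under my own parameter choices (R, N, M), not a library call:

Code: [Select]
import numpy as np

def cic_decimate(x, R, N=3, M=1):
    """N-stage CIC decimator by factor R with comb delay M, integer math only.

    The DC gain is (R * M)**N; divide it out afterwards if absolute scale matters.
    Intermediate sums may wrap around in int64, but as in hardware CICs the
    wraparound cancels in the combs as long as the final result fits the word length.
    """
    x = np.asarray(x, dtype=np.int64)
    for _ in range(N):                 # N cascaded integrators at the input rate
        x = np.cumsum(x)
    x = x[::R]                         # downsample by R
    for _ in range(N):                 # N cascaded combs at the output rate
        x = x - np.concatenate((np.zeros(M, dtype=np.int64), x[:-M]))
    return x

# hypothetical usage: 90 Hz integer ADC counts down to 1 Hz
# y = cic_decimate(samples_90hz, R=90, N=3) / 90**3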
Alex
 
The following users thanked this post: tom66

Offline hercTopic starter

  • Contributor
  • Posts: 21
  • Country: de
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #2 on: June 13, 2021, 07:27:11 am »
thank you very much! so, as you suggested, i will apply two stages of decimation using scipy.signal.decimate.

i also found this document:

https://www.analog.com/media/en/technical-documentation/dsp-book/dsp_book_Ch15.pdf

which states that the moving-average FIR filter is indeed not a good lowpass filter, but a good noise filter that preserves the step response.

"Not  only  is  the  moving  average  filter  very  good  for  many applications, it is optimal for a common problem, reducing random white noise while keeping the sharpest step response. ..... In  short,  the  moving  average  is  an exceptionally  good  smoothing  filter  (the  action  in  the  time  domain),  but  an exceptionally bad low-pass filter (the action in the frequency domain)."
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8106
  • Country: fi
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #3 on: June 13, 2021, 09:54:21 am »
Aliasing is always a problem; there is no way to completely prevent it. The question is only how much of it happens and whether it matters.

I recommend you get Matlab or Octave so you can build IIR/FIR filters and plot their responses using the freqz function. First try a moving average (either the recursive IIR-style implementation or a simple box FIR with all-ones coefficients) and see what the attenuation is at half of the new (decimated) sample rate (the Nyquist limit for the output signal). Then compare to the more optimal FIR coefficients you can design with the same software, in the simplest case by calling the fir1 function to design the filter for you. You'll see huge differences in stopband attenuation.
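The same comparison can be done in Python with scipy, which the thread is already using; firwin and freqz play the roles of Octave's fir1 and freqz here, and the ~90-tap filter lengths are just my assumption to match the 90:1 ratio:

Code: [Select]
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

fs = 90.0        # input rate; target is 1 Hz, so the output Nyquist limit is 0.5 Hz
ntaps = 90

b_box = np.ones(ntaps) / ntaps                 # plain 90-point moving average ("box" FIR)
b_win = signal.firwin(ntaps + 1, 0.5, fs=fs)   # windowed-sinc lowpass at the new Nyquist

w, h_box = signal.freqz(b_box, worN=8192, fs=fs)
_, h_win = signal.freqz(b_win, worN=8192, fs=fs)

plt.plot(w, 20 * np.log10(np.abs(h_box) + 1e-12), label='90-point moving average')
plt.plot(w, 20 * np.log10(np.abs(h_win) + 1e-12), label='firwin lowpass, 91 taps')
plt.axvline(0.5, color='k', linestyle=':', label='output Nyquist (0.5 Hz)')
plt.xlabel('frequency (Hz)')
plt.ylabel('magnitude (dB)')
plt.legend()
plt.show()

The moving average's first sidelobe sits around -13 dB, while even this short windowed-sinc design keeps the stopband roughly 50 dB down.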

On the other hand, the question often is: what signal is actually aliasing? Are you just preventing low levels of noise from aliasing, or do you have a strong offending signal outside the band (maybe much stronger than the in-band signal of interest)? Sometimes -10 dB of stopband attenuation is more than fine; sometimes -100 dB isn't enough.

As for performance: if you have just a 90 Hz signal (so assuming you are not sampling at many hundreds of Hz), then unless you are running a tiny 8-bit CPU at 1 MHz you probably have plenty of headroom, so you can run a decent FIR with a lot of taps if you need to, and it would be easy to implement. The upside of an FIR is a consistent and predictable group delay (half the number of taps).

The real question, as always, is: what are you actually doing?
« Last Edit: June 13, 2021, 09:58:37 am by Siwastaja »
 

Offline hercTopic starter

  • Contributor
  • Posts: 21
  • Country: de
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #4 on: June 14, 2021, 06:42:16 am »
@Siwastaja: thank you for your detailed and knowledgeable answer.
i am training a neural network on several data streams (for example, steering wheel angle, pedal values, current gear, etc.) from a vehicle simulator. for the time being, this happens offline from a prerecorded dataset, so real-time requirements are - for now - not important. i want to predict driver expertise from these data streams over a sliding window of 50 seconds. this already works with the simple blockwise average. i just wonder whether results will improve with better downsampling that minimizes aliasing. i will try out and compare different methods and let you know what works best for training the network.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8106
  • Country: fi
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #5 on: June 14, 2021, 06:58:44 am »
Well, the steering wheel angle can easily contain some noise above 90 Hz, originating from engine vibration coupling into the sensor or the steering axis, and so on, and aliasing will transform that into low frequencies, making it appear as if someone is making slight steering motions when in reality they are not.

But how much does it matter? If this vibration has an amplitude of +/- 1 degree, then even with absolutely no filtering it cannot cause more than +/- 1 degree of aliased false signal. Even very simple filtering can easily decrease that by 10x. This is why, in practice, a combination of a cheap analog RC filter and a simple digital "blockwise" average often suffices.

Also remember that you need analog anti-aliasing before the ADC. The easiest approach today is to use an excessively high sample rate ADC, which lets you get away with a crappy 1st-order RC filter - for example, a 1 MSPS ADC and a 1st-order RC with its -3 dB cutoff at 10 kHz. Then filter and decimate digitally.

The idea is: whenever there is sampling, any frequency content above half the sampling frequency will alias. The ADC samples, but decimation is also sampling, and it is the output rate of the decimator that matters. You need to filter before this sampling. Before the ADC it can naturally only be an analog filter, but before a decimator it can be digital, with all the advantages of digital filters - especially a steep transition band and strong stopband attenuation, easily allowing you to use, say, a 100 Hz sample rate that actually contains signal up to maybe 40-45 Hz. Try to do that in analog and you are quickly down the rabbit hole of expensive components and hiring a seasoned analog designer.
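As a hedged sketch of that last point (the 1 kHz input rate and the 401-tap filter length are my own assumptions, not anything from the thread), a steep digital lowpass in front of a decimator is only a few lines in scipy:

Code: [Select]
from scipy import signal

fs_in = 1000.0                       # assumed oversampled input rate
fs_out = 100.0                       # desired output rate
q = int(fs_in / fs_out)

# linear-phase lowpass with its cutoff just below the 50 Hz output Nyquist,
# leaving usable signal up to roughly 40 Hz - painful in analog, trivial here
taps = signal.firwin(401, 44.0, fs=fs_in)

# offline use: zero-phase filter, then keep every q-th sample
# y = signal.filtfilt(taps, 1.0, x)[::q]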
« Last Edit: June 14, 2021, 07:01:43 am by Siwastaja »
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7302
  • Country: nl
  • Current job: ATEX product design
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #6 on: June 14, 2021, 10:02:09 am »
If you use scipy.signal.decimate, you can just give it the FIR option and it will do the filtering for you. So maybe call it twice, downsampling by 9 and then by 10, with the filter enabled. I wouldn't worry about the filter too much, as NN training should be able to handle noise on the input anyway.
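In code, that suggestion might look roughly like this (ftype='fir' makes decimate design and apply a Hamming-window FIR itself; zero-phase filtering is its default; x stands for the 90 Hz input array):

Code: [Select]
from scipy import signal

# 90 Hz -> 1 Hz in two FIR-filtered stages, as suggested
y = signal.decimate(x, 9, ftype='fir')    # 90 Hz -> 10 Hz
y = signal.decimate(y, 10, ftype='fir')   # 10 Hz -> 1 Hz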
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4391
  • Country: dk
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #7 on: June 15, 2021, 03:48:13 pm »
Quote from: ataradov on June 12, 2021, 08:01:28 pm
"With block-wise averaging you will see aliasing. It is not that different from just picking points at fixed intervals. A moving average is a form of FIR filter, so it would prevent aliasing."

block-wise averaging and a moving average followed by decimation are basically the same operation
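A quick numpy check of that equivalence (the array contents here are arbitrary):

Code: [Select]
import numpy as np

R = 90
x = np.random.randn(10 * R)                    # e.g. 10 s of 90 Hz data

# blockwise average: mean of each non-overlapping 90-sample block
block_avg = x.reshape(-1, R).mean(axis=1)

# 90-point moving average, then keep every 90th fully-overlapped output
moving_avg = np.convolve(x, np.ones(R) / R, mode='valid')
decimated = moving_avg[::R]

print(np.allclose(block_avg, decimated))       # True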
 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11228
  • Country: us
    • Personal site
Re: downsampling a signal: blockwise average: danger of aliasing?
« Reply #8 on: June 15, 2021, 06:55:37 pm »
Yes, you are correct.

Although it still does not help with aliasing at all. Below is the spectrum of a sum of sine waves at 3, 17, 123, 573, 1337 and 3579 Hz, sampled at 10 kHz and then downsampled by 100, so only the 3 and 17 Hz components should survive.

Black is the original spectrum. Blue is downsampled using Matlab's decimate() function with its default IIR filter, and red is the blockwise average.

Red is only slightly better than just selecting every 100th point without any averaging at all (not shown, to avoid cluttering the plot further).
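Since the attached plot is not reproduced here, a rough scipy/numpy version of the same experiment (the FFT settings and variable names are my own choices) would be:

Code: [Select]
import numpy as np
from scipy import signal

fs = 10_000
t = np.arange(0, 10, 1 / fs)                     # 10 s of data
tones = [3, 17, 123, 573, 1337, 3579]            # test frequencies from the post
x = sum(np.sin(2 * np.pi * f * t) for f in tones)

R = 100                                          # 10 kHz -> 100 Hz

# two decimation stages of 10, keeping each factor comfortably below ~13
y_dec = signal.decimate(signal.decimate(x, 10), 10)

# blockwise average over non-overlapping 100-sample blocks
y_blk = x.reshape(-1, R).mean(axis=1)

for name, y in (("decimate", y_dec), ("block average", y_blk)):
    f, pxx = signal.welch(y, fs=fs / R, nperseg=256)
    print(name, "- strongest bins (Hz):", np.round(f[np.argsort(pxx)[-4:]], 1))
# decimate should leave mainly the 3 Hz and 17 Hz tones; the block average also
# shows the >50 Hz tones folded down into the 0-50 Hz band (e.g. 123 Hz near 23 Hz)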
Alex
 

