Author Topic: Analog domain and aliasing  (Read 4601 times)


Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21679
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Analog domain and aliasing
« Reply #25 on: May 20, 2018, 06:37:19 am »
There is no such thing as a continuous time digital signal. That is an oxymoron.

Except, that is, for comparators; most classic logic gates, including AND, OR, NOT, NAND, XOR, and R-S latches; and PWM drives. PWM can actually be continuous or discrete or both (where the rising edge is triggered by a clock but the falling edge can happen at any time). You can even make a continuous-time sigma-delta modulator if you want to.

Unless by digital you mean "stored on a computer" in which case of course you are right, but most people consider logic gates pretty much the definition of digital, and they don't require a clock in sight.

Indeed, the continuity of (combinatorial) logic is the very problem -- propagation delays must be fully accounted for before the state machine is latched. This is what limits the clock speed of your CPU.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Analog domain and aliasing
« Reply #26 on: May 20, 2018, 01:00:57 pm »
Fair enough, but logic gates are analog circuits which operate over a limited range.  I generally take "digital" to mean discretized, i.e. sampled.  Probably because logic gates have become so scarce.  I wish that were not the case as sometimes you really want fast bistate analog circuits.  Of course, we do have FPGAs, but those are more work to configure.

I designed a generator for a single fast pulse, using a 7400 that turns itself off, so the pulse length was the propagation delay through the gates. My MSOX3104T shipped today, so I'll see if I can find the thing and measure the pulse. I was working on a cable tester to check cables for shorts, and wanted a way to generate a really short pulse to test the circuit. Building tools to build tools to build tools to .....
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Analog domain and aliasing
« Reply #27 on: May 20, 2018, 01:40:43 pm »

If you sample a perfect sine wave at random sampling intervals, you will see the sine wave if your sampling density is high enough. As your sampling density gets lower, you will lose the ability to distinguish the sine wave and it'll eventually turn into white noise. However, there will be no aliasing: you will never see a sine wave of the wrong frequency (except by pure coincidence, as with a monkey that types an encyclopedia by randomly banging on a typewriter).

This is what I would have said if asked up until about five years ago. As it turns out, you can randomly sample at about 1/5 of Nyquist and exactly recover multiple sine waves of different frequencies. Moreover, aliasing does not take place, because the transform of a random series of spikes is a single spike. Very counterintuitive. You can arrive at the transform of the random spike series by considering the series terms pairwise: each pair produces a sine or cosine in the frequency domain, and these are only in phase at one point. It really blows your mind to see a demonstration. A friend showed me the Lena image with every other sample deleted. Then he showed me the image with 80% of the remaining samples randomly deleted. It wasn't as good as the original, but *very* close.

Compressive sensing is a hot topic in academia.  I am working on implementing it on a Zynq based DSO.

https://statweb.stanford.edu/~donoho/Reports/2004/CompressedSensing091604.pdf

If you want to know why it works, be warned: the math is very complex. In one of Donoho's papers the proof of theorem one runs 15 pages! Fortunately, the other two theorems are 2-3 sentences.

The important mathematical aspect of this is presented in this paper:

https://statweb.stanford.edu/~donoho/Reports/2004/l1l0EquivCorrected.pdf

There are also proofs via regular polytopes in N dimensional space.  That was a *real* mind bender.

As a practical matter, you set up Ax = y and attempt to solve it via a least absolute error (L1) method. Linear programming is easy, as you can use GLPK. If it works, it is provably correct, and in most cases there is only a very small possibility it will not work. But it depends upon the signal being sparse in some domain, so if the signal is sufficiently broadband it fails. The requirements are that any combination of the columns of A has negligible cross-correlation and that most of the elements of x are zero. The first requirement is called the "Restricted Isometry Property". Verifying it is NP-hard, so you can't test it. Solving Ax = y for the sparsest x is also NP-hard, but if the conditions are met, an L1 algorithm will find the L0 solution. I consider the work of Emmanuel Candes and David Donoho to be the most significant advance in applied mathematics since Wiener and Shannon. For an old research-level reflection seismologist to say that is a *big* deal.
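A hedged sketch of that Ax = y setup, using SciPy's linprog instead of GLPK and a random Gaussian sensing matrix rather than any particular measurement scheme (the sizes n, m, k are arbitrary toy values): the L1 problem min ||x||_1 subject to Ax = y becomes a linear program by splitting x = u - v with u, v >= 0.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 40, 20, 3                       # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x_true                            # the undersampled measurements

# min sum(u + v)  s.t.  A(u - v) = y,  u, v >= 0   <=>   min ||x||_1 s.t. Ax = y
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_rec = res.x[:n] - res.x[n:]             # recovered signal
```

With only half as many measurements as samples, the L1 solution lands on the sparse x; push k up or m down and recovery eventually breaks, as the sparsity condition predicts.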

The entire subject is summarized in "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut, Birkhauser 2013.
 
The following users thanked this post: RoGeorge

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3717
  • Country: us
Re: Analog domain and aliasing
« Reply #28 on: May 20, 2018, 11:28:33 pm »
This is what I would have said if asked up until about five years ago. As it turns out, you can randomly sample at about 1/5 of Nyquist and exactly recover multiple sine waves of different frequencies. Moreover, aliasing does not take place, because the transform of a random series of spikes is a single spike. Very counterintuitive.

Compressive sensing is a hot topic in academia.  I am working on implementing it on a Zynq based DSO.

This isn't quite true. Random sampling still has aliases. That is: there are multiple possible waveforms that will produce the same set of samples. The difference is, most of those aren't a sine wave. So if you know you are looking for one or a handful of sine waves, then you can reconstruct the signal with very low sample rates. In a trivial limiting case, if you have a single sine wave with no noise, you can reconstruct it from almost any three samples. They can be hundreds of cycles apart, or all within a fraction of a cycle. All you do is standard curve fitting.
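A sketch of that limiting case, with the simplifying assumption that the frequency is known (which turns the curve fit into plain linear least squares); the amplitude and phase are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)
f = 5.0                                  # sine frequency, Hz (assumed known)
a_true, phi_true = 1.3, 0.7              # unknowns to recover
t = rng.uniform(0, 100, size=3)          # three samples, hundreds of cycles apart
y = a_true * np.sin(2 * np.pi * f * t + phi_true)

# a*sin(wt + phi) = (a cos phi) sin(wt) + (a sin phi) cos(wt): linear in 2 unknowns
B = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
c1, c2 = np.linalg.lstsq(B, y, rcond=None)[0]
a_est, phi_est = np.hypot(c1, c2), np.arctan2(c2, c1)
```

Three noise-free samples pin down amplitude and phase exactly, no matter how far apart they sit; with the frequency unknown as well, a nonlinear fit is needed and multiple candidate frequencies can explain the same dots, which is exactly the ambiguity being argued about here.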

This is generally the case with compressed sensing: the signals don't have to be sine waves, but you have to have a strong prior about plausible signals.  The goal of randomized sampling is to prevent multiple plausible signals from aliasing onto each other.
 
The following users thanked this post: petert

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Analog domain and aliasing
« Reply #29 on: May 21, 2018, 12:06:06 am »
This is what I would have said if asked up until about five years ago. As it turns out, you can randomly sample at about 1/5 of Nyquist and exactly recover multiple sine waves of different frequencies. Moreover, aliasing does not take place, because the transform of a random series of spikes is a single spike. Very counterintuitive.

Compressive sensing is a hot topic in academia.  I am working on implementing it on a Zynq based DSO.

This isn't quite true. Random sampling still has aliases. That is: there are multiple possible waveforms that will produce the same set of samples. The difference is, most of those aren't a sine wave. So if you know you are looking for one or a handful of sine waves, then you can reconstruct the signal with very low sample rates. In a trivial limiting case, if you have a single sine wave with no noise, you can reconstruct it from almost any three samples. They can be hundreds of cycles apart, or all within a fraction of a cycle. All you do is standard curve fitting.

This is generally the case with compressed sensing: the signals don't have to be sine waves, but you have to have a strong prior about plausible signals.  The goal of randomized sampling is to prevent multiple plausible signals from aliasing onto each other.

Sorry, but I have to call you on the assertion that random sampling has aliases. It does *not*, for precisely the reason I stated: the Fourier transform of a random spike series asymptotically approaches a spike at DC. Five years prior to running into sparse L1 pursuits, I spent a month or two studying the dissertation Bin Liu wrote under Mauricio Sacchi at Alberta on minimum weighted norm regularization. I decided against implementing it because I couldn't figure out what the Fourier transform of a random spike series was. And I was a year or two into compressive sensing before I finally got it.
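That claim about the transform of a random spike series is easy to check numerically; a quick sketch FFT-ing a random 0/1 sampling mask (the 20% density and record length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
mask = (rng.random(N) < 0.2).astype(float)  # random spike series: ~20% ones
S = np.abs(np.fft.fft(mask)) / mask.sum()   # magnitude spectrum, DC scaled to 1
# the DC bin dominates; every other bin is low-level broadband "noise"
```

The DC bin comes out at exactly 1 while the largest off-DC bin is a small fraction of that, and the ratio keeps shrinking as N grows, which is the asymptotic spike at DC.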

Candes' original experiment was to recover a random combination of sine waves and impulses.  Read this:

http://statweb.stanford.edu/~candes/papers/ExactRecovery.pdf

You do not need priors. The result is a consequence of sparsity and convex optimization. It took me 3 years and 3000 pages to get my head around what was going on. It's quite amazing, and I think it will have an impact equal to or greater than Wiener's work.

Yes, it turns the world as we were taught it inside out, but it's true. I suffered through reading Foucart and Rauhut twice before I started reading the original papers.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Analog domain and aliasing
« Reply #30 on: May 21, 2018, 01:14:35 am »
This isn't quite true.  Random sampling still has aliases.  That is: there are multiple possible waveforms that will produce the same set of samples.

You need to distinguish aliases from sampling errors. For example, you're measuring a 100MHz sine wave with regular sampling, but your clock is off, and it comes out as a 99.95MHz sine wave. This is not aliasing; it is a consequence of various sampling errors. If you had a better clock, you could've distinguished 99.95MHz from 100MHz just fine.

Aliasing occurs when there are two or more different waveforms which are indistinguishable in the absence of errors. In fact, errors, such as clock jitter, may make the aliased waveforms distinguishable from each other in practical terms.

If you consider random sampling, you cannot have two different waveforms which are theoretically indistinguishable. To distinguish them, you only need more sampling or more precise sampling.

 
The following users thanked this post: petert

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Analog domain and aliasing
« Reply #31 on: May 21, 2018, 02:50:04 am »
Aliasing occurs when there are two or more different waveforms which are indistinguishable in the absence of errors. In fact, errors, such as clock jitter, may make the aliased waveforms distinguishable from each other in practical terms.


Aliasing is a consequence of multiplication in time being a convolution in frequency, together with the fact that a regular spike series in one domain is a regular spike series in the other domain. If the spikes in the frequency domain are more closely spaced than the bandwidth of the signal, aliasing will occur.
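A minimal numerical illustration of that uniform-sampling picture: with fs = 10 Hz the frequency-domain spikes sit 10 Hz apart, so a 7 Hz tone and its 3 Hz alias produce identical samples.

```python
import numpy as np

fs = 10.0                              # sample rate, Hz (Nyquist = 5 Hz)
n = np.arange(32)                      # 32 uniform sample instants
s7 = np.cos(2 * np.pi * 7.0 * n / fs)  # 7 Hz tone, above Nyquist
s3 = np.cos(2 * np.pi * 3.0 * n / fs)  # its alias: |7 - 10| = 3 Hz
print(np.allclose(s7, s3))             # True: indistinguishable after sampling
```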
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3717
  • Country: us
Re: Analog domain and aliasing
« Reply #32 on: May 21, 2018, 03:45:11 am »
Sorry, but I have to call you on the assertion that random sampling has aliases. It does *not*, for precisely the reason I stated: the Fourier transform of a random spike series asymptotically approaches a spike at DC.

As I explained further up the thread, they aren't conventional aliases, i.e., translated in frequency by a multiple of the sample rate, but the same phenomenon absolutely exists. For any particular record of samples, there are many possible analog waveforms which could generate it. This is trivially true: draw dots on a piece of paper and connect them any way you like. There are infinitely many possibilities. If you want to interpret/treat the samples as representing a continuous analog signal, you have to decide which one you think it is. For conventional sampling, we usually assume that the signal is band-limited below Fs/2. With random sampling you have to assume that the signal + noise is sufficiently sparse.
 
The following users thanked this post: petert

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Analog domain and aliasing
« Reply #33 on: May 21, 2018, 03:48:54 pm »
For any particular record of samples, there are many possible analog waveforms which could generate it. This is trivially true: draw dots on a piece of paper and connect them any way you like.  There are infinitely many possibilities. If you want to interpret/treat the samples as representing a continuous analog signal, you have to decide which one you think it is.

But what if we continue sampling (assuming we have a repeatable waveform)? The random sampling will be able to reconstruct the underlying waveform better and better. Given enough time, you can reconstruct any curve, no matter how the original dots are connected.

This is not the case with conventional sampling. Additional sampling will eventually stop providing new information. In the extreme case, if you sample a sine wave with a sampling rate matching its frequency, the inflow of new information stops after only one sample, and all the other samples will be exactly the same for the rest of eternity.
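That extreme case in a few lines (the phase offset is an arbitrary choice): sampling a sine at exactly its own frequency returns the same value forever.

```python
import numpy as np

f = fs = 1.0                       # sine frequency equals the sample rate
n = np.arange(10)                  # ten consecutive sample instants
samples = np.sin(2 * np.pi * f * n / fs + 0.3)
# every sample hits the same point of the cycle: no new information after one
```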

« Last Edit: May 21, 2018, 04:53:18 pm by NorthGuy »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Analog domain and aliasing
« Reply #34 on: May 21, 2018, 04:51:07 pm »
For any particular record of samples, there are many possible analog waveforms which could generate it. This is trivially true: draw dots on a piece of paper and connect them any way you like.  There are infinitely many possibilities. If you want to interpret/treat the samples as representing a continuous analog signal, you have to decide which one you think it is.

But what if we continue sampling (assuming we have a repeatable waveform)? The random sampling will be able to reconstruct the underlying waveform better and better. Given enough time, you can reconstruct any curve, no matter how the original dots are connected.

This is not the case with conventional sampling. Additional sampling will eventually stop providing new information. In the extreme case, if you sample a sine wave with a sampling rate matching its frequency, the inflow of new information stops after only one sample, and all the other samples will be exactly the same for the rest of eternity.

The quotation attributed to me above is incorrect.  I did not write that.

As for what follows, it has nothing to do with compressive sensing. The random sampling technique described was used in sampling scopes, particularly for microwave work, when ADCs had no hope of keeping up. It worked because the signal was periodic.

The extreme case cited above violates the Nyquist criterion. But a sinusoid can be completely described by 3 samples.

Compressive sensing works by finding the Fourier transform using an L1 rather than the traditional L2 method. Once one has the transform, one does an inverse transform using a conventional FFT to recover the time-domain signal at regular sample spacing.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Analog domain and aliasing
« Reply #35 on: May 21, 2018, 04:55:34 pm »
The quotation attributed to me above is incorrect.  I did not write that.

I'm sorry. I messed up the quotes again. I fixed the original post.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14466
  • Country: fr
Re: Analog domain and aliasing
« Reply #36 on: May 21, 2018, 06:53:32 pm »
For any particular record of samples, there are many possible analog waveforms which could generate it. This is trivially true: draw dots on a piece of paper and connect them any way you like.  There are infinitely many possibilities. If you want to interpret/treat the samples as representing a continuous analog signal, you have to decide which one you think it is.

But what if we continue sampling (assuming we have a repeatable waveform)? The random sampling will be able to reconstruct the underlying waveform better and better. Given enough time, you can reconstruct any curve, no matter how the original dots are connected.

Absolutely, and I think this is where we were probably not all talking about the same thing. Given a periodic signal and infinite time, randomly spaced sampling will lead to perfect reconstruction, whereas evenly spaced sampling won't if the Nyquist criterion is not met. You're right.

Now, with arbitrary (non-periodic) signals and insufficient sampling density, randomly spaced or not, you will get aliasing of some sort in the general case.
 
The following users thanked this post: petert

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Analog domain and aliasing
« Reply #37 on: May 21, 2018, 11:41:44 pm »
If you sample any arbitrary signal and perform an FFT, you are getting the discrete Fourier transform of a signal which is periodic over the period of the record. This is inherent in the definition of the discrete Fourier transform.
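A small sketch of that implicit periodicity: a sine with a whole number of cycles in the record has a seamless periodic extension and transforms to two clean spikes, while a half-integer cycle count (whose extension has a jump at the wrap-around) leaks into every bin. The record length here is an arbitrary choice.

```python
import numpy as np

N = 64
n = np.arange(N)
clean = np.abs(np.fft.fft(np.sin(2 * np.pi * 8.0 * n / N)))  # 8 whole cycles
leaky = np.abs(np.fft.fft(np.sin(2 * np.pi * 8.5 * n / N)))  # 8.5 cycles
# clean: energy in exactly two bins (+/-8); leaky: spread across the spectrum
```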

For the continuous Fourier transform, the limits of integration are +/- infinity.  This imposes some constraints on what functions have Fourier transforms.

Random sampling of a periodic signal as was done in some early sampling oscilloscopes is not the same as random sampling as it applies to compressive sensing.

For any arbitrary signal which is periodic with a period equal to the length of the series, that is, for which you accept the constraint of the discrete Fourier transform, the discrete Fourier transform can be perfectly recovered with 10-20% of the Nyquist sample density if two conditions are met: the sampling intervals are random, and the signal is sparse in the frequency domain. Except for Gaussian noise, most signals of interest are sparse in the frequency domain. Were this not the case, image and audio compression would not be possible.

What Donoho proved was that it is not necessary to acquire data at Nyquist rates. This is currently done routinely at Stanford Medical Center for MRI data, as it gives a 5-10x reduction in data acquisition time and makes possible things like real-time MRI imaging, albeit at the price of a lot of computer resources. The reduction in data acquisition time was initially of most benefit with pediatric patients, who tended to fidget.

With random sampling you do not get "aliasing" in the Nyquist sense, where a frequency above 1/2 the sampling rate appears in the data as a lower frequency, unless the "random" sampling is not actually random. You do get a convolution in the frequency domain. But the Fourier transform of the random sequence is a spike at DC plus very low amplitude white noise at every other frequency. If the series is infinitely long, it converges to a Dirac delta function, which, convolved with the true spectrum of the signal, yields the true spectrum without any aliasing. This applies to *any* arbitrary signal.

In physically realizable implementations, time is quantized. This imposes some constraints on the bandwidth that can be acquired by compressive sensing, as does the length of the series. This is discussed to some degree in Foucart and Rauhut, but I still find the matter a bit opaque. Experience has shown that a 5-10x increase in bandwidth relative to the number of samples collected is typically possible.

None of this in any way contradicts Wiener, Shannon, and Nyquist. It's merely a special case which was not explored until recently because sufficient computer resources were not available. Compressive sensing can also be set up to suppress noise in the input data, in the same manner as applying a threshold to the eigenvalues in a K-L transform.
 
The following users thanked this post: RoGeorge

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14466
  • Country: fr
Re: Analog domain and aliasing
« Reply #38 on: May 22, 2018, 02:32:31 pm »
Yes, I've read the paper and it's interesting, although pretty hairy. It's quite a bit more involved than just taking some samples in the time domain at random intervals. So again, I'm not sure we are all thinking of this in exactly the same manner, nor for the same practical applications.

In all cases, and as some have mentioned, even though it may sound too trivial, if your time-domain samples happen to lie outside of some chunks of signal, you're just going to lose this information. Let's say we have a portion of signal that is zero everywhere except there is just a short pulse in between. Let's say all the random samples happen to lie only on zero. No way you're going to reconstruct the lost "pulse". Now if you can repeat random sampling on the same chunk of signal (window) enough times, you will eventually get enough samples. But for a *one-shot* record of limited density samples, you can't guarantee that. I think that is mostly what some of us meant. Now if you happen to know approximately where the information of interest will lie time-wise, that's a different story. Also, if you consider your signal in the frequency domain rather than in the time domain, it's also a bit different. But you'd have to get the spectrum first. Back to square one.

We are also confusing exact reconstruction of a bandwidth-limited signal with lossy compression. Although both can lead to useful results, they are not exactly the same in the general case. It's sometimes more useful to know beforehand what you're going to lose spectrum-wise than to know that what you're going to lose shouldn't matter much. Different use cases, although admittedly both can be close enough under specific assumptions. Of course I'm simplifying the concept, but it's just to provoke some thoughts.

Also, in practice, getting randomly-distributed samples is not possible. You can just hope to approach randomness. And if you are in the digital domain, it will be pseudo-random anyway, given that you always deal with finite time resolution, even if you can get random generators from sophisticated analog stuff. Might be good enough, but it's not quite the theoretical randomness.

Anyway, looks like it has been discussed before in the following thread: https://www.eevblog.com/forum/projects/shannon-and-nyquist-not-necessary/

For those interested, some papers:

https://statweb.stanford.edu/~donoho/Reports/2004/CompressedSensing091604.pdf
https://www.researchgate.net/publication/268747805_Reconstruction_of_Sub-Nyquist_Random_Sampling_for_Sparse_and_Multi-Band_Signals
 
The following users thanked this post: RoGeorge

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Analog domain and aliasing
« Reply #39 on: May 23, 2018, 12:32:23 am »
Yes, I've read the paper and it's interesting, although pretty hairy. It's quite a bit more involved than just taking some samples in the time domain at random intervals. So again, I'm not sure we are all exactly thinking of this in the same manner nor for the same practical applications.

In all cases, and as some have mentioned, even though it may sound too trivial, if your time-domain samples happen to lie outside of some chunks of signal, you're just going to lose this information. Let's say we have a portion of signal that is zero everywhere except there is just a short pulse in between. Let's say all the random samples happen to lie only on zero. No way you're going to reconstruct the lost "pulse". Now if you can repeat random sampling on the same chunk of signal (window) enough times, you will eventually get enough samples. But for a *one-shot* record of limited density samples, you can't guarantee that. I think that is mostly what some of us meant. Now if you happen to know approximately where the information of interest will lie time-wise, that's a different story. Also, if you consider your signal in the frequency domain rather than in the time domain, it's also a bit different. But you'd have to get the spectrum first. Back to square one.

We are also confusing exact reconstruction of a bandwidth-limited signal with lossy compression. Although both can lead to useful results, they are not exactly the same in the general case. It's sometimes more useful to know beforehand what you're going to lose spectrum-wise than to know that what you're going to lose shouldn't matter much. Different use cases, although admittedly both can be close enough under specific assumptions. Of course I'm simplifying the concept, but it's just to provoke some thoughts.

Also, in practice, getting randomly-distributed samples is not possible. You can just hope to approach randomness. And if you are in the digital domain, it will be pseudo-random anyway, given that you always deal with finite time resolution, even if you can get random generators from sophisticated analog stuff. Might be good enough, but it's not quite the theoretical randomness.

Anyway, looks like it has been discussed before in the following thread: https://www.eevblog.com/forum/projects/shannon-and-nyquist-not-necessary/

For those interested, some papers:

https://statweb.stanford.edu/~donoho/Reports/2004/CompressedSensing091604.pdf
https://www.researchgate.net/publication/268747805_Reconstruction_of_Sub-Nyquist_Random_Sampling_for_Sparse_and_Multi-Band_Signals

Take a regularly sampled series which is zero everywhere except for a single 1. Phase-shift it by half a sample interval and look at it in the time domain: you will see a sinc(t) function. The only reason it looks like a spike is that all the zeros of the sinc(t) coincide with the other regular samples.

With random sampling, the sinc(t) will be recoverable just as readily as with regular sampling.
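numpy's normalized sinc makes that picture concrete: on the sample grid the band-limited impulse is a single 1 among zeros, but shifted half a sample, every grid point lands on a nonzero lobe of the sinc.

```python
import numpy as np

n = np.arange(-8, 9)           # the regular sample grid
spike = np.sinc(n)             # band-limited impulse on-grid: 1 at n=0, else 0
shifted = np.sinc(n - 0.5)     # same impulse delayed half a sample
# 'spike' has one nonzero sample; 'shifted' is nonzero at every sample
```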

It's a *lot* more involved than just taking random samples in the time domain. The randomness is essential to being able to solve an L0 problem in L1 time. It's also essential to suppressing aliasing, which I suspect is the reason it was done in sampling scopes.

I can't speak for others, but I'm not confusing lossy compression with exact recovery. Exact recovery has been demonstrated in the noise-free case. In the real world, noise is always present. Do you really want to keep the noise? Or do you want to eliminate terms which are 120 dB down from the largest component?

 

