Author Topic: Shannon and Nyquist not necessarily necessary ?  (Read 6361 times)


Offline danadakTopic starter

  • Super Contributor
  • ***
  • Posts: 1875
  • Country: us
  • Reactor Operator SSN-583, Retired EE
Shannon and Nyquist not necessarily necessary ?
« on: May 13, 2017, 09:58:40 am »
Love Cypress PSOC, ATTiny, Bit Slice, OpAmps, Oscilloscopes, and Analog Gurus like Pease, Miller, Widlar, Dobkin, obsessed with being an engineer
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8651
  • Country: gb
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #1 on: May 13, 2017, 10:35:04 am »
If you think this in any way affects the necessity to meet the Shannon criteria, you probably need to go back and read Shannon. If you read good material on compressive sampling, rather than the populist garbage, it addresses this.

There is this weird thing in popular media where they love the idea that something new disobeys one or more of Shannon's theses. So far none do, even if they might superficially appear to.
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11647
  • Country: my
  • reassessing directives...
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #2 on: May 13, 2017, 11:46:54 am »
The Shannon-Nyquist theorem is about the minimum sampling rate given the highest frequency present; it has nothing to do with data compressibility/repeatability. If anyone says we can sample at far below the Shannon criterion just because the data is repeatable, without distinguishing between the sampling and the compression (analysis) process, then he is forever delusional. Repeatability can only be analysed after the data has been captured/sampled, or estimated beforehand for a very special type of data set. Some data is compressible, some is not; Nyquist applies to any data in general, regardless of compressibility. Sure, repeatable data can end up far shorter than the Nyquist criterion implies, but that is a second stage *after* sampling (at the Nyquist rate) has been done, or at a lower rate if and only if the data pattern has been studied beforehand and the signal really does follow the study result. Otherwise, and for incompressible data, the limit is the Nyquist criterion; anything longer than that is redundant data.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 
The following users thanked this post: helius, tooki

Offline danadakTopic starter

  • Super Contributor
  • ***
  • Posts: 1875
  • Country: us
  • Reactor Operator SSN-583, Retired EE
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #3 on: May 13, 2017, 12:44:19 pm »
Totally agree with prior posters.

Basically I'm trying to address, or ameliorate, an incorrect and widespread assumption that all data has
to be sampled at 2X before one can reconstruct or obtain meaningful info from it. Kind of similar
to the common failure to appreciate that Fourier does not preserve temporal info, unlike wavelets.


The paper / presentation link discusses some of these issues.


Regards, Dana.
Love Cypress PSOC, ATTiny, Bit Slice, OpAmps, Oscilloscopes, and Analog Gurus like Pease, Miller, Widlar, Dobkin, obsessed with being an engineer
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #4 on: May 13, 2017, 01:09:20 pm »
I understood this as follows:

First, let's create a system with perfect sampling satisfying Shannon's sampling theorem, and feed the samples into a lossy algorithm A. Yes, we have sampled the original signal perfectly, but the lossy algorithm A will lose information.

Now, since the lossy algorithm A will lose information anyway, why try to sample the original signal perfectly in the first place? Why not sample just well enough that we get the same amount of information after a lossy algorithm B** as we would get when using perfect sampling with the original lossy algorithm A.

** We may need to modify/tweak the original lossy algorithm A (used with perfect sampling) when using the lossy sampling.
« Last Edit: May 13, 2017, 01:21:38 pm by Kalvin »
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11647
  • Country: my
  • reassessing directives...
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #5 on: May 13, 2017, 01:26:50 pm »
A lossy algorithm on visual data is OK; try that again on ADC data in a sensitive measurement device or a communication stream. People tend to overlook how much they are taking the brain's cognitive power for granted.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #6 on: May 13, 2017, 01:48:39 pm »
I've skimmed the slides and the language they've used sounds like they are triumphantly reapplying the old job interview question: "If you have an audio signal amplitude modulated on a 10MHz carrier, what is the minimum required sample rate?"

The answer is 8kS/s or so for voice, 44kS/s for music. Any answer of the order of 20MS/s is completely wrong.
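To put some numbers on that, here is a minimal numpy sketch (my own illustration, with rates picked purely for the demo) of bandpass undersampling: a 3 kHz tone AM-modulated onto a 10 MHz carrier and sampled directly at 44 kS/s. The carrier aliases down to 12 kHz and the sidebands to 9 and 15 kHz, with nothing overlapping, because only the occupied bandwidth matters, not the carrier frequency.

Code:
import numpy as np

# "Analogue" world simulated at 44 MS/s so the 10 MHz carrier is representable.
fs_hi = 44.0e6
t = np.arange(440_000) / fs_hi                  # 10 ms of signal
audio = np.sin(2 * np.pi * 3e3 * t)             # 3 kHz tone standing in for voice
am = (1.0 + 0.5 * audio) * np.cos(2 * np.pi * 10e6 * t)

# Deliberately undersample the RF: keep every 1000th point, i.e. sample at 44 kS/s.
# The signal only occupies 10 MHz +/- 3 kHz, so nothing else is present to alias onto it.
fs_lo = fs_hi / 1000
x = am[::1000]

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs_lo)
print(np.sort(freqs[np.argsort(spec)[-3:]]))    # ~[9000 12000 15000] Hz: carrier folded to 12 kHz, sidebands intact

Whether that is practical depends on the front end, of course: the anti-alias filtering now has to be done at RF, which is the usual catch with undersampling.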

Any claims that Shannon can be circumvented are equivalent to over-unity energy and perpetual motion machines.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: tooki

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #7 on: May 13, 2017, 02:43:31 pm »
Pretty simple...

Shannon-Nyquist wants to perfectly reconstruct the original signal without losses.  The linked article basically says "We have too many samples, let's lose some!".  They argue that the degraded result is adequate (and it may be), thus showing that they don't need the full bandwidth of the original image.

All true but so what?

It's true that I don't need all the bits of the .raw image my Nikon creates.  Fortunately, they provide a compressed version as well.  The compressed version IS good enough for my purposes but perhaps there are others who deal with images at a level of detail that requires the .raw image.

Even the compressed version is too large for most forums to use as attachments so I have to load the image into something like PhotoImpact and lose even more pixels.  The result is still good enough for the purpose.

This image processing has nothing to do with Shannon-Nyquist and everything to do with excess information.

 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6722
  • Country: nl
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #8 on: May 13, 2017, 08:43:02 pm »
Non-uniform sampling is big in academia, but it's too complex to be useful here. Better to use a more sensible real-time compression method.
« Last Edit: May 13, 2017, 08:44:51 pm by Marco »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #9 on: November 21, 2017, 11:57:54 pm »
I started the compressive sensing thread, but stuff happened and I did not follow up.  I did not visit EEVblog for a long time.  My apologies.  Somehow I just stumbled across this thread.

Shannon *always* applies.  Nyquist does or does not depending upon whether the signal is "sparse" in some basis.

In other words, if a signal is the sum of 4-5 sinusoids, it can be *exactly* recovered with far fewer than Nyquist samples.  Doing this is computationally intensive, but L1 (least summed absolute error) solutions were shown by David Donoho of Stanford to be identical to the L0 (exhaustive search) solution.  L0 is NP-Hard.  The following is the most important paper in my view.  You can very reasonably spend a few years of your life reading the work of Donoho and Emmanuel Candes.  They have been staggeringly prolific.

https://statweb.stanford.edu/~donoho/Reports/2004/l1l0EquivCorrected.pdf

The reason that non-uniform sampling is so valuable is simple, but *very* difficult to grasp.  I looked at the problem 10 years ago and gave up.

Simply put, the Fourier transform of a spike series in time is a spike series in frequency.  A comb in one space is a comb in the other. This is why aliasing occurs with uniform sampling.  But the Fourier transform of a random spike series in time is not a random spike series in frequency, so there is no aliasing in the conventional sense.  Shannon still applies, but the difference between Donoho and Nyquist is pretty mind boggling.  Even more so if you've been doing DSP for 30 years before you encounter Donoho.
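A quick numerical illustration of that difference (my own sketch, not from Donoho's papers): take three sinusoids, use 64 samples over one second where Nyquist would want about 300, and correlate against a grid of candidate frequencies. On a regular 64 S/s grid the 147 Hz line produces a full-strength false peak at 19 Hz; with 64 randomly placed samples that energy is smeared into a noise-like floor and the true line still stands out.

Code:
import numpy as np

rng = np.random.default_rng(0)
f_true = [23.0, 61.0, 147.0]                         # sparse set of spectral lines, Hz
sig = lambda t: sum(np.cos(2 * np.pi * f * t) for f in f_true)

m = 64                                               # samples over 1 s; Nyquist would need ~300
t_uniform = np.arange(m) / m                         # regular 64 S/s grid (sub-Nyquist)
t_random = np.sort(rng.uniform(0.0, 1.0, m))         # same count, random times

def spectrum(ts, xs, f_grid):
    # Non-uniform "DFT": correlate the samples against each candidate frequency.
    return np.array([np.abs(np.sum(xs * np.exp(-2j * np.pi * f * ts))) for f in f_grid])

f_grid = np.arange(0.0, 200.0, 1.0)
S_uni = spectrum(t_uniform, sig(t_uniform), f_grid)
S_rnd = spectrum(t_random, sig(t_random), f_grid)

# Under uniform 64 S/s sampling, 147 Hz is indistinguishable from its alias at 147 - 2*64 = 19 Hz.
for name, S in [("uniform grid ", S_uni), ("random times ", S_rnd)]:
    print(name, "|X(19 Hz)| =", round(S[19], 1), "  |X(147 Hz)| =", round(S[147], 1))

The sparse recovery machinery then only has to pull the true lines out of that incoherent floor, which is what the L1 solvers do.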

As it turns out, in the real world much of the time the "signal" is sparse.  The non-sparse part is random sensor noise.  As a result sparse approximations can be used for noise removal in addition to compression.

It is not correct to assume that one must perform Nyquist sampling and then do compression as a separate step.

I consider Donoho's work the most important work in signal processing since Norbert Wiener in the 1940's.  For over 60 years that was the gospel.  Wavelet theory started eroding that in the late 80's, but was grossly misrepresented by many people.  I was caustically sarcastic about wavelet theory for a long time.  I was wrong. My objections to the claims about frequency resolution were valid, but there was more to wavelet theory than I realized.  Having read Mallat's 3rd edition cover to cover, I think I have done appropriate penance.

For what I find a mind boggling example, search on:

"Single-Pixel Imaging via Compressive Sampling"

For reasons known only to the illuminati at google, I can't get a link that doesn't pass through google.  There's an IEEE paper which is generally paywalled and then a longer version at citeseerx.psu.edu

 
The following users thanked this post: The Soulman

Offline The Soulman

  • Frequent Contributor
  • **
  • Posts: 949
  • Country: nl
  • The sky is the limit!
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #10 on: November 22, 2017, 12:48:14 am »
Maybe I should re-read this in 10 years and then I would understand, but I might as well ask now.
How is it possible to violate Nyquist and under-sample a signal without the risk of aliasing?
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #11 on: November 22, 2017, 01:04:55 am »
It blew my mind when it sank in.  A friend showed me examples done using the Lena image.  Taking half the data looked awful.  Taking a random 20% of it looked very good.  At this point I know why, but I can't visualize it the way I can Fourier transform pairs.  The result is noisier than Nyquist sampling, but it's not aliased.  The noise is recognizable as noise, which is not the case with aliasing due to regular sampling.

I stumbled into this when I realized some work I had been doing violated my "knowledge" of DSP.  So I went looking for why this was possible.  That was three years and several thousand pages of text ago.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8651
  • Country: gb
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #12 on: November 22, 2017, 02:54:38 am »
Maybe I should re-read this in 10 years and then I would understand, but I might as well ask now.
How is it possible to violate Nyquist and under-sample a signal without the risk of aliasing?
Nyquist documented an observation, but never developed a theory. Various people in the 19th and early 20th century actually made similar observations. Shannon, Whittaker, and Kotelnikov developed a proper mathematically supported theory, and that theory says a bit more than the Nyquist observation. Some things violate Nyquist, as it is not a complete description of sampling. Nothing violates Shannon, Whittaker and Kotelnikov's theory. It's mathematically watertight.

The Shannon/Whittaker/Kotelnikov theory is most often applied to a fixed interval real or complex sampling process, but it also works for non-uniform sampling which achieves a long term average rate that is twice (or for non-uniform complex sampling one times) the bandwidth being sampled. Note that the bandwidth could be in separate chunks. You could sample a signal that only has content from 1kHz to 2kHz and 3kHz to 4kHz at 4k real samples per second, and recover the signal exactly, because you have only sampled 2kHz of actual bandwidth.
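That 1 kHz to 2 kHz plus 3 kHz to 4 kHz example is easy to check numerically; a tiny sketch (my own, tone frequencies chosen arbitrarily):

Code:
import numpy as np

fs = 4000.0                                   # 4 k real samples per second
n = np.arange(4000)                           # 1 second of data
x = np.cos(2 * np.pi * 1500 * n / fs) \
  + np.cos(2 * np.pi * 3600 * n / fs)         # one tone per band: 1.5 kHz and 3.6 kHz

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(n), 1.0 / fs)
print(np.sort(freqs[np.argsort(spec)[-2:]]))  # [ 400. 1500.]: the 3.6 kHz tone folds to 4.0 - 3.6 = 0.4 kHz

Because the upper band folds into 0 to 1 kHz while the lower band stays at 1 to 2 kHz, the two never overlap, and knowing the band plan lets you map everything back to its original frequency.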

Non-uniform undersampling means the bits of the band you capture and the bits you lose change over time. For suitable signals this is an effective lossy sampling technique, losing lots of the short-term details of the signal, but capturing something of the flavour of every part of it over a longer period.
« Last Edit: November 22, 2017, 02:59:37 am by coppice »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #13 on: November 22, 2017, 03:19:35 am »
Non-uniform undersampling means the bits of the band you capture and the bits you lose change over time. For suitable signals this is an effective lossy sampling technique, losing lots of the short-term details of the signal, but capturing something of the flavour of every part of it over a longer period.

Completely agree with everything else.  However, consider the sum of 3 sinusoids.  If that is irregularly sampled, it can still be *exactly* recovered in the no-noise case, without loss, by solving the L1 problem.  Donoho and/or Candes have published proofs.  The general real-world case is as you describe.

Many thanks for the reply.  Not very many people have labored through this much math. I get rather lonely in the backwoods of Arkansas.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8651
  • Country: gb
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #14 on: November 22, 2017, 03:43:56 am »
Non-uniform undersampling means the bits of the band you capture and the bits you lose change over time. For suitable signals this is an effective lossy sampling technique, losing lots of the short-term details of the signal, but capturing something of the flavour of every part of it over a longer period.

Completely agree with everything else.  However, consider the sum of 3 sinusoids.  If that is irregularly sampled, it can still be *exactly* recovered in the no-noise case, without loss, by solving the L1 problem.  Donoho and/or Candes have published proofs.  The general real-world case is as you describe.

Many thanks for the reply.  Not very many people have labored through this much math. I get rather lonely in the backwoods of Arkansas.
Knowing there are only a small number of narrow spectral lines, and only needing to resolve their exact complex frequency, obviously reduces the number of data points you need massively. In practice noise really messes with things like that. It only takes 3 real samples to exactly capture a single sine wave, but even modest amounts of noise really mess with that beautiful simplicity. For example, the Teager-Kaiser Energy Operator is a really cool thing, and does get applied in a number of places, but it's super sensitive to noise.
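For reference, the discrete Teager-Kaiser energy operator is just Ψ[x](n) = x(n)² − x(n−1)·x(n+1), which for a clean sinusoid A·cos(Ωn + φ) evaluates to the constant A²·sin²(Ω). A minimal sketch (my own, with made-up numbers) of how quickly that falls apart with noise:

Code:
import numpy as np

def tkeo(x):
    # Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
    return x[1:-1] ** 2 - x[:-2] * x[2:]

rng = np.random.default_rng(1)
n = np.arange(2000)
A, omega = 1.0, 0.2                                   # amplitude and frequency in rad/sample
clean = A * np.cos(omega * n)
noisy = clean + 0.01 * rng.standard_normal(n.size)    # only 1% RMS additive noise

for name, x in [("clean", clean), ("noisy", noisy)]:
    psi = tkeo(x)
    # Clean case: essentially constant at A^2 * sin(omega)^2, about 0.039.
    # Noisy case: similar mean, but the spread becomes comparable to the mean itself.
    print(name, round(psi.mean(), 4), round(psi.std(), 4))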
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #15 on: November 22, 2017, 04:25:23 am »
Quote
Knowing there are only a small number of narrow spectral lines,

It is good to remember that a pure sine wave (or multiple pure sine waves) technically has zero information content.  So any way you try to plug that into the Shannon limit will end up with zero.

A pure sine wave / single frequency tone exists for all time, positive and negative.  While it oscillates, its parameters never change, and therefore it can transmit zero information.  Whenever you imagine "starting" your transmission, the sine wave is already there.  Of course ideal sine waves don't exist in the real world.  Real signals always start and stop, and once a signal does that, it isn't a single frequency tone, and it can be used to transmit a non-zero amount of information (such as a frequency and amplitude).  The point is, when you take something that is an approximation and lose sight of the limits of the approximation, you can get totally confused.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #16 on: November 22, 2017, 02:26:14 pm »
@coppice

One of the nice features of the L1 approach is noise immunity.  Traditional Fourier analysis uses an L2 norm which blows up the noise badly.  I'd not heard of Teager-Kaiser, but after skimming the start of a Danish thesis on the topic I can see why.  Interesting idea though.

Candes started all this with some experiments in recovering the amplitude, frequency and phase of sinusoids, and the time and amplitude of a set of spikes, using a partial set of randomly chosen frequencies and spikes.  There was earlier work in the geophysical community on sparse spike deconvolution in the late 70's or early 80's, but computational cost made it impractical for most work.  We struggled with doing an L2 singular value decomposition on an 11/780 in the late 80's for very small problems.

@ejeffrey

The information in this context is the frequency, amplitude and phase of the sinusoids.  I strongly urge reading the brief intro to Donoho's paper. 

Consider the task of determining the intermodulation products of a mixer.  One knows from theory that the mixer input and output are sparse in frequency space.  This is an ideal example of a situation where an L1 basis pursuit will perform extremely well.  The result is provably optimal.

The basic process is to compute *all* the possible answers and then use linear programming to select the sparse combination which best matches the data.
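As a toy version of that process (my own sketch using scipy's LP solver, nothing taken from the papers): build a dictionary containing every candidate cosine, take far fewer random samples than the dense grid has points, and let linear programming pick the minimum-L1 combination that exactly reproduces the samples. With a noise-free 3-sparse signal this typically recovers the support exactly, though it is not guaranteed for every random draw.

Code:
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, K = 256, 128                          # dense time grid and number of candidate frequencies
n = np.arange(N)
Phi = np.cos(np.pi * np.outer(n + 0.5, np.arange(K)) / N)   # "all the possible answers": a cosine dictionary

# Ground truth: 3 active atoms out of 128.
c_true = np.zeros(K)
c_true[[7, 31, 90]] = [1.0, 0.6, 0.8]
x = Phi @ c_true

# Keep only 80 randomly chosen samples of the 256-point signal.
idx = np.sort(rng.choice(N, size=80, replace=False))
A, y = Phi[idx], x[idx]

# Basis pursuit: minimise ||c||_1 subject to A c = y, using the split c = u - v with u, v >= 0.
obj = np.ones(2 * K)
A_eq = np.hstack([A, -A])
res = linprog(obj, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
c_hat = res.x[:K] - res.x[K:]

print("recovered support:", np.nonzero(np.abs(c_hat) > 1e-4)[0])   # should be [ 7 31 90]
print("max coefficient error:", float(np.abs(c_hat - c_true).max()))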
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8651
  • Country: gb
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #17 on: November 22, 2017, 09:09:54 pm »
It is good to remember that a pure sine wave (or multiple pure sine waves) technically has zero information content.  So any way you try to plug that into the Shannon limit will end up with zero.
A sine wave carries no information in the information theoretic sense. However, it is often necessary to do something like extract and lock to a sine wave for synchronisation purposes, which is certainly extracting a kind of information from that sine wave.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6722
  • Country: nl
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #18 on: November 24, 2017, 06:30:55 pm »
As I indicated before, the problem with schemes like this is that any redundancy they can remove can also just be removed from a uniformly sampled signal with a lossy algorithm ... and that almost always makes more sense.

At most you'd do coarser sampling per block as a quick low complexity lossy compression step, like H264 motion vectors for instance.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #19 on: November 25, 2017, 12:04:23 am »
As I indicated before, the problem with schemes like this is that any redundancy they can remove can also just be removed from a uniformly sampled signal with a lossy algorithm ... and that almost always makes more sense.

At most you'd do coarser sampling per block as a quick low complexity lossy compression step, like H264 motion vectors for instance.

Do a search on "compressive sensing" after the Internet recovers from the Black Friday traffic.  As a single example, I'll cite the speed up in MRI data acquisition resulting from randomized sampling.  I would have included a link to an early paper, but DNS is crushed right now.

Donoho writes extremely well.  So a relatively painless way to get an idea of what's going on is to just read the introductions of his papers on his Stanford website.  Reading the proofs requires a strong masochistic streak. One runs to 15 pages!


This is the hairiest math I've ever tackled, but it is *not* snake oil any more than the assertion that  *any* function can be approximated by a Fourier series. TI uses compressive sampling in an optical spectrograph product, though they call it "Hadamard sampling" or some such.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #20 on: November 25, 2017, 12:43:07 am »
Interesting subject. Is there some reading material available which describes how to apply it to real-world problems without needing a PhD in math?
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline _Wim_

  • Super Contributor
  • ***
  • Posts: 1523
  • Country: be
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #21 on: November 25, 2017, 08:06:02 am »
Do a search on "compressive sensing" after the Internet recovers from the Black Friday traffic.  As a single example, I'll cite the speed up in MRI data acquisition resulting from randomized sampling.  I would have included a link to an early paper, but DNS is crushed right now.

Donoho writes extremely well.  So a relatively painless way to get an idea of what's going on is to just read the introductions of his papers on his Stanford website.  Reading the proofs requires a strong masochistic streak. One runs to 15 pages!


This is the hairiest math I've ever tackled, but it is *not* snake oil any more than the assertion that  *any* function can be approximated by a Fourier series. TI uses compressive sampling in an optical spectrograph product, though they call it "Hadamard sampling" or some such.

All of these systems are based on "learning" the signal while sampling and then undersampling based on what has been learned. They learn the signal by sampling randomly (to check all spectral content over time), and optimize the periodic sampling based on this. This has been said above as well: once you know “something” about the signal, you can adapt your sampling rate accordingly.

Because this way of sampling adapts to the signal being measured, it can achieve better results than approaches where the compromises in the sampling scheme were made before sampling started.

So for me this means that if you MUST undersample (for bandwidth reasons, for example), you lose less data by using an adaptive sampling approach than a fixed undersampling approach, but nobody claims you do not lose any data compared to Nyquist sampling.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6722
  • Country: nl
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #22 on: November 25, 2017, 11:13:13 am »
Do a search on "compressive sensing" after the Internet recovers from the Black Friday traffic.  As a single example, I'll cite the speed up in MRI data acquisition resulting from randomized sampling.

I can't escape the feeling there is some sleight of hand going on here. The magnetic gradients are inherently highly linear, and the ADC is just going to acquire samples at a fixed frequency. Where is the random subsampling coming from?

The Hadamard pattern TI uses isn't subsampling, it's sampling an optical transform of the spectrum.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #23 on: November 25, 2017, 01:42:13 pm »

I can't escape the feeling there is some sleight of hand going on here. The magnetic gradients are inherently highly linear, and the ADC is just going to acquire samples at a fixed frequency. Where is the random subsampling coming from?

The Hadamard pattern TI uses isn't subsampling, it's sampling an optical transform of the spectrum.

No sleight of hand.  It is not guaranteed to always work.  But *most* of the time it does.  This is work by some of the top mathematicians in the world.  Candes and Donoho are at Stanford, Baraniuk at Rice, Jared Tanner at Oxford. The list goes on.  There are now a slew of very intense graduate level monographs on the subject.  I strongly recommend "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut for anyone who has the stomach for heavy math.  So far as I know it was the first, appearing in 2013.

As for the TI product,  RTFM. They don't explain it, just the advantages.  I was going to make a pitch for a free sample to implement compressive sensing, but once I read the manual I realized they had already done it.  The supplied software operates the linear photodiode array in two modes, regular sampling and randomized sampling.

There's nothing that says an ADC has to acquire data at a fixed rate. It acquires data on a trigger signal.

Here are a couple of papers.  Read the introductions and look at the pictures.  The math is optional.  I use the simplex solver in GLPK for this, but there are now numerous algorithms which are faster.

https://statweb.stanford.edu/~donoho/Reports/2004/CompressedSensing091604.pdf

https://statweb.stanford.edu/~donoho/Reports/2007/CSMRI-20071204.pdf
 

Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5234
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #24 on: November 25, 2017, 01:58:49 pm »
As always the devil is in the details.  Shannon is watertight when you apply all of the conditions, which include a bandlimited signal.  In the real world there is no such thing, since there is always noise.  But a well-filtered signal is close enough to give good results in a very broad set of circumstances.  These new approaches do not guarantee perfect reproduction, but get close enough in a broad set of circumstances.

They are the essence of good engineering - figuring out how to cheat on the "rules" to get a useful result.
 

