Author Topic: Fast ADC sampling at random intervals with a Zynq?  (Read 7492 times)


Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #25 on: February 28, 2018, 12:56:32 am »
but I don't see how you can do that for the general case like an oscilloscope
Me neither. Some data are compressible, some are not. If a signal can be characterized by regular sampling containing N samples, which, in general case, are random and uncorrelated.
The thing is that samples from a signal are never uncorrelated. After all a signal can always be described as a series of frequencies at a certain point in time. You don't need all the points of a wave to reconstruct it. You only need enough information to be able to reconstruct it unambiguously.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4427
  • Country: dk
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #26 on: February 28, 2018, 01:07:56 am »
but I don't see how you can do that for the general case like an oscilloscope
Me neither. Some data are compressible, some are not. If a signal can be characterized by regular sampling containing N samples, which, in general case, are random and uncorrelated.
The thing is that samples from a signal are never uncorrelated. After all a signal can always be described as a series of frequencies at a certain point in time. You don't need all the points of a wave to reconstruct it. You only need enough information to be able to reconstruct it unambiguously.

and if you know nothing about the signal other than the bandwidth that is Nyquist–Shannon 

 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #27 on: February 28, 2018, 01:25:19 am »
but I don't see how you can do that for the general case like an oscilloscope
Me neither. Some data are compressible, some are not. If a signal can be characterized by regular sampling containing N samples, which, in general case, are random and uncorrelated.
The thing is that samples from a signal are never uncorrelated. After all a signal can always be described as a series of frequencies at a certain point in time. You don't need all the points of a wave to reconstruct it. You only need enough information to be able to reconstruct it unambiguously.
and if you know nothing about the signal other than the bandwidth that is Nyquist–Shannon
Perhaps you should try to read about compressive sampling first before succumbing to Pavlov. A long time ago I did some research myself into signal reconstruction, and the idea behind compressive sampling isn't that far-fetched to me.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #28 on: February 28, 2018, 01:32:55 am »
The thing is that samples from a signal are never uncorrelated.

If you sample white noise they are.

You introduce some sort of correlation by input filtering, or by otherwise limiting the bandwidth. If you then sample at a much higher rate (say your input has 100 MHz bandwidth and you sample at 1 GS/s) you introduce a lot of redundancy - you'll get essentially the same result if you sample at a lower frequency. However, if you lower your sample rate to somewhere close to the Nyquist rate, that redundancy disappears. I understand that with random sampling you can remove aliasing by sampling at average frequencies way below Nyquist, but removing aliasing doesn't give you the ability to reconstruct the signal. Say, if you have a spike which is two samples wide, you may entirely miss it if you lower your sample frequency (whether you do regular or random sampling).
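As a quick numerical illustration of that last point (my own toy numbers, nothing measured), a two-sample-wide spike at the full rate is usually lost once you keep only one sample in sixteen, whether the kept samples are on a regular grid or chosen at random:

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
N, decim = 100_000, 16                   # samples at the full rate, decimation factor
trace = np.zeros(N)
spike = np.array([50_003, 50_004])       # a spike only two samples wide
trace[spike] = 1.0

# Regular 1-in-16 decimation: the spike is missed unless it lands on the kept grid.
print("caught by regular decimation:", trace[::decim].max() > 0)

# Random 1-in-16 decimation: estimate how often the spike is caught at all.
trials = 500
hits = sum(np.intersect1d(rng.choice(N, N // decim, replace=False), spike).size > 0
           for _ in range(trials))
print("caught by random decimation in", hits, "of", trials, "trials")   # roughly 12%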

Of course, things change if you sample something periodic and must combine the data from different waves. Regular sampling may happen to be a multiple of the signal period, so you need to tweak your sampling frequency to get good coverage. Random sampling lets you acquire the multi-period sample at any frequency. But how can it be of any advantage with arbitrary non-periodic signals?
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4427
  • Country: dk
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #29 on: February 28, 2018, 01:44:55 am »
but I don't see how you can do that for the general case like an oscilloscope
Me neither. Some data are compressible, some are not. If a signal can be characterized by regular sampling containing N samples, which, in general case, are random and uncorrelated.
The thing is that samples from a signal are never uncorrelated. After all a signal can always be described as a series of frequencies at a certain point in time. You don't need all the points of a wave to reconstruct it. You only need enough information to be able to reconstruct it unambiguously.
and if you know nothing about the signal other than the bandwidth that is Nyquist–Shannon
Perhaps you should try to read about compressive sampling first before succumbing to Pavlov. A long time ago I have done some research myself into signal reconstruction and the idea behind compressive sampling isn't that far fetched for me.

I have no problem with compressive sampling when you have prior knowledge that the signal is compressible, then it is no more magic than bandpass sampling

but how can you know that in the general case of an oscilloscope?
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #30 on: February 28, 2018, 02:48:02 am »
An analog scope only presents  a comb of frequencies determined by the timebase settings.  A digital storage scope acquires a BW determined by the sampling rate.  In typical usage a DSO displays only the same narrow range of frequencies that an analog scope displays.  It is only in single shot mode that you see the full bandwidth acquired by the DSO.  The rest of the time the display is dominated by the periodic component.

What a DSO displays in "normal" mode is a set of boxcars in time specified by the sweep rate, display width and trigger rate.  This is a set  of sinc functions in frequency.  This is basic Wiener-Shannon-Nyquist mathematics.  Relative to the Fourier spectrum set by the  sample rate, that signal may be "sparse" within the requirements of compressive sensing.  However, it may not.  But overwhelmingly the odds are that it is.  Donoho and Candes have presented rigorous proofs.  Should you desire rigorous proofs, you must read their papers. But be warned, after spending 15 pages proving a single theorem, Donoho remarks that doubtless the reader will be relieved to know that the proofs of theorems 2 and 3 are much shorter.  In fact both were 2-3 sentences.
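If anyone wants to see the boxcar/sinc correspondence numerically, here is a minimal NumPy sketch (my own toy sizes, not taken from any scope): the DFT magnitude of a rectangular "boxcar" window matches the periodic sinc (Dirichlet kernel) to machine precision.

Code: [Select]
import numpy as np

N, L = 1024, 64                        # frame length and boxcar width, in samples
boxcar = np.zeros(N)
boxcar[:L] = 1.0

spec = np.abs(np.fft.rfft(boxcar))     # spectrum of the boxcar window

k = np.arange(len(spec))
with np.errstate(invalid="ignore", divide="ignore"):
    dirichlet = np.abs(np.sin(np.pi * k * L / N) / np.sin(np.pi * k / N))
dirichlet[0] = L                       # limiting value at k = 0

print("max deviation from the periodic sinc:", np.max(np.abs(spec - dirichlet)))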

In the figures I posted from F&R, there are 64 frequencies.  However, only 5 frequencies have non-zero coefficients.  There is a very sharp threshold between sparse (and solvable) and not sparse (and not solvable).  In the latter case one must revert to Wiener-Shannon-Nyquist.

If you know nothing about the signal you can only acquire it with a single shot sweep of sufficient duration to capture the entire signal.  DSOs have made things much easier, but consider an analog storage scope.  Do you really expect to see a 1 ns pulse on a 1 ms duration sweep?  That pulse is one millionth of the sweep length.   A modern low end DSO will acquire 10 million samples at 1 GS/s.  But the display is typically less than 1000 samples.  Zoom mode might let you find that in a one shot, but the only way you will see it in normal mode is if it is repetitive.

In practice, the signal of interest usually  dominates the spectrum and is sparse.  The coefficients of the transform at the other frequencies are noise and very low amplitude. This is the basis of all the lossy compression algorithms such as JPEG, MP3, etc.  Compressive sensing simply merges the sampling and the compression into a single operation.
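A minimal sketch of that keep-the-big-coefficients idea (my own made-up signal, not JPEG or MP3 themselves): two dominant tones plus weak noise, keep only a handful of the largest Fourier coefficients, and the reconstruction error is about the size of the discarded noise.

Code: [Select]
import numpy as np

rng = np.random.default_rng(2)
N = 2048
t = np.arange(N)
signal = (np.sin(2 * np.pi * 50 * t / N) + 0.6 * np.sin(2 * np.pi * 173 * t / N)
          + 0.01 * rng.standard_normal(N))            # dominant tones plus weak noise

X = np.fft.rfft(signal)
keep = 8                                               # keep only the largest coefficients
X_compressed = X.copy()
X_compressed[np.argsort(np.abs(X))[:-keep]] = 0.0      # zero everything else

recon = np.fft.irfft(X_compressed, n=N)
err = np.sqrt(np.mean((recon - signal) ** 2) / np.mean(signal ** 2))
print("relative RMS error keeping", keep, "of", len(X), "coefficients:", err)  # ~1%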

DSOs display thousands of waveforms per second.  Is your eye able to discern that?  No.  Even if one uses a probability density  display a single spike will be undetectable.

I am not in the same league with Donoho and Candes.  So I cannot explain this as well as they can.  Both write well and provide all the rigor you can stand.  Unless you *really* care about the fine print, I suggest reading the introductions and skipping the proofs.

http://statweb.stanford.edu/~donoho/reports.html

https://statweb.stanford.edu/~candes/publications.html

There are references on their home pages one level up about lay press discussion of the subject.

Yes, we were taught that all this is wrong, which is why, when I ran into it by accidentally doing it, I *had* to know why.  It cost me a few thousand hours of effort over most of 5-6 years.  If after you have read their papers you still think it's wrong, take it up with them.  I've cited numerous peer reviewed papers.

To repeat an earlier comment, the gist of the matter is solving Ax=y using an L1 criterion, where y is a randomly sampled series and x is the vector of positive-frequency Fourier transform coefficients, and then back transforming using the inverse FFT.  A 5 year old probably would not understand that, but it's as simple as I know how to make it.  A search on "compressive sensing" will turn up a very large number of examples in the form of graphs and images.  The big breakthrough was showing that L1 has *very* different properties from L2.
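For anyone who wants to play with the Ax=y idea, here is a minimal basis-pursuit sketch (my own toy example with made-up sizes and frequencies, leaning on SciPy's linprog rather than GLPK): random-in-time samples of a three-tone signal, a cosine/sine dictionary A, and min ||x||_1 subject to Ax=y written as a linear program.

Code: [Select]
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 128                       # number of dictionary atoms (cosine and sine columns)
M = 40                        # number of random-time measurements, M << N

t = np.sort(rng.uniform(0.0, 1.0, M))          # random sample instants over 1 second
freqs = np.arange(N // 2)                      # candidate frequencies, Hz

# Dictionary A: cosine and sine atoms evaluated at the random sample times.
A = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
               np.sin(2 * np.pi * np.outer(t, freqs))])

# Test signal: only 3 of the 128 atoms are active, i.e. x_true is sparse.
x_true = np.zeros(N)
x_true[5] = 1.0               # cosine at 5 Hz
x_true[N // 2 + 17] = 0.7     # sine at 17 Hz
x_true[30] = -0.4             # cosine at 30 Hz
y = A @ x_true                # the measurements

# Basis pursuit as an LP: x = u - v with u, v >= 0, minimise sum(u + v) s.t. A(u - v) = y.
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("max recovery error:", np.max(np.abs(x_hat - x_true)))   # should be tiny

Note that the average sample rate here (about 40 Sa/s) is well below the Nyquist rate for the highest frequency in the dictionary, which is the whole point.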

If you're a computer geek and know what NP-hard is, then I strongly recommend the 2004 papers by Donoho on the equivalence of L0 and L1 solutions.  To the best of my knowledge, that is a major milestone.  I presume that the computational complexity crowd has been working feverishly on this, but I've not looked into the matter. So far as I know it is the first and only instance where large NP-hard problems have  been solved.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #31 on: February 28, 2018, 03:40:49 am »
In practice, the signal of interest usually  dominates the spectrum and is sparse.  The coefficients of the transform at the other frequencies are noise and very low amplitude. This is the basis of all the lossy compression algorithms such as JPEG, MP3, etc.  Compressive sensing simply merges the sampling and the compression into a single operation.

I see. So, you reconstruct the signal based on dominant frequencies and dismiss everything else. A sine wave would be displayed the best.  Signals which are not "sparse" (such as a perfect square wave) will exhibit some distortions. White noise will not show at all. Just as with JPEG, most of the stuff looks perfect, but there will be some barely-noticeable distortions where you see sharp edges. This probably should work well for the human eye, and a scope is a scope - a tool to "see" things. A JPEG may indeed look just as good as, or sometimes even better than, the underlying raw file.
 

Offline KrudyZ

  • Frequent Contributor
  • **
  • Posts: 278
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #32 on: February 28, 2018, 04:02:07 am »
But then, people pay the big bucks for oscilloscopes that show them the odd one out, the runt pulse, the one in a million event.
These are the ones you would be willing to discard, which would make the scope useless for many real life debugging purposes.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #33 on: February 28, 2018, 04:07:20 am »
but I don't see how you can do that for the general case like an oscilloscope

Me neither. Some data are compressible, some are not. If a signal can be characterized by regular sampling containing N samples, which, in general case, are random and uncorrelated, then it is absolutely impossible to represent the same signal with N/16 samples keeping the same level of accuracy. Such compression may only be possible if the original samples are not purely random and uncorrelated, but rather restricted in some way, such as limited in bandwidth, periodic etc.

For example, if you compress VHDL code into a ZIP file, you can make it much smaller, but if you try to do the same with random bytes, it will not compress at all.

Not that I know much of this topic, but... a 1 GS/s scope (like a low end Rigol) does not have anything close to 500 MHz of bandwidth. So quite a lot of the information that could be in the 1 GS/s stream can not be used. So maybe there is room for compression...
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #34 on: February 28, 2018, 09:09:12 am »
DSOs display thousands of waveforms per second.  Is your eye able to discern that?  No.  Even if one uses a probability density  display a single spike will be undetectable.
I'm not sure you understand the fundamentals of oscilloscopes if you hold this view. Phosphor and persistence of vision allowed it to work with analog scopes, and now, with adjustable persistence in DSOs, at the extreme infinite setting you hold all captured events until cleared. Yes, it's a probability thing to capture it in the first place with the deadtime restrictions, but rare (even singleton) events are easily visible.

You really need to simulate your idea with realistic signals to get an idea of its limitations before investing the time and effort in a hardware implementation.
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8651
  • Country: gb
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #35 on: February 28, 2018, 09:32:37 am »
I'd like a ELI5 on how throwing away 90% of the data will not lose any information in the general case like an oscilloscope

random interleaved sampling, sequential sampling, equivalent-time etc. sampling scopes work when you know you have a
repetitive signal
Throwing away 90% of the data will lose a lot of information from a completely arbitrary signal. However, just like MP3 throws away a lot of information, but keeps what is important for the particular case of what the human ear can detect, compressive sampling can keep adequate information for signals which fit particular constraints.

There are basically two types of lossy compression - source and sink. Source compression avoids wasting bits encoding what the source can't produce. Sink compression avoids wasting bits encoding what the sink can't detect. MP3 basically applies sink compression. Voice codecs (e.g. for cell phones) apply both source and sink compression, as a human voice can only produce a certain range of sounds. Compressive sampling is a form of source compression. It's lossy for an arbitrary signal, but if the signal fits certain constraints all the bits thrown away only encode things which are never in the signal of interest. The result can, therefore, be lossless for the signal of interest. Compressive sampling does well on signals which are spectrally sparse. It turns out a lot of real world signals are spectrally sparse. Consider that you can fully characterise any pure sine wave with just 3 samples, taken anywhere along the wave. Then, work up from there to things a little less sparse than a pure tone, and you might get an idea of the nature of compressive sampling.
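To make the three-sample remark concrete (a sketch of my own; it assumes the tone frequency is already known, e.g. from the dictionary, so only amplitude, phase and offset remain): three samples taken anywhere pin the sine down exactly.

Code: [Select]
import numpy as np

f = 3.7                                     # assumed known tone frequency, Hz
t = np.array([0.013, 0.170, 0.460])         # three arbitrary sample instants
y = 2.0 * np.sin(2 * np.pi * f * t + 0.8) + 0.1    # the three "measured" samples

# Model y = a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c: three unknowns, three samples.
M = np.column_stack([np.cos(2 * np.pi * f * t),
                     np.sin(2 * np.pi * f * t),
                     np.ones_like(t)])
a, b, c = np.linalg.solve(M, y)

amplitude = np.hypot(a, b)
phase = np.arctan2(a, b)                    # y = amplitude*sin(2*pi*f*t + phase) + c
print(amplitude, phase, c)                  # ~2.0, ~0.8, ~0.1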
« Last Edit: February 28, 2018, 09:53:30 am by coppice »
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8651
  • Country: gb
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #36 on: February 28, 2018, 10:15:33 am »
Not that I know much of this topic, but... on a 1GS/s scope like a (low end Rigol) does not have anything close to 500MHz of bandwidth. So quite a lot of the information that could be in the 1GS/s stream can not be used. So maybe there is room for compression...
The low end Rigol offers 4 channels of 100MHz, sharing a single 1GS/s converter. That's 400MHz of total bandwidth, which isn't far behind the 500MHz that is theoretically possible.
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #37 on: February 28, 2018, 02:27:46 pm »

Sparse L1 pursuits (which is my preferred term for this) allow having an arbitrarily large dictionary (the A matrix).  That was what I was doing when I stumbled through the looking glass.

So if you construct a dictionary which contains transient events of interest they will easily be found provided that on a statistical basis there is a high probability that there will be samples that coincide with the transient.  The choice of the A matrix is *very* important.  The main focus of F&R is the mathematical properties of the A matrix.  To a large degree the choice of A matrix governs how many samples you need for a particular signal.

In a regular Fourier basis, square waves whether perfect or not are *very* sparse.  The longer the trace, the sparser it gets.

Fig 1.1 in F&R is a pair of photos.  One is at original size and the other is a JPEG at 99% compression (it's printed in a BW half tone).  It is anything but a repetitive signal and I can find no discernible differences.  Even by crossing my eyes and looking at them in stereo.

This is entirely a statistical wager.  It is NOT guaranteed to work.  In fact, it is guaranteed to fail if the x vector is not sparse.  A very important  variation on this is denoising data.

By comparing the reconstructed signal to the acquired data one can generate an error trace showing samples not well represented.  That's far more useful than a 256 level histogram generated at 130,000 waveforms per second.

I do want to try for more BW from a 500 MS/s ADC, but as a first trial regular sampling and random decimation is clearly the way to go.  It's much simpler to implement and will let me evaluate the computational costs of solving the L1 problem.  The charge transfer errors can be quantified and incorporated into the A matrix, but I think that is best left for later.

One interesting aspect of this is that once one has the reconstruction,  a great deal of analysis can be done.  Consider a quasi square wave with jitter.  Once one has solved for the Fourier spectrum, one can pick the modes of the harmonics, synthesize a jitter free representation and then compute statistics on the jitter and other errors.

In all cases, Shannon still applies. Nyquist does not apply in compressive sensing because we no longer have a regular spike series.  But the Shannon information content is  quantified in the basis chosen for the A matrix.  For example, a chirp is much broader in a Fourier basis than it is in a basis which consists entirely of chirps.  One could choose a chirp basis in which a single coefficient in the x vector completely described the chirp.  This sort of thing is best covered by Mallat in the last few chapters.  All of this merges almost seamlessly with wavelet transforms.  Many theorems critical to compressive sensing were first proved by Mallat et al.
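A quick sketch of the chirp remark (my own arbitrary parameters): the same linear chirp needs many Fourier bins to hold most of its energy, but a single coefficient against a matched chirp atom captures it all.

Code: [Select]
import numpy as np

fs, T = 1000.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)
f0, k = 50.0, 100.0                                   # start frequency and sweep rate
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# How many Fourier bins does it take to hold 99% of the chirp's energy?
spec = np.abs(np.fft.rfft(chirp)) ** 2
bins = np.searchsorted(np.cumsum(np.sort(spec)[::-1]), 0.99 * spec.sum()) + 1
print("Fourier bins needed for 99% of the energy:", bins)           # many

# Against a matched, unit-norm chirp atom, one coefficient holds everything.
atom = chirp / np.linalg.norm(chirp)
coeff = atom @ chirp
print("energy captured by that single coefficient:", coeff**2 / (chirp @ chirp))  # 1.0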

All of this is a consequence of a very subtle but important difference between L2 and L1.  In the past L1 was almost as intractable as L0 which is NP-hard.  Generally for an NP-hard problem you have to try *all* the possible answers to find the best one.  So relatively little work was done on L1 solutions outside of operations research which only got started in the 40's.
« Last Edit: February 28, 2018, 06:54:36 pm by rhb »
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #38 on: February 28, 2018, 08:05:59 pm »
Not that I know much of this topic, but... on a 1GS/s scope like a (low end Rigol) does not have anything close to 500MHz of bandwidth. So quite a lot of the information that could be in the 1GS/s stream can not be used. So maybe there is room for compression...
The low end Rigol offers 4 channels of 100MHz, sharing a single 1GS/s converter. That's 400MHz of total bandwidth, which isn't far behind the 500MHz that is theoretically possible.

The Rigol DS1000Z spec sheets say "Analog channel: 1 GSa/s (single-channel), 500 Msa/s (dual-channel), 250 MSa/s (3/4-channel)", so I guessed that they had four 250 MSa/s ADCs, which could be interleaved for the higher sample rates when using fewer channels.

However, as you say, at the four-channel / 250 MSa/s setting you are not guaranteed to have much wiggle room left for compression.
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4427
  • Country: dk
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #39 on: March 01, 2018, 09:07:16 am »
Not that I know much of this topic, but... on a 1GS/s scope like a (low end Rigol) does not have anything close to 500MHz of bandwidth. So quite a lot of the information that could be in the 1GS/s stream can not be used. So maybe there is room for compression...
The low end Rigol offers 4 channels of 100MHz, sharing a single 1GS/s converter. That's 400MHz of total bandwidth, which isn't far behind the 500MHz that is theoretically possible.

The Rigol DS1000Z Spec sheets say "Analog channel: 1 GSa/s (single-channel), 500 Msa/s (dual-channel), 250 MSa/s (3/4-channel)", so I guessed that they had four 250Sa/S ADCs, which could be interleaved for the higher sample rates when using fewer channels.

I guess it is something like this: http://www.analog.com/media/en/technical-documentation/data-sheets/hmcad1511.pdf
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #40 on: March 01, 2018, 09:16:12 am »
Not that I know much of this topic, but... on a 1GS/s scope like a (low end Rigol) does not have anything close to 500MHz of bandwidth. So quite a lot of the information that could be in the 1GS/s stream can not be used. So maybe there is room for compression...
The low end Rigol offers 4 channels of 100MHz, sharing a single 1GS/s converter. That's 400MHz of total bandwidth, which isn't far behind the 500MHz that is theoretically possible.

The Rigol DS1000Z Spec sheets say "Analog channel: 1 GSa/s (single-channel), 500 Msa/s (dual-channel), 250 MSa/s (3/4-channel)", so I guessed that they had four 250Sa/S ADCs, which could be interleaved for the higher sample rates when using fewer channels.

I guess it is something like this: http://www.analog.com/media/en/technical-documentation/data-sheets/hmcad1511.pdf

Yes - exactly like that!

Quote
The HMCAD1511 is a versatile high performance low power analog-to-digital converter (ADC), utilizing time-interleaving to increase sampling rate
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #41 on: March 02, 2018, 05:09:11 pm »
Here's a crack at explaining compressive sensing to a very smart 12 year old.

You need only 3 measurements to determine the amplitude and phase of a sine wave.  So if the signal is the sum of 12 sine waves, you need 36 measurements.  This is basically the Shannon information content.  So if you have a signal which has a maximum frequency of 1 MHz, the Nyquist sampling interval needs to be 1/2 microsecond.  However, if you collect only 18 microseconds of data you will have very poor frequency resolution.  Windowing in time is a convolution with a very broad sinc(x) in frequency.  So the result will be useless.
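(To put a rough number on that - this arithmetic is mine, not part of the original argument: an 18 microsecond record gives a frequency resolution of only about 1/T = 1/(18 us) ~= 56 kHz, so tones in the 1 MHz band closer together than that smear into one another.)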

On the other hand, if you randomly collect 36 samples over a longer interval, the window in time will be longer and the resolution in frequency will be better.  If it is "long enough" the Fourier spectrum will have many near-zero coefficients.  They will not be zero because sinc(x) extends to infinity, but they will be so small that they are not of interest.  They are just an artifact of the sampling window.  The correct answer is the amplitude and phase of 12 sinusoids.  However, in general the sinusoids will not correspond exactly to a bin in the DFT.  So one will have more than 12 coefficients with significant amplitude.  But for enough samples and a long enough window, the DFT will be "sparse".  If one tries every possible combination of P frequencies from the N frequencies of the DFT and chooses the one which, when evaluated at the sample times, best matches the data samples, you will have the optimal DFT of the data series.

For any realistic example, evaluating the error for all 1 <= P <= N possible combinations is computationally impossible.  It is NP-hard: the number of possible combinations grows factorially fast. Until 2004 that was what everyone believed to be the case.  There were a few instances where this was done without mathematical proof, much as Heaviside used operational methods for years before the mathematicians proved why they worked. It took quite a while to prove why, and under what restrictions, the Fourier transform works.

In these two papers:

https://statweb.stanford.edu/~donoho/Reports/2004/l1l0approx.pdf
https://statweb.stanford.edu/~donoho/Reports/2004/l1l0EquivCorrected.pdf

Donoho proves that an L1 solution of Ax=y  is equivalent to an L0 (i.e. NP-hard) solution *if and only if* the solution is "sparse" subject to some restrictions on the nature of the A matrix. Skip the math, just read the introductions.

This is a completely general result. It applies to *any* arbitrary signal and *any* arbitrary transform.  If a sparse representation exists in any domain, with sufficient samples collected in such a manner that all the columns of A in Ax=y have the Restricted Isometry Property such that the coherence of any combination of P of the N columns is small enough, then the x vector will be sparse and unique.

Suppose that a column is the sum or difference of two other columns and the crosscorrelation of any pair is zero.  That violates the RIP requirement and thus the problem cannot be solved.

The mathematical fine print for what are "enough" samples and what is a "sparse" solution is *very* painful.  It makes the rules for the Fourier transform look simple and was a wild ride through things I'd never heard of.

In the end, the only practical way to find out if a problem meets the necessary criteria is to attempt to solve it.  If it works the criteria have been met and if it doesn't they have not.  Choosing a different A matrix might allow solving it.  The only way to find out is to try it.  In practice it turns out that for an overwhelming fraction of all possible problems it succeeds.  Rigorous bounds have been derived for the sampling requirement, but in general evaluating the bounds is NP-hard.

This blew my mind when I stumbled into it.  It still does.  Which is why I want to try it out on real signals.  A Zynq-based DSO just happens to be a very convenient platform for testing it.

As an aside, randomly selecting samples collected at regular intervals has a problem.   For a long enough series one can approximate the random series as the sum of several regularly sampled series.  This means that there will be regular spikes in the FT which are closely spaced (large interval in time == small interval in frequency).  This will result in a lot of aliasing; however, it should be very low amplitude relative to the peaks found via an L1 solution of Ax=y.  It is analogous to quantization errors in a traditional analysis.  Even with perfectly random sample times, the finite representation required to feed the problem to a computer will generate low amplitude noise artifacts.
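A minimal sketch of that aside (my own toy sizes): compare the spectrum of a regular 1-in-16 decimation mask with a random mask keeping the same number of points.  The regular mask puts all its energy into a few strong spectral lines (the aliases); the random mask spreads it into a much lower noise-like floor.

Code: [Select]
import numpy as np

N, decim = 4096, 16
keep = N // decim                                   # keep 1 sample in 16 on average

regular = np.zeros(N)
regular[::decim] = 1.0                              # regular decimation mask

random_mask = np.zeros(N)
random_mask[np.random.default_rng(1).choice(N, keep, replace=False)] = 1.0

for name, mask in [("regular", regular), ("random ", random_mask)]:
    spec = np.abs(np.fft.rfft(mask))
    spec[0] = 0.0                                   # ignore the DC term
    print(name, "mask, largest off-DC spectral line:", spec.max())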

Edit: added "such"  to read "such that the coherence" as intended.
« Last Edit: March 02, 2018, 07:51:29 pm by rhb »
 

Offline KrudyZ

  • Frequent Contributor
  • **
  • Posts: 278
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #42 on: March 02, 2018, 06:04:57 pm »
While this sounds all very interesting, I'm not convinced how this could be applicable to a real time oscilloscope for the majority of use cases.
Very quick intuitive example:
You have the output of a geiger counter and want to measure the pulses coming out of it and keep count.
The pulse spacing is completely random and the pulse width is short.
Doing random sampling will miss many if not most of the pulses and the missing information cannot be recovered from the frequency spectrum of the locations where the sampling did occur.
Same thing with bit error rate testing. You need to sample on every bit for the measurement to have any value.
Real time scopes are used to find exactly the parts of a signal that deviate from the norm and are not repetitive. Since those would be rare, they would get dropped by the compressive sampling algorithms (unless I'm misunderstanding this)...
So again, what kind of signals do you have to measure, where this approach would have any clear advantage over a standard oversampled system?
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #43 on: March 02, 2018, 07:49:22 pm »

The pulses are of finite duration.  The sampling requirement is that some portion of the pulse is observed.  Low pass filtering the pulses might reduce the required mean sample rate by broadening the pulses.  However, it's important to keep in mind that Geiger counter measurements are purely statistical.  No meaning is attached to an individual pulse.  So over any meaningful period of time, dropping a small number of pulses would not significantly  change the result.

Run length digital compression seems a pretty good analogy to analog compressive sensing in a lot of settings.  In a bit error rate case, when the mean sampling rate exceeds some factor times the bit rate, CS *will* apply.

I don't know if there is any advantage or not.  It's a trade off between sampling and compute.  I am not claiming this is better.  I'm merely commenting upon an ongoing experiment.  I'm not working in a corporate R&D program, so I can comment whereas others cannot.

I feel very confident that Keysight, Tektronix and Rohde & Schwarz have staff looking at compressive sensing.  Management would be grossly negligent if they were not investigating it.  Take a look at the single-pixel camera papers by Mark Davenport and other students of Richard Baraniuk at Rice. TI is using CS in their IR spectrometer product software library. After 14 years, there are tens of thousands of pages of peer reviewed professional papers in engineering and mathematics on the subject.  Unfortunately, many of them are in IEEE publications and difficult or expensive to access.  Look at Nicholas Tzou's dissertation at Georgia Tech. He built a compressive sampling DSO for his dissertation.  If your PhD project fails you have to start over.  This is not something I made up.

There is a rather long list of mathematicians  at Stanford, Rice, Oxford and other places of similar rank who  vouch for the statement in the quote below.  What I've said is probably not *exactly* correct, but I don't think anyone wants to endure the 400 pages of math needed for a more accurate statement.  I surely don't.  Reading it twice was enough.  I'll read it again if I run into problems that suggest I missed something.  I've done enough numerical experiments that I doubt I've missed any show stoppers.  I've just not tried doing it on a Zynq.

This is a completely general result. It applies to *any*  arbitrary signal and *any* arbitrary transform.  If a sparse representation exists in any domain, with sufficient samples collected in such a manner that all the columns of A in Ax=y have the Restricted Isometery Property such that the coherence of any combination of P of the N columns is small enough, then the x vector will be sparse and unique.

NB I corrected a phrase to read  "such that the coherence" in the quote as was intended. 
 

Online KE5FX

  • Super Contributor
  • ***
  • Posts: 1894
  • Country: us
    • KE5FX.COM
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #44 on: March 02, 2018, 10:31:42 pm »
A lot of his links are dead now, unfortunately, but I found Terence Tao's 2007 blog post to be a good introduction to compressive sensing.  Tao is one of those rare mathematicians (a Fields Medalist, no less) who can communicate effectively with us uninitiated muggles.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #45 on: March 02, 2018, 11:04:45 pm »
A lot of his links are dead now, unfortunately, but I found Terence Tao's 2007 blog post to be a good introduction to compressive sensing.  Tao is one of those rare mathematicians (a Fields Medalist, no less) who can communicate effectively with us uninitiated muggles.

Interesting article. Unlike the OP, the author doesn't seem to imply that the method produces better results, merely that it saves storage space at the expense of increased processing effort.
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #46 on: March 02, 2018, 11:14:16 pm »
Good article.  I'd not seen it.  Thanks for pointing it out.  It connects nicely to Mallat and wavelet transforms.  I was doing basis pursuit for the simple reason that it produces a globally optimal answer and I did not have to write the solver.  I just wrote a program that generated an input for the glpsol program in GLPK and then ran it.

I only went back and read all of Mallat after I made the first pass through F&R.  I'd had the 2nd edition of Mallat and bought the 3rd, but had never been motivated enough to suffer through all that very unfamiliar math.  They just sat on the shelf awaiting a day when I might need them.

Of course, 10 years later there is a staggering amount of additional work that has been done including multiple graduate level mathematics  texts such as Foucart and Rauhut.

Sparse L1 pursuits go far beyond just compressive sensing.  Whether acquiring data with an effective sample rate of, say, 10 GS/s with a slower ADC is "better" or not depends upon your application.  I've still not sorted out the relationship between the sample time granularity and the BW limit.  The fact that it eliminates the need for anti-alias filters seems to me a fairly significant advantage.  But TANSTAAFL.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #47 on: March 02, 2018, 11:51:15 pm »
Look at Nicholas Tzou's dissertation at Georgia Tech. He built a compressive sampling DSO for his dissertation.
All of that work relied on sampling a signal that was sparse and/or repetitive; it's not a general purpose signal acquisition system. These techniques have useful applications, but not as a general purpose oscilloscope, and the benefits compared to the well established methods of equivalent time sampling are questionable.
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8651
  • Country: gb
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #48 on: March 03, 2018, 10:32:43 am »
Donoho proves that an L1 solution of Ax=y  is equivalent to an L0 (i.e. NP-hard) solution *if and only if* the solution is "sparse" subject to some restrictions on the nature of the A matrix. Skip the math, just read the introductions.

This is a completely general result. It applies to *any*  arbitrary signal and *any* arbitrary transform.  If a sparse representation exists in any domain, with sufficient samples collected in such a manner that all the columns of A in Ax=y have the Restricted Isometery Property such that the coherence of any combination of P of the N columns is small enough, then the x vector will be sparse and unique.
No. This doesn't apply to any arbitrary signal. It applies to any arbitrary signal which is sparse. Leave out the word sparse and you've lost most of the audience. The distinction here is that we have been lossy compressing specific forms of sparse signal for decades - voice, images, etc. - but compressive sampling works for any arbitrary sparse signal. You don't need to know the characteristics of the signal in advance and apply a carefully crafted compression which suits the signal's characteristics.

Compressive sampling will not make an oscilloscope that will answer questions like "what the heck is happening on that pin" because what is happening on that pin may not be sparse. It might permit useful instruments which allow people to look in depth at a signal which is known to be sparse. It should be possible for those instruments to also give an indication that compressive sampling isn't working for the particular signal being observed, and that some other tool may be needed.
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3483
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #49 on: March 03, 2018, 02:37:06 pm »
Donoho proves that an L1 solution of Ax=y  is equivalent to an L0 (i.e. NP-hard) solution *if and only if* the solution is "sparse" subject to some restrictions on the nature of the A matrix. Skip the math, just read the introductions.

This is a completely general result. It applies to *any*  arbitrary signal and *any* arbitrary transform.  If a sparse representation exists in any domain, with sufficient samples collected in such a manner that all the columns of A in Ax=y have the Restricted Isometery Property such that the coherence of any combination of P of the N columns is small enough, then the x vector will be sparse and unique.
No. This doesn't apply to any arbitrary signal. It applies to any arbitrary signal which is sparse. Leave out the word sparse and you've lost most of the audience. The distinction here is that we have been lossy compressing specific forms of sparse signal for decades - voice, images, etc. - but compressive sampling works for any arbitrary sparse signal. You don't need to know the characteristics if the signal in advance, and apply a carefully crafted compression which suits the signal's characteristics.

The counter example is that I may craft a dictionary which contains *any* arbitrary signal as one of the columns of A.
 
Consider an A matrix composed of random values and a y vector which is a single column from A.  I had that fail to produce an x vector with a single non-zero value and thought I had found a bug in GLPK.  I was quickly told to use a better PRNG such as the Mersenne Twister.  Once I had a properly random A it worked just fine.  I should note that with the standard C PRNG I got a sparse result.  It just wasn't a single non-zero result, because of periodicities in the C PRNG.
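That test is easy to reproduce (a sketch of my own, using SciPy's linprog instead of glpsol, with made-up sizes): with a well-behaved random dictionary, basis pursuit on a y taken as a single column of A should come back with exactly one non-zero coefficient.

Code: [Select]
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)        # a modern PRNG (PCG64); cf. the Mersenne Twister remark
M, N, k = 32, 128, 17
A = rng.standard_normal((M, N))
y = A[:, k]                            # the "signal" is literally one dictionary atom

# min ||x||_1 s.t. Ax = y, as an LP with x = u - v and u, v >= 0.
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x = res.x[:N] - res.x[N:]
print("non-zero coefficients:", np.flatnonzero(np.abs(x) > 1e-6))   # expect just [17]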

Compressive sampling will not make an oscilloscope that will answer questions like "what the heck is happening on that pin" because what is happening on that pin may not be sparse. It might permit useful instruments which allow people to look in depth at a signal which is known to be sparse. It should be possible for those instruments to also give an indication that compressive sampling isn't working for the particular signal being observed, and that some other tool may be needed.

Except for one-shots, scopes are designed to show signals which are repetitive.  What is displayed is an overlay of a short window in time whose start is either periodic (in the case of recurrent-sweep "auto" mode) or periodic relative to some arbitrary trigger.

This is an engineering R&D  project.  The mathematics are well established.  The only unknowns are implementation details related to the behavior of the ADCs and the computational burden of solving Ax=y.

My original question has been well addressed by the suggestion to just throw away all but randomly selected values.  For an initial prototype that is quite sufficient.  That suggestion also provided me with insight into how to analyze the Fourier transform of a random spike series.  I have struggled on and off for 10 years to get a clear mental picture of the Fourier transform of a random  spike series.

My original interest in the transform of a random spike series was sparked by reading a dissertation on the regularization of seismic data by one of Mauricio Sacchi's students in Alberta.  It had been suggested to me as a potential commercial software product.  I spent several months working on it but concluded that the method had flaws and dropped the project.  I now question whether the perceived flaws were real or a misunderstanding on my part. The "flaws" were very similar to some of the arguments raised here.

 

