Author Topic: Fast ADC sampling at random intervals with a Zynq?  (Read 7473 times)


Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Fast ADC sampling at random intervals with a Zynq?
« on: February 26, 2018, 05:12:46 pm »
A key requirement for compressive sampling of a 1D data stream is taking samples at random delays from the previous sample.  The effective BW is a function of the timing granularity, so randomly sampling one of 8 clock phases within a randomly chosen clock interval would increase the BW by a factor of 8.

Is this possible?
 
Can one set up a descriptor in the Xilinx DMA IP that specifies a particular clock cycle and phase for taking a sample?

Can one stream the interval to the next sample to the DMA engine fast enough?

Where should I look for that level of detail in the Xilinx documentation?

I've been through a good bit of the DMA documentation, but so far it has all been introductory material.  It's introduced me to the nomenclature.  The general DMA structure is what I would expect, but I've not seen anything I recognized as controlling sampling at that granularity.
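
To make the timing side concrete, here is a tiny numpy sketch of the kind of sample-time stream I have in mind (the clock rate, phase count and interval range are placeholders, not a design):

Code:
import numpy as np

rng = np.random.default_rng(1)           # stand-in for the hardware PRNG

T_CLK = 2e-9        # assumed 500 MHz base sample clock
N_PHASES = 8        # 8 selectable clock phases -> T_CLK/8 timing grid
N_SAMPLES = 1000

# each sample sits a random whole number of clock periods (1..8) after the
# previous one, plus one of the 8 sub-phases
delays = (rng.integers(1, 9, N_SAMPLES) * T_CLK
          + rng.integers(0, N_PHASES, N_SAMPLES) * (T_CLK / N_PHASES))
t = np.cumsum(delays)                    # absolute sample times on a T_CLK/8 grid

print("timing granularity:", T_CLK / N_PHASES, "s")          # 0.25 ns
print("mean sample rate  :", N_SAMPLES / t[-1] / 1e6, "MS/s")

The point of the sketch is only that the effective timing grid, and hence the achievable BW, is T_CLK divided by the number of selectable phases.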

 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #1 on: February 26, 2018, 05:22:27 pm »
I think you'll need a two-step approach:
1) A clock domain which does the sampling. By varying the phase using the DPLL (if it allows real-time adjustments) you can subdivide the sampling interval.
2) A FIFO which takes the data from the sampling domain into the regular clock domain.

The complexity depends on how fast you want to sample.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #2 on: February 26, 2018, 05:32:04 pm »
Are you asking about XADC?
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #3 on: February 26, 2018, 05:39:38 pm »
I want to go as fast as possible.  I've made an initial pass thru the ADC08DL502 datasheet, but have not looked into the details of triggering it.  I'm still looking for likely obstacles.   The XADC would be interesting only as a prototype.

The FIFO is an excellent point.  I had thought of that for implementing run length compression of LA data, but hadn't thought about it in the context of random sampling.   That will greatly simplify things as it doesn't require complex DMA descriptors.
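
Since run-length compression of the LA data came up, a toy sketch of the encoding (the PL version would just be a counter and comparator; the 8-bit run counter here is an assumption):

Code:
def rle_encode(samples):
    """Run-length encode a stream of logic-analyzer words as (value, run) pairs."""
    out = []
    prev, run = samples[0], 1
    for s in samples[1:]:
        if s == prev and run < 255:      # assumed 8-bit run counter
            run += 1
        else:
            out.append((prev, run))
            prev, run = s, 1
    out.append((prev, run))
    return out

print(rle_encode([0x00] * 10 + [0xFF] * 3 + [0x0F]))
# -> [(0, 10), (255, 3), (15, 1)]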
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #4 on: February 26, 2018, 06:08:40 pm »
I was just looking at the ADC08DL502 datasheet some more and found this juicy bit of verbiage:

"Fine Phase Adjust. The phase of the ADC sampling clock is adjusted monotonically by the value in this field.
00h provides a nominal zero phase adjustment, while FFh provides a nominal 50 ps of delay. Thus, each code
step provides approximately 0.2 ps of delay.

Coarse Phase Adjust. Each code value in this field delays the sample clock by approximately 65 ps. A value of
00000b in this field causes zero adjustment.

 Intermediate Phase Adjust. Each code value in this field delays the sample clock by approximately 11 ps. A
value of 000b in this field causes zero adjustment. Maximum combined adjustment using Coarse Phase Adjust
and Intermediate Phase adjust is approximately 2.1ns."

Of course, I'm likely to be tormented for such ambitions like Tantalus.  I'd need to be able to write the register very rapidly and it seems unlikely that TI is expecting that.  Whatever the outcome, it won't be boring.  0.2 ps granularity would imply a 2500 GHz Nyquist, which rather obviously is NOT going to happen.  Even with compressive sampling that would require at least 250 GB/s to DDR.  But it might be that all of the random delay timing can be handled by writing a PRN to the ADC sample phase registers and feeding the output to a FIFO which gets unloaded by the DMA engine to DRAM.  If that works then the bottleneck moves to the NEONs to solve an L1 minimization, which is its own special place in Hell.
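
Taking the numbers quoted above at face value, a quick sanity check of the adjustment range (the register codes below are arbitrary examples, not recommended settings):

Code:
# Per the quoted datasheet text: fine ~0.2 ps/code (0x00..0xFF, ~50 ps span),
# intermediate ~11 ps/code (3 bits), coarse ~65 ps/code (5 bits),
# coarse + intermediate together limited to roughly 2.1 ns.
FINE_PS, INTER_PS, COARSE_PS = 0.2, 11.0, 65.0

def phase_delay_ps(fine, intermediate, coarse):
    return fine * FINE_PS + intermediate * INTER_PS + coarse * COARSE_PS

print(phase_delay_ps(0xFF, 0, 0))    # ~51 ps  (fine field alone)
print(phase_delay_ps(0, 7, 31))      # ~2092 ps, close to the stated 2.1 ns limit

So the registers span roughly 0-2.1 ns in sub-ps steps; the real obstacle looks like the rate at which the configuration registers can be rewritten, not the step size.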
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #5 on: February 26, 2018, 06:55:17 pm »
How about clocking the ADC from a spread-spectrum oscillator if you want random intervals? A spread-spectrum oscillator is basically a VCXO FM-modulated with noise. What could give problems is when the ADC has an internal cleanup PLL. ADCs with high sample rates usually need clocks with very low jitter. What you want to achieve is the opposite. Perhaps this problem is better solved in the analog domain (noise source + VCO).
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #6 on: February 26, 2018, 06:59:02 pm »
I want to go as fast as possible.  I've made an initial pass thru the ADC08DL502 datasheet, but have not looked into the details of triggering it.

Then it entirely depends on the ADC, because the ADC does the sampling, not the FPGA.  The phase adjustments through the ADC registers will have some lag, so adjusting on a per-edge basis is most likely not doable.

It is possible to generate the ADC sample clock with the FPGA.  When you want to change the phase of individual edges, you can try to use routing delays or figure out a way to use IDELAY elements to delay the clock's edges.  I think this is possible, but it requires a certain degree of familiarity with the FPGA fabric.  Then it is a question of how the ADC is going to react to such a clock - I don't think you can find the answer to this in the datasheet, so the only way is to experiment.

Once you get the data into the FPGA you can de-serialize it as wide as you can. You pass the data through a FIFO which brings it into the clock domain with a regular clock (as opposed to your random ADC sampling clock), then you can pass it to your DMA (if it has enough bandwidth).

 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #7 on: February 26, 2018, 09:48:11 pm »
It may be worth considering using a high-speed transceiver to drive the ADC clock.  That way you could adjust the clock within a bit time without any clocking or IDELAY magic.

It won't be perfectly random, but with a 6 Gb/s transceiver you will be able to move the clock edge with one-bit-time (~167 ps) granularity, and you will be able to accurately timestamp things, and it will not rely on assumed behaviours and latency of PLLs and delay blocks.
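
A rough sketch of the pattern-generation idea (line rate and word width are assumptions, and this isn't tied to any particular transceiver primitive): the serializer just streams words, and where the 0-to-1 transition falls inside each word is where the ADC clock edge lands, in steps of one bit time.

Code:
LINE_RATE = 6e9                     # assumed 6 Gb/s serializer line rate
BIT_TIME_PS = 1e12 / LINE_RATE      # ~167 ps edge-placement granularity

def clock_word(edge_bit, word_bits=32):
    """One serializer word: low for edge_bit bit-times, then high.
    The rising edge of the ADC clock lands edge_bit bit-times into the word."""
    assert 0 <= edge_bit < word_bits
    return [0] * edge_bit + [1] * (word_bits - edge_bit)

print(BIT_TIME_PS)              # ~166.7 ps per step
print(clock_word(5, 16))        # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

A real clock also needs the falling edge placed in a later word, but the edge position is known exactly, so the sample times are known without measuring anything.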

Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #8 on: February 26, 2018, 10:34:33 pm »
I have to either know in advance or measure the sample times.  Aside from being difficult, measuring adds to the bandwidth I need to write to DRAM.

The time interval between each sample must be random (Mersenne Twister quality PRNG).  The datasheet for the ADC08DL502 is fairly vague about the phase adjustment.  It's intended to allow adjusting the timing when multiple ADCs are used.  They say that it degrades performance slightly if enabled.  However, it's not at all clear if that is the case for compressive sensing as there is a lot of exotic mathematics involved. It may well be that generating a clock with deterministic jitter is what is needed.  This is obviously not something a datasheet will tell you.  The most the datasheet can do is suggest some experiments.

The idea is to solve the matrix equation Ax = y where the y vector has been sampled at random times.  The A matrix is a Fourier basis evaluated at those times.  If the x vector is "sparse", the L1 solution obtained is provably the optimal L0 solution.  If it is not sparse, the problem is NP-hard and not solvable in practice.  Most of the time it is sparse.

That short paragraph summarizes what I think is the most important piece of applied mathematics since Norbert Wiener's work in the late 30's and early 40's.  Anyone who is interested should look at the 2004 papers on David Donoho's website at Stanford.  The proofs are fairly painful. The proof for one theorem  is 15 pages long, but Donoho writes well and the introduction to each paper describes things very clearly.

Even L1 solutions are very compute intensive. Thirty years ago in grad school L1 was something we simply could not do for practical size problems.  Even L2 was a struggle for large problems on a 4 MB VAX 11/780.  And "large" is pretty small on a time shared machine with 4 MB of memory.

I've only done L1 solutions using the GLPK solvers, which are probably much too slow for the ARM cores in the Zynq.  However, there are other algorithms which are much faster.  Compressive sensing is in routine use for MRI, at least at Stanford, but probably quite a few other places by now, as it dramatically speeds up the data acquisition.
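
To make the Ax = y statement above concrete, a toy numpy/scipy illustration (noiseless, real cosine basis, tiny sizes; the frequencies, times and amplitudes are all made up, and the L1 problem is just handed to a stock LP solver rather than anything fast):

Code:
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# A signal that is 3 tones out of 64 candidate frequencies, sampled at 32 random times
N_FREQ, N_SAMP = 64, 32
freqs = np.arange(1, N_FREQ + 1) * 1e6                # 1..64 MHz candidate tones
x_true = np.zeros(N_FREQ)
x_true[[4, 17, 40]] = [1.0, 0.6, 0.8]                 # the sparse x vector

t = np.sort(rng.uniform(0.0, 2e-6, N_SAMP))           # random sample times in a 2 us window
A = np.cos(2 * np.pi * np.outer(t, freqs))            # Fourier (cosine) basis at those times
y = A @ x_true                                        # the randomly timed samples

# Basis pursuit: min ||x||_1 subject to Ax = y, written as an LP with x = u - v, u, v >= 0
c = np.ones(2 * N_FREQ)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N_FREQ] - res.x[N_FREQ:]

print(np.flatnonzero(np.abs(x_hat) > 0.05))           # ideally the planted tones [4, 17, 40]

Here the L1 step is plain basis pursuit via linear programming; a real implementation would need one of the faster dedicated algorithms mentioned above.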

The structure appears to be something like this:

   create a PRNG in the Zynq PL which writes to the phase registers on the ADC as fast as possible
   feed the ADC output to a FIFO implemented in the PL
   transfer the data from the FIFO to DRAM by AXI DMA
   at the end of a sweep,  trigger a PS interrupt to solve Ax=y (or use the PL with a suitable algorithm)
   sum all Fourier series with non-zero coefficients at regular sample intervals using a PL block
   display the sum on the screen

By using the same seed, the PRNG sequence is known, so the A matrix can be precomputed.

This is all seriously non-trivial.  Georgia Tech awarded a PhD for doing this 4 years ago.  I've played this game of catch-up before, so I'm under no illusions about the effort required.  In this particular instance I already know the math, as I've been studying it for the last 4-5 years.  I know what I have to do.  I just need to work out an implementation.  To be useful I need to either reduce the trace buffer size by a factor of 5-10 or increase the BW by a factor of 5-10.  I rather expect that it will be easier to collect 500 MHz Nyquist data by sampling at an average rate of 50-100 MS/s than it will be to collect 2.5-5 GHz Nyquist data sampling at an average rate of 1 GS/s.

Note, I am abusing the notion of Nyquist.  Because of the random sampling there is no aliasing.  It seems really weird if you've been doing traditional DSP for 30 years, but it's true.  The Fourier transform of a regular spike series is another regular spike series.  I still don't have a strong sense of what the transform of a random spike series looks like other than that there is only a single spike of significant size in the transform.

My standoffs came in the mail so it's time to start assembling some hardware.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #9 on: February 26, 2018, 11:11:52 pm »
Look back at my analog solution: if you sample the signal modulating the VCO then you also have the information about when the sample was taken. It is going to involve more than that (phase delays and so on) but you probably get the gist of it. Another way would be to drive the VCO with a DAC which is fed from the PRNG.
« Last Edit: February 26, 2018, 11:30:36 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4422
  • Country: dk
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #10 on: February 26, 2018, 11:57:12 pm »
is it just a matter of reducing the amount of data that needs to be stored?
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #11 on: February 27, 2018, 01:28:30 am »
@nctnico  But the analog approach would require measuring and storing the delays.  It would also require constantly generating a new A matrix.  Based on the conversation so far, I think that clocking the ADC using a clock with random jitter is probably what will actually work.  With regular sampling, jitter degrades the result; but if accounted for mathematically, it helps.  It should not be difficult to feed a PRNG output to a numerical comparator and generate a clock edge based on the sum of a series of PRNG outputs.
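
A behavioral model of that comparator idea (in the PL it would be a free-running counter, a PRNG and a comparator; the clock period and allowed interval range below are assumptions):

Code:
import numpy as np

rng = np.random.default_rng(2)     # stand-in for the PL PRNG

T_CLK = 1e-9                       # assumed 1 ns counter clock
MIN_CYC, MAX_CYC = 2, 20           # assumed allowed interval in clock cycles

def random_edge_times(n):
    """Counter counts clock cycles; when it reaches the PRNG-chosen target
    the comparator fires a sample-clock edge and the next target is loaded."""
    edges, count = [], 0
    for _ in range(n):
        count += int(rng.integers(MIN_CYC, MAX_CYC + 1))
        edges.append(count * T_CLK)
    return np.array(edges)

t = random_edge_times(1000)
print("mean interval:", np.diff(t).mean())   # around 11 ns for these settings
print("min interval :", np.diff(t).min())    # never shorter than MIN_CYC * T_CLK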

@langwadt  One can reduce the amount of data stored OR increase the effective Nyquist.  Very likely, so long as one meets the sampling requirements, you could do a bit of both.  I shall NOT go into the mathematics of the sampling requirement.  It is very ugly and I'm not that good a mathematician.  In fact, my only virtue in mathematics is a high threshold of pain.

The sampling requirement is governed by what is called the "Restricted Isometry Property".  The RIP is a function of the maximum coherence (i.e. the cross-correlation) among any P of the N columns of A.  Verifying that property is NP-hard.  So in practice you solve the L1 problem and check the result.  If it's correct you're done.  If it's not, the problem is NP-hard and you can't solve it.  Probably the biggest benefit is that random sampling eliminates the need for anti-alias filters.
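
Checking the RIP itself is the NP-hard part, but the pairwise coherence that bounds it is cheap to compute.  A small sketch for the same kind of random-time cosine matrix as before (sizes and times made up):

Code:
import numpy as np

rng = np.random.default_rng(3)
N_FREQ, N_SAMP = 64, 32
t = np.sort(rng.uniform(0.0, 2e-6, N_SAMP))
A = np.cos(2 * np.pi * np.outer(t, np.arange(1, N_FREQ + 1) * 1e6))

A = A / np.linalg.norm(A, axis=0)       # unit-norm columns
G = np.abs(A.T @ A)                     # cross-correlations between columns
np.fill_diagonal(G, 0.0)
print("mutual coherence:", G.max())     # smaller is better for sparse recovery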

I just stumbled into the mathematics by accident.  When I realized I had routinely been doing things I "knew" were impossible I got very interested and went looking for why this was possible.  That led to "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut.  And that blew my mind.

I haven't actually mounted anything yet in the Dell Vostro carcass I'm using.  I stripped out the CD-ROM bracket and the old MB mounts and made cardboard templates for the boards this afternoon.  So except for the PSU there is just a 2-bay 3.5" mount.  That gives me a 9" x 14" open area which should make it reasonably simple to mount the Zybo Z7, BeagleBoard X15, a MicroZed, a gigabit switch and 2-3 USB hubs.  One objective is to see how fast I can log LA data to disk via eSATA and USB 3.0.
 

Offline KrudyZ

  • Frequent Contributor
  • **
  • Posts: 276
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #12 on: February 27, 2018, 04:49:26 am »
Your requirements seem a bit vague.
In order to get any meaningful suggestions, you would need to at least specify the longest and shortest desired interval between two consecutive samples, as well as the required resolution and accuracy.
As you are probably aware, ADC clock circuits are usually designed to MINIMIZE jitter, not to intentionally add to it.
Most high speed, high resolution converters have pipeline architectures and rely on each stage getting the same amount of time.
If you mess with that the conversion results will no longer be linear.
Furthermore, the data outputs are usually phase locked to the sample clock. A little bit of jitter there is OK, but if you are looking for big cycle to cycle variations the data transmission will no longer work.

I would also be interested to hear what type of signals you are trying to work with.
I started reading through the Foucart / Rauhut book you linked to and while I don't pretend to understand the math involved, some of the applications seem very interesting...
« Last Edit: February 27, 2018, 05:16:33 am by KrudyZ »
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #13 on: February 27, 2018, 03:26:41 pm »
Your requirements seem a bit vague.
In order to get any meaningful suggestions, you would need to at least specify the longest and shortest desired interval between two consecutive samples, as well as the required resolution and accuracy.
As you are probably aware, ADC clock circuits are usually designed to MINIMIZE jitter, not to intentionally add to it.
Most high speed, high resolution converters have pipeline architectures and rely on each stage getting the same amount of time.
If you mess with that the conversion results will no longer be linear.
Furthermore, the data outputs are usually phase locked to the sample clock. A little bit of jitter there is OK, but if you are looking for big cycle to cycle variations the data transmission will no longer work.

I would also be interested to hear what type of signals you are trying to work with.
I started reading through the Foucart / Rauhut book you linked to and while I don't pretend to understand the math involved, some of the applications seem very interesting...

Yes.  ADCs are designed that way because all the classical mathematics, based on Wiener's "Extrapolation, Interpolation and Smoothing of Stationary Time Series", requires regular sampling.  The monograph appeared during WW II as a classified report bound in yellow to denote its status and was popularly called "the yellow peril" because of the heavy math it contains.  It's kindergarten level relative to Foucart and Rauhut.

The spatial regularization of seismic data was a perennial topic at the Society of Exploration Geophysicists annual meeting for many years.  There have been a great many MS & PhD theses written on it and innumerable proprietary algorithms.  Then one day we woke up to discover that irregularly sampled data is not a problem, but a virtue.  But we were heavy sleepers and didn't wake up for 5 or 6 years.

For anyone interested in this who lacks a high threshold of mathematical pain, I strongly suggest reading the introductions of the papers at David Donoho's and Emmanuel Candes' websites from 2004 to 2009.  Just skip the math proofs.  Search generally on "compressive sensing" and read the non-mathematical sections of the papers.  I read Foucart and Rauhut twice, and Mallat's "A Wavelet Tour of Signal Processing" 3rd ed., because I wanted to understand why something I *knew* was impossible was actually possible.

For many years I thought that you could regularize data by performing a discrete Fourier transform.  But when I actually got around to trying it, I discovered that it doesn't work because of the L2 assumption built into the definition.

As applicable to a DSO, what is being done is a forward Fourier transform using an L1 (least summed absolute error) algorithm instead of the L2 (least squares) implicit in the DFT, and then doing a normal inverse FFT.  What Donoho and Candes discovered is that the L1 solution has magical properties.

The figures are from Foucart and Rauhut.  The discrete Fourier spectrum shown at the top of Fig 1.2 is evaluated in the bottom part, and then 16 of the 128 samples used to form the time domain trace are randomly chosen.  The top part of Fig 1.3 is the result of attempting to recover the spectrum using a DFT, as I did.  The bottom is the spectrum recovered by solving an L1 problem.  The example is a noiseless case, but it demonstrates an 8x reduction in sampling to exactly recover the 128 coefficient series from its Fourier transform.  A sinc interpolator has been used for the figure in the bottom part of Fig 1.2.

I now feel pretty confident that what is needed is to generate a clock with Gaussian-distributed, zero-mean jitter generated by a PRNG and use that to clock the ADC.  Because of the ADC conversion time, increasing the BW is probably more difficult than decreasing the data rate.  In concrete terms, the shortest time to the next sample must be no less than the mean interval minus two standard deviations.  My inference that the BW is governed by the granularity of the clock timing may not be entirely correct.  I've never had an opportunity to discuss this with anyone else who is familiar with it.  A good friend who is familiar with it won't wrestle with the math unless he's getting paid.
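
A quick way to play with that constraint numerically (the mean interval and sigma are placeholders; intervals below the floor are simply rejected, which of course slightly reshapes the distribution):

Code:
import numpy as np

rng = np.random.default_rng(4)

MEAN_NS, SIGMA_NS = 10.0, 4.0          # assumed mean interval and jitter sigma
MIN_NS = MEAN_NS - 2 * SIGMA_NS        # "no shorter than mean minus 2 sigma" floor

intervals = rng.normal(MEAN_NS, SIGMA_NS, 100_000)
intervals = intervals[intervals >= MIN_NS]            # reject too-short intervals

print("kept fraction   :", intervals.size / 100_000)  # ~0.977 for a 2-sigma floor
print("mean rate (MS/s):", 1e3 / intervals.mean())    # a bit under 100 MS/s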

The first chapter  of F&R is easy going as it just discusses the motivation and applications.  It's really grim after that.  In practice it's like the FFT.  You just run the program once you have the software.

Search on "single pixel camera" for another application.  It's the work of Mark Davenport, Richard Baraniuk et al at Rice and *very* cool.  The math in F&R is also the key element in the algorithm that won the Netflix prize for predicting what movies people might like.

Explaining the details of the concept in response to people's comments has been a huge help.  Without it I would have blundered down several blind alleys.

I started the thread thinking about ADCs in the manner of an MCU where you can set a timer to go off and trigger the ADC sample.  However, the DDR mode of the ADC08DL502  samples on both edges of the clock.  Hence what's needed is a clock with a lot of deterministic  jitter.   As @nctnico pointed out, the AXI DMA stream to DRAM will need to be fed by a FIFO that buffers the irregularly clocked output from the ADC.
 

Offline KrudyZ

  • Frequent Contributor
  • **
  • Posts: 276
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #14 on: February 27, 2018, 05:06:00 pm »
You still need to answer the basic questions regarding your requirements.
What is the maximum and minimum sample to sample delay that you will need for your application and what granularity (step size) can you tolerate in this delay?
You cannot take a modern ADC that is meant to run at 500 Msps and feed it with a clock that has a cycle to cycle jitter of 1 ns.
The only ADCs you could possibly do this with would be a fully parallel FLASH converter without any pipeline stages and a parallel interface.
These are not very common anymore. Designing the clock driver with dynamic delay function for this sounds like a fun project...
With regular multi-stage pipeline ADCs, anything but a simple random decimation of a continuous acquisition stream will be very difficult to pull off.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4422
  • Country: dk
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #15 on: February 27, 2018, 05:49:46 pm »
You still need to answer the basic questions regarding your requirements.
What is the maximum and minimum sample to sample delay that you will need for your application and what granularity (step size) can you tolerate in this delay?
You cannot take a modern ADC that is meant to run at 500 Msps and feed it with a clock that has a cycle to cycle jitter of 1 ns.
The only ADCs you could possibly do this with would be a fully parallel FLASH converter without any pipeline stages and a parallel interface.
These are not very common anymore. Designing the clock driver with dynamic delay function for this sounds like a fun project...
With regular multi-stage pipeline ADCs, anything but a simple random decimation of a continuous acquisition stream will be very difficult to pull off.

yeh, if the aperture window isn't short enough and timed accurately enough to match the higher rate you are screwed, and the converter will have to be able to convert in the shortest time between samples.  So in the end you need a converter that works at the higher rate, and you can then throw away samples.

 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #16 on: February 27, 2018, 05:50:07 pm »
You still need to answer the basic questions regarding your requirements.
What is the maximum and minimum sample to sample delay that you will need for your application and what granularity (step size) can you tolerate in this delay?
You cannot take a modern ADC that is meant to run at 500 Msps and feed it with a clock that has a cycle to cycle jitter of 1 ns.
I'm wondering why the pipeline needs a steady clock at all. A single sample & hold is used to sample the input signal. After this there are several internal sample & hold stages but these don't have to rely on the clock at all. IMHO everything should work just fine as long as the maximum clock frequency doesn't exceed the specification of the ADC. I think this is something which needs to be tested before jumping to conclusions.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline KrudyZ

  • Frequent Contributor
  • **
  • Posts: 276
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #17 on: February 27, 2018, 07:10:45 pm »
Multi-stage ADCs running at high data rates don't have enough time to transfer the full charge between their stages.
Each stage has a SHA and they are all timed off of the sample clock.
Any variation in the stage to stage transfer timing will affect the percentage of charge transferred (different points on the RC slope) causing linearity errors.
When delays are constant the partial charge transfer can be compensated for.
Internal conversion timing is only used in slower (sub-10 MHz) converters.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #18 on: February 27, 2018, 07:13:29 pm »
This particular ADC has some characterization of the acceptable jitter in the datasheet:

Quote
The maximum jitter (the sum of the jitter from all sources) allowed to prevent a jitter-induced reduction in SNR is found to be

tJ(MAX) = (VINFSR / VIN(P-P)) x (1 / (2^(N+1) x π x fIN))

where tJ(MAX) is the rms total of all jitter sources in seconds, VIN(P-P) is the peak-to-peak analog input signal, VINFSR is the full-scale range of the ADC, "N" is the ADC resolution in bits and fIN is the maximum input frequency, in Hertz, at the ADC analog input.
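
Plugging example numbers into that formula (8-bit ADC, full-scale input, and a 250 MHz input tone chosen purely for illustration):

Code:
import math

def tj_max(v_fsr, v_in_pp, n_bits, f_in):
    """Max rms jitter before it degrades SNR, per the datasheet formula."""
    return (v_fsr / v_in_pp) / (2 ** (n_bits + 1) * math.pi * f_in)

print(tj_max(1.0, 1.0, 8, 250e6))      # ~2.5e-12 s, i.e. about 2.5 ps rms

So the budget for uncontrolled jitter is only a few ps at these input frequencies.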
 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #19 on: February 27, 2018, 07:50:40 pm »
You still need to answer the basic questions regarding your requirements.
What is the maximum and minimum sample to sample delay that you will need for your application and what granularity (step size) can you tolerate in this delay?
You cannot take a modern ADC that is meant to run at 500 Msps and feed it with a clock that has a cycle to cycle jitter of 1 ns.
I'm wondering why the pipeline needs a steady clock at all. A single sample & hold is used to sample the input signal. After this there are several internal sample & hold stages but these don't have to rely on the clock at all. IMHO everything should work just fine as long as the maximum clock frequency doesn't exceed the specification of the ADC. I think this is something which needs to be tested before jumping to conclusions.

On the assumption of a 5-10x compression and an effective sample interval of 2 ns, a rough guess is that the samples will be at intervals in the range of 2-100 ns.  I'll need to do some numerical modeling to nail it down more precisely.  The Gaussian-distributed sample intervals are just a guess; a Poisson distribution might work.  I've not read the Georgia Tech dissertation yet.  I just skimmed through it to see what had been done in a general sense.  The comment by @KrudyZ about the stage-to-stage charge transfer would make regular sampling and decimation very desirable.  Otherwise I'd need to track the charge transfer at each stage for each clock period and apply a correction factor.  That might be quite a task.  In any case, I need to generate a Mersenne Twister quality PRNG in the PL fabric and I need to create an AXI DMA engine fed by an interrupt from a fabric FIFO.  Both of those should be pretty easy to find as examples.

Because this is a very different mode of operation than TI's designers had in mind, the only way to find out is to try it in hardware.  That's what I bought the GDS-2072E  for and why I'm limited to the TI ADC08DL502.  I don't see much point in speculating about whether it will work or not with the hardware I have.  The dissertation used a custom board which is more work than I want to undertake.

The mathematics says it will work, but the hardware may decide it does not agree to the terms required by the mathematics.  I had not thought of implementing it by running the ADC at a constant rate and randomly decimating the output.  That should be very straightforward to implement.  I don't think that would get a BW gain, but it would eliminate the need for anti-alias filters and reduce the amount of memory needed to store a long trace.  But it might not be worth the computational overhead.

The acceptable jitter in the datasheet is based on a  Wiener-Shannon-Nyquist analysis.  This is *very* different.

Now if the rest of my parts (power connectors, gigabit switch, etc)  will just show up so I can assemble my dev platform.  I was going to mount the Zybo and BeagleBoard yesterday, but decided to hold off until I had the ethernet switch and USB hubs on hand.  I do *not* want to repeat my usual mistake of not allowing enough space.

In any case, it should be fun.  It's not quite a dissertation-grade project (it's already been done) but it is state of the art.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #20 on: February 27, 2018, 09:18:49 pm »
If you already have a scope then why not simply retrieve the data from it and skip samples randomly? It can sample at 1 GS/s so you have 1 ns granularity.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4422
  • Country: dk
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #21 on: February 27, 2018, 10:57:35 pm »
You still need to answer the basic questions regarding your requirements.
What is the maximum and minimum sample to sample delay that you will need for your application and what granularity (step size) can you tolerate in this delay?
You cannot take a modern ADC that is meant to run at 500 Msps and feed it with a clock that has a cycle to cycle jitter of 1 ns.
I'm wondering why the pipeline needs a steady clock at all. A single sample & hold is used to sample the input signal. After this there are several internal sample & hold stages but these don't have to rely on the clock at all. IMHO everything should work just fine as long as the maximum clock frequency doesn't exceed the specification of the ADC. I think this is something which needs to be tested before jumping to conclusions.

On the assumption of a 5-10x compression and an effective sample interval of 2 ns, a rough guess is that the samples will be at intervals in the range of 2-100 ns.  I'll need to do some numerical modeling to nail it down more precisely.  The Gaussian-distributed sample intervals are just a guess; a Poisson distribution might work.  I've not read the Georgia Tech dissertation yet.  I just skimmed through it to see what had been done in a general sense.  The comment by @KrudyZ about the stage-to-stage charge transfer would make regular sampling and decimation very desirable.  Otherwise I'd need to track the charge transfer at each stage for each clock period and apply a correction factor.  That might be quite a task.  In any case, I need to generate a Mersenne Twister quality PRNG in the PL fabric and I need to create an AXI DMA engine fed by an interrupt from a fabric FIFO.  Both of those should be pretty easy to find as examples.

Because this is a very different mode of operation than TI's designers had in mind, the only way to find out is to try it in hardware.  That's what I bought the GDS-2072E  for and why I'm limited to the TI ADC08DL502.  I don't see much point in speculating about whether it will work or not with the hardware I have.  The dissertation used a custom board which is more work than I want to undertake.

The mathematics says it will work, but the hardware may decide it does not agree to the terms required by the mathematics.  I had not thought of implementing it by running the ADC at a constant rate and randomly decimating the output.  That should be very straightforward to implement.  I don't think that would get a BW gain, but it would eliminate the need for anti-alias filters and reduce the amount of memory needed to store a long trace.  But it might not be worth the computational overhead.

The acceptable jitter in the datasheet is based on a  Wiener-Shannon-Nyquist analysis.  This is *very* different.

Now if the rest of my parts (power connectors, gigabit switch, etc)  will just show up so I can assemble my dev platform.  I was going to mount the Zybo and BeagleBoard yesterday, but decided to hold off until I had the ethernet switch and USB hubs on hand.  I do *not* want to repeat my usual mistake of not allowing enough space.

In any case, it should be fun.  It's not quite a dissertation-grade project (it's already been done) but it is state of the art.

I'd like an ELI5 on how throwing away 90% of the data will not lose any information in the general case, like an oscilloscope

random interleaved sampling, sequential sampling, equivalent-time etc. sampling scopes work when you know you have a
repetitive signal



 

Offline rhbTopic starter

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #22 on: February 27, 2018, 11:45:27 pm »
If you already have a scope then why not simply retrieve the data from it and skip samples randomly? It can sample at 1 GS/s so you have 1 ns granularity.

What would that get me?  The question isn't whether compressive sampling works.  It's whether it provides something of practical use in the context of a DSO.

My first phase project is to implement a Zynq IP that provides:

regular ADC sampling of one 8-bit channel to 1 GS/s
regular ADC sampling of two 8-bit channels to 500 MS/s
run-length compressed LA sampling to 1 GS/s for 8 bits
run-length compressed LA sampling to 500 MS/s for 16 bits
compressive sampling of one or two ADC channels equivalent to the regular sampling

Very likely this will require more than one IP block in the PL, but I don't know that yet.  The first step is to prototype it on the Zybo.  If it performs reasonably well I'll load it on the 2072E and test it there with a real ADC.  The compute requirement is significant, so it may not be viable on the Zynq.  But it doesn't entail a huge amount of time or money and, so far as I know, only one person has done this.  But if a company like Keysight has, they won't say until they announce a product.

There are two drivers here.  I want complete control of my scope.  The other is the desire to do some serious R&D level work.  I was doing some very exotic, cutting edge work for a super major oil company.  When I finished the project I terminated my contract and moved from Houston to Arkansas to look after my aging parents.   Dad passed away a year later and Mother in 2015.  I'm hoping that a successful implementation of a compressive sensing DSO might get me some work.  In the current state of the oil patch there is no work for people over 55.  I've finally accepted that I'll never get to do the stuff I used to do again.  I have friends who also want to work because they enjoy it,  but no one will hire them.

@langwadt  What the hell is an ELI5?  As for whether what I'm talking about is valid, I didn't make this up.  Emmanuel Candes, now a department chair at Stanford but at the time at Caltech, and David Donoho of Stanford made the major breakthroughs.  As for the limitations on what kind of signal, read the section on basis pursuit in Mallat's 3rd ed.  Candes' first experiment was to try to recover a signal which was a sparse combination of sinusoids and spikes.  I was solving problems involving the heat equation, which has infinite sums of exponentials.  Those are not repetitive.  When I realized that I was successfully solving problems that were impossible by all that I had been taught and done for 30 years, I got very curious how that was possible.  The price tag for finding out was reading 2000-3000 pages of the most complex mathematics I've ever read.  But I feel well rewarded even if I never make a nickel from the effort.  It's *really* cool.  It's like seeing a beautiful sunset or a gorgeous woman.  Just the experience is wonderful.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4422
  • Country: dk
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #23 on: February 28, 2018, 12:04:45 am »
If you already have a scope then why not simply retrieve the data from it and skip samples randomly? It can sample at 1 GS/s so you have 1 ns granularity.

What would that get me?  The question isn't whether compressive sampling works.  It's whether it provides something of practical use in the context of a DSO.

My first phase project is to implement a Zynq IP that provides:

regular ADC sampling of one 8-bit channel to 1 GS/s
regular ADC sampling of two 8-bit channels to 500 MS/s
run-length compressed LA sampling to 1 GS/s for 8 bits
run-length compressed LA sampling to 500 MS/s for 16 bits
compressive sampling of one or two ADC channels equivalent to the regular sampling

Very likely this will require more than one IP block in the PL, but I don't know that yet.  The first step is to prototype it on the Zybo.  If it performs reasonably well I'll load it on the 2072E and test it there with a real ADC.  The compute requirement is significant, so it may not be viable on the Zynq.  But it doesn't entail a huge amount of time or money and, so far as I know, only one person has done this.  But if a company like Keysight has, they won't say until they announce a product.

There are two drivers here.  I want complete control of my scope.  The other is the desire to do some serious R&D level work.  I was doing some very exotic, cutting edge work for a super major oil company.  When I finished the project I terminated my contract and moved from Houston to Arkansas to look after my aging parents.   Dad passed away a year later and Mother in 2015.  I'm hoping that a successful implementation of a compressive sensing DSO might get me some work.  In the current state of the oil patch there is no work for people over 55.  I've finally accepted that I'll never get to do the stuff I used to do again.  I have friends who also want to work because they enjoy it,  but no one will hire them.

@langwadt  What the hell is an ELI5?  As for whether what I'm talking about is valid, I didn't make this up.  Emmanuel Candes, now a department chair at Stanford but at the time at Caltech, and David Donoho of Stanford made the major breakthroughs.  As for the limitations on what kind of signal, read the section on basis pursuit in Mallat's 3rd ed.  Candes' first experiment was to try to recover a signal which was a sparse combination of sinusoids and spikes.  I was solving problems involving the heat equation, which has infinite sums of exponentials.  Those are not repetitive.  When I realized that I was successfully solving problems that were impossible by all that I had been taught and done for 30 years, I got very curious how that was possible.  The price tag for finding out was reading 2000-3000 pages of the most complex mathematics I've ever read.  But I feel well rewarded even if I never make a nickel from the effort.  It's *really* cool.  It's like seeing a beautiful sunset or a gorgeous woman.  Just the experience is wonderful.

sorry, "Explain Like I'm 5 years old"

I can see how you can use math to compress data when you have prior knowledge that the signal has less entropy than the
raw regular samples can contain

but I don't see how you can do that for the general case like an oscilloscope
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Fast ADC sampling at random intervals with a Zynq?
« Reply #24 on: February 28, 2018, 12:26:09 am »
but I don't see how you can do that for the general case like an oscilloscope

Me neither.  Some data are compressible, some are not.  If a signal can be characterized by regular sampling containing N samples which, in the general case, are random and uncorrelated, then it is absolutely impossible to represent the same signal with N/16 samples keeping the same level of accuracy.  Such compression may only be possible if the original samples are not purely random and uncorrelated, but rather restricted in some way, such as limited in bandwidth, periodic, etc.

For example, if you compress VHDL code into a ZIP file, you can make it much smaller, but if you try to do the same with random bytes, it will not compress at all.
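
The same point in a few lines of Python, with zlib standing in for ZIP:

Code:
import os
import zlib

structured = b"if rising_edge(clk) then q <= d; end if;\n" * 1000   # repetitive "VHDL"
random_bytes = os.urandom(len(structured))                          # incompressible noise

print(len(zlib.compress(structured)), "bytes from", len(structured))      # shrinks enormously
print(len(zlib.compress(random_bytes)), "bytes from", len(random_bytes))  # barely shrinks, if at all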
 

