Potential DIY Oscilloscope project, screen refresh rate?
rhb:
The Fourier transform of a boxcar in frequency is  a sinc(t).  The only way to eliminate the ringing in time is to not have a sharp edge.  In seismic work cosine taper edges are typical, but the filter is being specified and applied in the frequency domain.  The usual specification is a trapezoid, f0, f1, f2, f3.  The actual implementation commonly uses a cosine taper for the slopes of the trapezoid.  Filters to flatten the spectrum such as the classic Wiener prediction error filter are designed differently.

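To make the trapezoid specification concrete, here is a minimal numerical sketch in Python/NumPy. The corner frequencies f0-f3 and the sample interval are illustrative, not from any actual design:

--- Code: ---
# Frequency-domain trapezoid filter with cosine-tapered slopes.
# Corner frequencies f0-f3 and the sample interval are illustrative.
import numpy as np

def trapezoid_taper(freqs, f0, f1, f2, f3):
    """Amplitude spec: 0 below f0, cosine ramp up over f0-f1,
    flat 1 over f1-f2, cosine ramp down over f2-f3, 0 above f3."""
    h = np.zeros_like(freqs)
    h[(freqs >= f1) & (freqs <= f2)] = 1.0
    up = (freqs > f0) & (freqs < f1)
    h[up] = 0.5 * (1 - np.cos(np.pi * (freqs[up] - f0) / (f1 - f0)))
    dn = (freqs > f2) & (freqs < f3)
    h[dn] = 0.5 * (1 + np.cos(np.pi * (freqs[dn] - f2) / (f3 - f2)))
    return h

n, dt = 4096, 0.001                        # 1 ms sampling, illustrative
f = np.abs(np.fft.fftfreq(n, dt))          # symmetric two-sided spec
H = trapezoid_taper(f, 5.0, 10.0, 60.0, 80.0)   # f0-f3 in Hz
h_t = np.fft.ifft(H).real   # impulse response: the tapered edges ring
                            # far less than a hard boxcar would
--- End code ---

Compared with a hard boxcar over the same band, the cosine-tapered slopes knock the time-domain sidelobes down dramatically.
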
All the work is being done on recorded data, so it's a lot easier to do basic stuff. Of course, when you want to create a 3D image from 10 TB of data, it gets rather more demanding. At a very basic level, each sample in the output volume requires a summation over 150K-500K input samples. That is a week or two with tens of thousands of CPU cores.

Good phase response is not particularly a problem so long as one is able to accept the latency. That's a problem in audio, but not in a DSO.

I'd appreciate an explanation of your estimate of the number of taps. That's a lot of zeros in the transfer function. I've been reading "VLSI Digital Signal Processing Systems" by K.K. Parhi and "FPGA-based Implementations of Signal Processing Systems" by Roger Woods et al to help me make the transition from DSP in recorded time to DSP in real time. Aside from having very different constraints, the terminology in the seismic and EE communities is completely different.

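On the tap-count question: the usual EE rule of thumb is Kaiser's estimate, where the length is driven by the stopband attenuation and the normalized transition width. A sketch with illustrative numbers, not anyone's actual design:

--- Code: ---
# Kaiser's FIR length estimate: taps grow with stopband attenuation
# and shrink with transition width. All numbers are illustrative.
from scipy.signal import kaiserord, firwin

fs = 1.0e9            # 1 GSps sample rate
transition = 50e6     # Hz of transition band
atten_db = 80.0       # stopband attenuation target

numtaps, beta = kaiserord(atten_db, transition / (fs / 2))
taps = firwin(numtaps, 350e6, window=('kaiser', beta), fs=fs)
print(numtaps)        # ~100 taps for these numbers
--- End code ---

Halving the transition width roughly doubles the tap count, which is where the large numbers come from.
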
Thomas H. Lee presents an example of an analog  maximally flat phase low pass filter in "Planar Microwave Engineering" so I don't see any serious obstacle other than the mathematics of the Fourier transform.

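For reference, the standard analog maximally-flat-delay low pass is the Bessel-Thomson filter; a quick design sketch (order and cutoff are illustrative):

--- Code: ---
# Bessel-Thomson low pass: maximally flat group delay (phase).
# Order and cutoff are illustrative.
import numpy as np
from scipy.signal import bessel, freqs

b, a = bessel(5, 2 * np.pi * 1e6, btype='low', analog=True,
              norm='delay')
w, h = freqs(b, a, worN=np.logspace(5, 7.5, 500) * 2 * np.pi)
gd = -np.gradient(np.unwrap(np.angle(h)), w)  # nearly constant in band
--- End code ---
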
The Gaussian taper pass band edge gets a lot of lip service in EE, but rather less use. However, sech(x) is symmetric in time and frequency, so it is a good candidate for consideration.

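A quick numerical check of the sech claim: like the Gaussian, sech(pi*t) is its own Fourier transform:

--- Code: ---
# Verify numerically that the Fourier transform of sech(pi*t)
# is sech(pi*f), i.e. the shape is symmetric in time and frequency.
import numpy as np

n, dt = 4096, 0.01
t = (np.arange(n) - n // 2) * dt
x = 1.0 / np.cosh(np.pi * t)

# FFT approximation of the continuous Fourier transform
X = dt * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(x)))
f = np.fft.fftshift(np.fft.fftfreq(n, dt))

print(np.max(np.abs(X.real - 1.0 / np.cosh(np.pi * f))))  # ~1e-15
--- End code ---
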
If you are using regular sampling, a low pass anti-alias filter is an absolute necessity. The aliasing arises because the Fourier transform of a spike series is a spike series. If the sampling is sufficiently random, then the transform of the sampling function is a single spike in frequency and aliasing doesn't occur.

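Here is a small demonstration of that argument, using a Lomb-Scargle periodogram so the randomly sampled data can be analyzed directly (all values illustrative):

--- Code: ---
# A 7 Hz tone sampled at a nominal 10 Sps: under regular sampling it
# aliases to 3 Hz; under random sample times the energy stays at 7 Hz
# and the alias smears into a noise floor. Values are illustrative.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
f_sig, fs, n = 7.0, 10.0, 500            # tone above the 5 Hz Nyquist

t_reg = np.arange(n) / fs                   # regular sampling
t_rnd = np.sort(rng.uniform(0, n / fs, n))  # random sampling

w = 2 * np.pi * np.linspace(0.1, 10.0, 2000)  # angular scan frequencies
p_reg = lombscargle(t_reg, np.cos(2 * np.pi * f_sig * t_reg), w)
p_rnd = lombscargle(t_rnd, np.cos(2 * np.pi * f_sig * t_rnd), w)
# p_reg shows peaks at both 3 Hz (the alias) and 7 Hz;
# p_rnd shows a single peak at 7 Hz.
--- End code ---
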
I spent most of my time from 2013 to 2016 studying compressive sensing. In the process I read "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut twice and "A Wavelet Tour of Signal Processing" by Mallat once, followed by the original papers by Candes, Donoho, Tanner et al. In total, about 3000 pages of the most complex mathematics I've ever read. I had to read F&R twice because I really needed the mathematical foundations presented by Mallat. Subsequently, as a consequence of some papers by Donoho, I read quite a bit from "Convex Polytopes" by Grunbaum and "Lectures on Polytopes" by Ziegler to gain a better understanding of a fast algorithm for solving Ax=y using an L1 norm.

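For anyone curious what "solving Ax=y using an L1 norm" looks like in practice, basis pursuit can be posed as a plain linear program. A toy sketch with a random Gaussian A and illustrative sizes (real compressive-sensing solvers are far more specialized):

--- Code: ---
# Basis pursuit: min ||x||_1 subject to Ax = y, as a linear program.
# Split the variable into (x, u) with -u <= x <= u and minimize sum(u).
# Toy sizes; a 4-sparse x is recovered exactly from 30 measurements.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 30, 80, 4
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.concatenate([np.zeros(n), np.ones(n)])     # minimize sum(u)
A_ub = np.block([[ np.eye(n), -np.eye(n)],        #  x - u <= 0
                 [-np.eye(n), -np.eye(n)]])       # -x - u <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])           # A x = y
res = linprog(c, A_ub, b_ub, A_eq, y,
              bounds=[(None, None)] * (2 * n))
x_hat = res.x[:n]
print(np.max(np.abs(x_hat - x_true)))             # ~0: exact recovery
--- End code ---
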
I plan to revisit all that at some point, but I need to master the FPGA implementation of FIR and IIR filters at high clock rates first.  I've bought a Tek 11801 and four 20 GHz dual channel, 13 ps rise time sampling heads so I can measure bit skew rather than rely on Vivado and Quartus to calculate it correctly.  Just constructing an 8 line fixture with lines matched to a few ps is going to be a challenge.

The HMCAD1520 offers 8, 12 and 14 bit sampling at different clock rates, so filtering that in an FPGA real-estate-efficient manner is going to be a challenge. An additional requirement is an arbitrary, user-specified filter pipeline such as LeCroy offers. So I will be attempting to use the partial reconfiguration feature of the Zynq line.

My current focus is the FPGA input to DDR section.  I am investigating the anti-alias filter aspect to the extent that I must know what the signal passband looks like to do the post ADC processing, but I'm not going past the filter shape into the details of the actual analog filter implementation, attenuator responses, etc.  I'm trying to eat an elephant, so I'm taking it one bite at a time.

It appears that we share a common interest, with enough overlap in skill sets to be able to communicate, but with each of us having critical skills the other lacks. So I'm hopeful we can collaborate on making this happen sooner rather than later.

I have a large DSP library going back to "Extrapolation, Interpolation, and Smoothing of Stationary Time Series" by Norbert Wiener, which is where DSP starts, and I was trained by a member of Wiener's Geophysical Analysis Group. Of all the books, I think "An Introduction to Digital Signal Processing" by John H. Karl is probably the best general presentation. The classic text is "Geophysical Signal Analysis" by Robinson and Treitel, the most prominent members of Wiener's GAG. They literally wrote the book on DSP in the 50's and 60's in the form of a series of professional papers which were published as "The Robinson and Treitel Reader" by Seismograph Service Corporation. "Geophysical Signal Analysis" is those papers reworked into a book. R&T focuses quite a lot on the problem of water layer reverberation, as that was the driving application in seismic work. Hence my suggestion of Karl instead.

In closing, the screen refresh rate is limited by the display.  Even at 120 Hz that's an eternity compared to the data sample rates.
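
The arithmetic is stark:

--- Code: ---
# Samples acquired per displayed frame, assuming an illustrative
# 1 GSps ADC and a 120 Hz display refresh:
sample_rate = 1e9              # samples per second
refresh = 120                  # display refresh, Hz
print(sample_rate / refresh)   # ~8.3 million samples per frame
--- End code ---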
Boscoe:

--- Quote from: rhb on April 22, 2019, 01:52:34 pm ---The Fourier transform of a boxcar in frequency is  a sinc(t).  The only way to eliminate the ringing in time is to not have a sharp edge.  In seismic work cosine taper edges are typical, but the filter is being specified and applied in the frequency domain.  The usual specification is a trapezoid, f0, f1, f2, f3.  The actual implementation commonly uses a cosine taper for the slopes of the trapezoid.  Filters to flatten the spectrum such as the classic Wiener prediction error filter are designed differently.

All the work is being done on recorded data, so it's a lot easier to do basic stuff. Of course, when you want to create a 3D image from 10 TB of data, it gets rather more demanding. At a very basic level, each sample in the output volume requires a summation over 150K-500K input samples. That is a week or two with tens of thousands of CPU cores.

Good phase response is not particularly a problem so long as one is able to accept the latency. That's a problem in audio, but not in a DSO.

I'd appreciate an explanation of your estimate of the number of taps. That's a lot of zeros in the transfer function. I've been reading "VLSI Digital Signal Processing Systems" by K.K. Parhi and "FPGA-based Implementations of Signal Processing Systems" by Roger Woods et al to help me make the transition from DSP in recorded time to DSP in real time. Aside from having very different constraints, the terminology in the seismic and EE communities is completely different.

--- End quote ---

I have to admit, most of this is going over my head. With regard to the phase response, isn't this important to maintain coherent rising edges? And isn't it the latency we're not concerned about?


--- Quote ---Thomas H. Lee presents an example of an analog  maximally flat phase low pass filter in "Planar Microwave Engineering" so I don't see any serious obstacle other than the mathematics of the Fourier transform.

The Gaussian taper pass band edge gets a lot of lip service in EE, but rather less use. However, sech(x) is symmetric in time and frequency, so it is a good candidate for consideration.

If you are using regular sampling, a low pass anti-alias filter is an absolute necessity. The aliasing arises because the Fourier transform of a spike series is a spike series. If the sampling is sufficiently random, then the transform of the sampling function is a single spike in frequency and aliasing doesn't occur.

I spent most of my time from 2013 to 2016 studying compressive sensing. In the process I read "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut twice and "A Wavelet Tour of Signal Processing" by Mallat once, followed by the original papers by Candes, Donoho, Tanner et al. In total, about 3000 pages of the most complex mathematics I've ever read. I had to read F&R twice because I really needed the mathematical foundations presented by Mallat. Subsequently, as a consequence of some papers by Donoho, I read quite a bit from "Convex Polytopes" by Grunbaum and "Lectures on Polytopes" by Ziegler to gain a better understanding of a fast algorithm for solving Ax=y using an L1 norm.

--- End quote ---

This reading is very impressive. I hope I get to spend this much time learning at some point. I'll be sure to visit "Planar Microwave Engineering" soon. I really regret not taking that DSP course at uni!


--- Quote ---I plan to revisit all that at some point, but I need to master the FPGA implementation of FIR and IIR filters at high clock rates first.  I've bought a Tek 11801 and four 20 GHz dual channel, 13 ps rise time sampling heads so I can measure bit skew rather than rely on Vivado and Quartus to calculate it correctly.  Just constructing an 8 line fixture with lines matched to a few ps is going to be a challenge.

--- End quote ---

Although I've only skimmed it, I believe the paper in my previous post covers it all. In my experience with FPGAs, you'll save yourself a lot of time and grey hair by designing and simulating the circuit before committing to debugging hardware.


--- Quote ---The  HMCAD1520 offers 8, 12 and 14 bit sampling at different clock rates, so filtering that in an FPGA  real estate efficient manner is going to be a challenge.  An additional requirement is arbitrary, user specified filter pipeline as the LeCroy offers.  So I will be attempting to use the partial reconfiguration feature of the Zynq line.

My current focus is the FPGA input to DDR section.  I am investigating the anti-alias filter aspect to the extent that I must know what the signal passband looks like to do the post ADC processing, but I'm not going past the filter shape into the details of the actual analog filter implementation, attenuator responses, etc.  I'm trying to eat an elephant, so I'm taking it one bite at a time.

It appears that we share a common interest, with enough overlap in skill sets to be able to communicate, but with each of us having critical skills the other lacks. So I'm hopeful we can collaborate on making this happen sooner rather than later.

I have a large DSP library going back to "Extrapolation, Interpolation, and Smoothing of Stationary Time Series" by Norbert Wiener, which is where DSP starts, and I was trained by a member of Wiener's Geophysical Analysis Group. Of all the books, I think "An Introduction to Digital Signal Processing" by John H. Karl is probably the best general presentation. The classic text is "Geophysical Signal Analysis" by Robinson and Treitel, the most prominent members of Wiener's GAG. They literally wrote the book on DSP in the 50's and 60's in the form of a series of professional papers which were published as "The Robinson and Treitel Reader" by Seismograph Service Corporation. "Geophysical Signal Analysis" is those papers reworked into a book. R&T focuses quite a lot on the problem of water layer reverberation, as that was the driving application in seismic work. Hence my suggestion of Karl instead.

In closing, the screen refresh rate is limited by the display.  Even at 120 Hz that's an eternity compared to the data sample rates.

--- End quote ---

That ADC does look like a great candidate; it seems to be everything rolled into one, and at a good price, too. I was looking at the TI solutions. They have a 12-bit 1 GSPS part for around £300, so £85 is much better!

DDR deserialisers are a done deal from the likes of Xilinx and Altera. Make sure you have done your homework and designed a good PCB; then you can simply go through the DDR wizard. Yes, there is a lot of work here, and I understand what you mean; I'm also trying to understand this aspect.

I'd be more than happy to collaborate although the amount of time I'll be able to commit would be erratic at best.

Is this a physical library? I'd love to get the chance to read some of these topics.

rhb:
Linear phase means you get exactly the desired waveform shape with a constant delay. Most of your problems with what I wrote arise because I'm a geophysicist and I use the terminology of mathematicians, as that's what the people I learned from were taught. You're used to the EE description, which is what is currently baffling me.
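
A small illustration of what linear phase buys you in EE terms, using a symmetric FIR (length and cutoff are illustrative):

--- Code: ---
# A symmetric FIR has exactly linear phase: every frequency is
# delayed by the same (N-1)/2 samples, so the waveform shape
# survives. Length and cutoff are illustrative.
import numpy as np
from scipy.signal import firwin, group_delay

taps = firwin(101, 0.2)     # symmetric coefficients, N = 101
w = np.linspace(0.01, 0.15 * np.pi, 200)   # passband frequencies
_, gd = group_delay((taps, 1.0), w=w)
print(gd.min(), gd.max())   # both ~50.0 samples: constant delay
--- End code ---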

I recently received an autobiography Wiener wrote.  I've only glanced at it, but it looks as if it will be fun to read.

Hands down the best reference on classic Wiener-Shannon DSP is "Random Data" by Bendat and Piersol.  However, it is strictly a math book.  If you're doing work with recorded data it's perfect.  If you're doing real time there is a *lot* more to learn.

I bought Parhi because Woods et al referenced it so much.  It is fantastic.  It goes into great detail about all the various ways you can lay out parallel and serial filter implementations.  I can't recommend it more highly.  I plan to implement and test all the various topologies on the Zynq & Cyclone V and compare Vivado and Quartus timings to what I measure.  Lee is wonderful to read.  I also ordered his other book and a copy of "Microwave Engineering" by Pozar.  There are a couple more EM texts I am considering, but I want to review what I already have first.

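To show what the topology trade is about, here is a behavioral sketch of two of the standard FIR forms Parhi covers, with Python standing in for HDL. The direct form sums through one long adder chain per sample; the transposed form puts a register between every adder, which is what makes it FPGA-friendly:

--- Code: ---
# Two functionally identical FIR topologies, sketched behaviorally.
# Direct form: tapped delay line feeding one long sum per sample.
# Transposed form: local multiply-adds separated by registers.
import numpy as np

def fir_direct(x, h):
    delay = np.zeros(len(h))            # tapped delay line
    y = np.zeros(len(x))
    for i, xi in enumerate(x):
        delay = np.roll(delay, 1)
        delay[0] = xi
        y[i] = np.dot(h, delay)         # one long adder chain
    return y

def fir_transposed(x, h):
    z = np.zeros(len(h) + 1)            # z[1:] are pipeline registers
    y = np.zeros(len(x))
    for i, xi in enumerate(x):
        p = h * xi                      # all tap multiplies in parallel
        y[i] = p[0] + z[1]
        z[1:-1] = p[1:] + z[2:]         # each adder is purely local
    return y

x = np.random.default_rng(2).standard_normal(256)
h = np.array([0.25, 0.5, 0.25])
print(np.allclose(fir_direct(x, h), fir_transposed(x, h)))  # True
--- End code ---

In hardware, the transposed form's short adder-to-register paths are what let the filter close timing at high clock rates.
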
I'd *much* rather have access to all the worldwide data of a supermajor oil company so I could analyze and model sedimentary rock properties for the whole world. But I can't get that, so this is the next best thing I could find. I'll be 66 shortly. I've got to be learning something new. So I'm pursuing a DIY PhD in EE. No real point to it except to have fun. Fortunately, 1990 era test gear is fairly cheap. So I now have a lab that would have cost $200K+ for less than $15K. Of course, the best part is I'm in charge and I can do anything my attention deficit disorder points me at.

It is a physical 5000+ volume library. I just got rid of 500 lbs of old journals to make more room for books. It's one of the great pleasures of my life. I can walk in the library, pull 3-4 books off the shelf and find almost any answer. It's all on commercial library shelving in what was the 2 car garage. I've never had occasion to use them, but I have the full 5 volume set of the CalTech Bateman Manuscript Project edited by Erdelyi et al. It's the ne plus ultra of integral tables. Cited by everyone, but almost pure unobtainium. And quite a few other classic monographs that are almost impossible to get hold of. It has cost me the price of a good house, but I've also made a lot of money by being the person who knew the answer on Monday morning to a question posed in a meeting late on Friday, usually with only a few hours' effort to hunt through the library for the solution. It presented such an illusion of brilliance on my part that I took to saying, "Obviously you have me confused with someone who knows what he is doing."

The wonderful thing about real books is how fast you can skim through them, and how much you can locate from physical memory: that something is in a certain book, in the first third, at the bottom of the right-hand page.
ebastler:

--- Quote from: Boscoe on April 21, 2019, 09:31:47 am ---I want to do a single channel USB scope at 1GHz.

--- End quote ---

A lot of good and interesting discussion above, but I think nobody has commented on this aspect yet:

I would strongly advise designing a two-channel unit. There are so many applications where the relationship between two signals matters, be it phase relationship in the analog domain or correlations between digital signals. Having more than two channels is mostly a convenience (with rare exceptions). But having only a single channel is severely limiting in my experience. There's a reason why single-channel units became almost extinct well back in the age of analog scopes!
Mechatrommer:

--- Quote from: rhb on April 22, 2019, 12:06:58 am ---One can, in fact, sample at 10-20% of the Nyquist rate without aliasing using a technique called "compressive sensing"...
A compressive sensing DSO was constructed at Georgia Tech as part of the work for a PhD granted in 2014.

--- End quote ---
So now we have a foundation to sample a 1 GHz BW signal (the highest harmonics of a possibly distorted signal, not merely a 1 GHz pure sine) at a 100 MSps rate, 10 GHz at 1 GSps, etc. It must be cheaper than what's currently offered by the constant-time-interval Nyquist-rate sampling technique. My question: where is it now? What's the progress (in mass production, maybe?) after these 5 years? I can't wait.