| Electronics > Projects, Designs, and Technical Stuff |
| Potential DIY Oscilloscope project, screen refresh rate? |
| rhb:
I plan to work on it eventually, but I'm starting with the basic regular-sampling model. The big issue with compressive sensing is the CPU power needed; it's significant. It's certainly usable for a very high-BW one-shot capture, but that may be all at the moment, and you might have to wait 20 minutes to see your trace.

I have not read any of the literature since 2016. The initial MRI implementations were taking several hours to process the data, which led to an intense amount of work to speed that up. I know they achieved large speedups, but I don't recall the details, and I have not the faintest inkling of whether they would meet the speed requirements of a fast DSO. Donoho originated the concept and provided the mathematical proof in 2004. It took 10 years for someone to do a DSO implementation as a PhD project, so 2024 seems a reasonable time frame for a COTS version.

I would love to have someone to discuss this with, but even the mathematicians I know turn pale when they look at the math. I know one Stanford PhD who is familiar with it, but he doesn't want to deal with it unless he's getting paid $$$. The other couple of people I know are bound by confidentiality agreements and can't talk about it in significant detail. When you consider that it's 1-2 years of work to learn the subject starting from the level of a Stanford PhD in geophysics, that puts the bill for developing internal company expertise at $500K. So not sharing is not surprising.

My understanding of the mathematics is that getting 10 GHz of BW at a 1 GSa/s average sample rate requires a clock resolution of well under 100 ps, in addition to solving what is, in general, an NP-hard problem. However, it's not that bad in practice. Donoho proved in September 2004 that when x is sufficiently sparse, minimizing the L1 norm of x subject to Ax = y recovers the sparse solution, so the combinatorial L0 search collapses to a tractable convex (linear programming) problem. But you still need the clock resolution so far as I can tell. He later made a connection to regular polytopes in N-dimensional space: as the number of dimensions increases, the probability of any particular point lying on the convex hull approaches 1. So there is the potential for a very fast algorithm derived from computational geometry. |
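The L1 recovery described above can be sketched numerically. Below is a toy illustration (nothing like a real DSO pipeline): a sparse signal is recovered from far fewer random measurements than samples by solving an L1-regularized least-squares problem with FISTA, a standard iterative solver. The dimensions, sensing matrix, regularization weight, and solver choice are all arbitrary picks for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-sensing setup: an n-sample signal with only k nonzeros,
# observed through m << n random projections (the "sub-rate" measurements).
n, m, k = 256, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                 # the measurements

def soft(z, thr):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

# FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, an L1 (LASSO) relaxation
# of the sparse-recovery problem; lam is a hand-picked toy value.
lam = 1e-2
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
x = np.zeros(n)
v, t_k = x.copy(), 1.0
for _ in range(1500):
    x_new = soft(v - step * (A.T @ (A @ v - y)), step * lam)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t_k**2)) / 2.0
    v = x_new + ((t_k - 1.0) / t_new) * (x_new - x)
    x, t_k = x_new, t_new

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.2e}")
```

Note the trade-off discussed in the thread: the recovery loop costs far more compute than the sampling itself, and at DSO record lengths that cost dominates everything.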
| David Hess:
Rhb and I disagree about aliasing in digital storage oscilloscopes. The typical Gaussian or Bessel response of -6 dB/octave is eminently feasible and almost universally used, except somewhere above 500 MHz where it becomes prohibitive to implement. This results in noticeable pulse distortion for bandwidth-limited edges at sample rates of only 2.5x the bandwidth, but the added distortion is not much greater than the expected transient-response error anyway. If no reconstruction is used, it is usually not even noticed at slower sweep speeds. A relatively easy way to avoid this, if it will be a problem, is to increase the sample rate to 5x or 10x, where the added reconstruction error is likely *smaller* than the residual error in transient response.

Anti-alias filters are typically not used, but equalization to preserve linear phase is. At lower frequencies, say 200 MHz and below, nothing special is required except linear phase response in the passband, which is easy enough. At higher frequencies, increasingly complex equalization and circuit topologies are required to get the needed Ft out of each stage and to provide a linear phase response, which results in a multi-pole response and faster roll-off.

In my experience, the error is unnoticeable under the most demanding conditions in 100 MHz instruments operating at 1 GS/s, which was typically provided almost 30 years ago through ETS. That would be about 15 dB of attenuation at the Nyquist frequency. |
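For what it's worth, the ~15 dB figure above is consistent with reading the -6 dB/octave response as a single real pole: a 100 MHz -3 dB point evaluated at the 500 MHz Nyquist frequency of a 1 GS/s clock. A quick sanity check, assuming a first-order model (which may not be exactly what was meant):

```python
import math

def first_order_atten_db(f, f3db):
    """Attenuation in dB of a single-pole (-6 dB/octave) low-pass at f."""
    return 10.0 * math.log10(1.0 + (f / f3db) ** 2)

# 100 MHz bandwidth, 1 GS/s clock -> Nyquist at 500 MHz.
a = first_order_atten_db(500e6, 100e6)
print(f"{a:.1f} dB")   # ~14 dB, in line with the ~15 dB figure quoted
```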
| rhb:
I'm sorry, but I've been doing oil-industry DSP since 1982. We were doing DSP long before anyone else: Enders Robinson hand-digitized seismic data in 1952, then went on to apply the first Wiener prediction-error filter, aka deconvolution, aka FIR filter, with a desk calculator. If what you are saying is accurate, then the EEs responsible should be shot and dumped in a ditch, preferably before they reproduce.

Anti-alias filters are an absolute requirement no matter what the sample rate. Harry Nyquist of Bell Labs published a paper on the aliasing issue in 1928, and Nyquist was no slouch. Some patent attorneys at Bell Labs commissioned a study to determine why certain scientists had more patents than others. The common factor? They regularly had breakfast or lunch with Nyquist in the company cafeteria (Sedra & Smith, "Microelectronic Circuits", 7th ed., p. 875, citing "The Idea Factory" by Jon Gertner). There is no way you can guarantee that you do not have unexpected noise above Nyquist. Any decent intro to DSP discusses aliasing in the first chapter. ETS works *if and only if* the signal is band limited; Shannon is responsible for that result.

I have yet to see *any* DSO with a Gaussian or Bessel step response. The Keysight MSOX3104T and the R&S RTM3104 certainly do not have them. R&S claims to have them, but the RTM3K does not. I had thought that for $20K I could buy one; I now know better. That's why I'm going to the ridiculous level of time and expense to develop FOSS DSO FW. I'm not happy, and I am so unhappy I intend to do something about it. |
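The folding that an anti-alias filter exists to prevent is easy to demonstrate: a tone above Nyquist produces exactly the same samples as its image below Nyquist, so nothing downstream of the ADC can tell them apart. A minimal sketch at a 1 GS/s clock (the 600/400 MHz frequencies are arbitrary illustrative picks):

```python
import numpy as np

fs = 1e9                        # 1 GS/s sample clock, Nyquist = 500 MHz
t = np.arange(64) / fs          # 64 sample instants

# A 600 MHz tone is above Nyquist; its samples are identical to those of
# the 400 MHz alias (fs - 600 MHz), so the two are indistinguishable
# after sampling -- which is why the filtering must happen beforehand.
s_in    = np.cos(2 * np.pi * 600e6 * t)
s_alias = np.cos(2 * np.pi * 400e6 * t)

print(np.max(np.abs(s_in - s_alias)))   # ~0 (floating-point noise only)
```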
| Mechatrommer:
--- Quote from: rhb on April 22, 2019, 11:15:19 pm ---
That's why I'm going to the ridiculous level of time and expense to develop FOSS DSO FW. I'm not happy and I am so unhappy I intend to do something about it.
--- End quote ---

Well, it looks like you will have to wait another decade or more until processing power catches up enough to speed up your 20-minute calculation. Or you can devise a parallel computation for an FPGA/DSP implementation, embed it in some sort of genetic algorithm, or derive a simplified formulation for practical use on a single-core CPU, much as people derived the FFT from the DFT (and the DFT from the continuous FT), or much as the expert group distilled the JPEG standard from complex wavelet formulations. Either way will require PhD-grade study and time. I don't have detailed knowledge of this stuff; I too can easily turn pale looking at the math, especially when set notation appears, but I do enjoy reading it at a surface level.

The way I see it, compressive sensing is not much different from other compression methods, lossy or lossless: everything has its trade-off, computing power in this case. Since it manipulates (or "assumes") the sparsity of a signal, it should be possible, IMHO, to capture full-BW data and remove any noise (higher-spectral) elements from the recorded data, much as a JPEG is created from raw 8-bit bitmap data. That is, when reconstructed, those elements are either lost (lossy) or require heavier computation to recover what was left behind (the higher elements, with little loss). But be aware that even lossless JPEG can't compress as much as lossy JPEG, and for dense data (matrices) compression may not be possible at all (it can even have a negative effect and increase the size). Then again, with some "assumption" about the signal, any unnecessary information (noise or higher-BW content) may be ruled out (lossy). There is no free lunch, however; YMMV. |
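The FFT-from-DFT example is a good one: both compute the identical transform, the FFT just reorganizes the arithmetic from O(N^2) to O(N log N). A small check that a textbook-style direct DFT matches numpy's FFT (the size and seed are arbitrary):

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the DFT definition."""
    n = len(x)
    idx = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(idx, idx) / n)
    return W @ x

rng = np.random.default_rng(1)
x = rng.standard_normal(128)
X_naive = naive_dft(x)          # O(N^2): the straightforward formula
X_fft = np.fft.fft(x)           # O(N log N): same result, far cheaper
print(np.max(np.abs(X_naive - X_fft)))
```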
| rhb:
TANSTAAFL. It should be noted that algorithmic developments have generally outperformed hardware improvements. If you need to recover a long single-shot waveform with a BW of 1 THz, then on sheer data volume alone compressive sensing becomes important enough to endure the processing delay.

Compressive sensing is inherently lossy. It expects a certain level of uncorrelated noise, which is rejected because it is not coherent from sample to sample. However, there are a *lot* of variations depending on your data and objectives.

So far as I'm concerned, the work of Donoho and Candes is the greatest advance in applied mathematics since Wiener and Shannon. That may not mean much unless you spent your entire professional career applying the work of Wiener and Shannon, but it's a *really* big deal to me. |