| Usefulness of different TDR designs? |
| David Hess:
--- Quote from: rhb on April 22, 2019, 11:23:15 pm ---
LeCroy offers a RIS function. I cannot discern any utility to it, but in all fairness I've not spent a lot of time trying to devise a test case. I've not played with a Tek sampler that had the feature.
--- End quote ---

The RIS described here by LeCroy is just what Tektronix and others would call equivalent time sampling or random equivalent time sampling. (1) I remember finding this the last time I searched on this subject.

--- Quote ---
Most certainly no one has implemented what Donoho and Candes described in a commercial product. You cannot get a PhD by repeating someone else's work. And failure is not an option. If you fail you have to start over on a new project. At least that's the case at reputable schools.
--- End quote ---

I may have been thinking of what HP did in the HP 54645A/D (2) in 1997, which is about the right timeframe, although this is not the application note I remember, and this is not the document I remember either. I think HP either sent me a brochure about it at the time or I read about it in a trade magazine.

(1) It is really annoying when trying to use this if the waveform synchronizes with the sample clock; you might think that impossible, but I have had it happen. Universal counters face the same problem and may deliberately dither their timebase with noise to avoid it.

(2) Also here.
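For readers unfamiliar with the technique, here is a minimal numerical sketch of the random equivalent-time sampling idea: each trigger catches the repetitive waveform at an effectively random offset relative to the sample clock, and sorting the accumulated (offset, sample) pairs rebuilds the waveform on a much finer effective timebase. The waveform, numbers, and names are purely illustrative, not taken from any particular scope.

--- Code: ---
# Minimal sketch of random equivalent-time sampling (RIS/ETS) of a repetitive
# waveform.  Everything here is illustrative -- not from any real instrument.
import numpy as np

PERIOD = 1e-6                                   # repetition period, 1 us

def waveform(t):
    """Repetitive test signal: a fast (~2 ns) edge in the middle of each period."""
    return 0.5 * (1.0 + np.tanh((t % PERIOD - 0.5 * PERIOD) / 2e-9))

rng = np.random.default_rng(0)
n_triggers = 2000

# On each trigger the sample clock lands at an effectively random offset within
# the waveform period; the scope records that offset along with the sample.
offsets = rng.uniform(0.0, PERIOD, n_triggers)
samples = waveform(offsets)

# Sorting the accumulated (offset, sample) pairs reconstructs one period on a
# far finer effective timebase than the real-time sample rate allows.
order = np.argsort(offsets)
t_equiv, v_equiv = offsets[order], samples[order]
print(f"effective time resolution ~ {PERIOD / n_triggers * 1e12:.0f} ps")
--- End code ---

The catch is exactly footnote (1): if the waveform repetition happens to lock to the sample clock, the offsets stop being random and the reconstruction collapses onto a few points, which is why dithering the timebase helps.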
| rhb:
Thanks for the LeCroy link. That's a much better description than the LC648DLX manual. It also makes clear why it doesn't work very well, aside from not having a relay to remove the anti-alias filter from the signal path.

What LeCroy is doing is absolutely useless so far as I have been able to tell. My square wave pulser from Leo has a 36 ps or better rise time. There is *no* difference in the rise time measured using RIS or regular sampling; it's ~250 ps either way. I was quite disappointed. However, it may also be related to clock granularity. The LeCroy is my fastest scope other than the 11801, which is a completely different sort of beast.

Donoho did the mathematical proof of compressive sensing in 2004. I was quite stunned when I discovered his work in 2013. I was solving massively underdetermined inverse problems following Mallat's discussion of basis pursuit when it sank in that I'd been taught you can't do that. So I had to find out how this could be happening. That led me to Foucart and Rauhut and the most difficult math I've ever encountered. One of Donoho's proofs is 15 pages! Fortunately the proofs of the other two theorems in the paper were only 2-3 sentences. Donoho has a great sense of humor, so he commented on it at the end of the first proof.

In the case of compressive sampling the waveform does not need to be repetitive. As I understand it, the sole requirement is that the sample intervals be completely uncorrelated. I'd love to talk to someone with the fortitude to read Foucart and Rauhut. With Mallat as an essential prerequisite, it's a lot of work.
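A back-of-the-envelope check suggests the ~250 ps reading is about what a ~1.5 GHz front end would produce regardless of acquisition mode. The numbers below assume the usual Gaussian-response rules of thumb (t_r ≈ 0.35/BW, root-sum-square combination), which are approximations rather than specs for any particular scope.

--- Code: ---
# Rule-of-thumb check: a ~1.5 GHz front end dominates the measured rise time
# whether the samples come from RIS or regular acquisition.  Assumes a roughly
# Gaussian response (t_r ~ 0.35/BW) and root-sum-square combination -- both
# approximations, not exact figures for any specific instrument.
bw_hz = 1.5e9                       # scope bandwidth
t_scope = 0.35 / bw_hz              # ~233 ps scope rise time
t_pulser = 36e-12                   # pulser rise time
t_measured = (t_scope**2 + t_pulser**2) ** 0.5
print(f"scope ~{t_scope * 1e12:.0f} ps, expected reading ~{t_measured * 1e12:.0f} ps")
# -> roughly 235 ps, consistent with seeing ~250 ps either way.
--- End code ---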
| David Hess:
--- Quote from: rhb on April 23, 2019, 01:07:47 am ---
In the case of compressive sampling the waveform does not need to be repetitive. As I understand it, the sole requirement is that the sample intervals be completely uncorrelated. I'd love to talk to someone with the fortitude to read Foucart and Rauhut. With Mallat as an essential prerequisite, it's a lot of work.
--- End quote ---

That is the part I remember. Randomizing the sample time was supposed to suppress aliasing on single-shot captures, and that is sort of what the HP notes I linked claim. The document from HP that I remember, however, went into a lot more detail.
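A toy numerical illustration of that claim (sizes and frequencies are arbitrary, not taken from the HP documents): a tone above the Nyquist frequency folds to a clean false frequency when sampled uniformly, but smears into broadband noise when the sample instants are randomized.

--- Code: ---
# Toy demo of aliasing vs. randomized sample times.  A 170 Hz tone sampled at
# 100 Sa/s aliases to a convincing 30 Hz tone with uniform sampling; with
# random sample instants the same energy smears into broadband noise instead.
import numpy as np

fs, f_tone, n = 100.0, 170.0, 4096
t_uniform = np.arange(n) / fs
t_random = np.sort(np.random.default_rng(1).uniform(0.0, n / fs, n))

for name, t in (("uniform", t_uniform), ("random", t_random)):
    x = np.sin(2 * np.pi * f_tone * t)
    # Least-squares fit of a 30 Hz sinusoid (the alias of 170 Hz at fs = 100):
    A = np.column_stack([np.sin(2 * np.pi * 30.0 * t),
                         np.cos(2 * np.pi * 30.0 * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    print(f"{name:7s}: apparent 30 Hz amplitude ~ {np.hypot(*coef):.3f}")
# uniform reports ~1.0 (a clean but false tone); random reports nearly zero,
# with the same energy spread out as broadband noise.
--- End code ---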
| vk6zgo:
>:(

--- Quote from: bson on April 20, 2019, 07:44:05 am ---
If it really matters you shouldn't rely on a 0.66 vop; instead, determine it from a known length of the same cable (using the TDR), preferably off the same spool or batch.
--- End quote ---

Alternatively, you can use a CW signal from a signal generator, monitoring the outgoing signal to the cable with an oscilloscope (or a swept signal if you want to be fancy). At 1/4 wavelength, an o/c far end will give you a (sharp) null in the display, and a s/c, a peak.

I tested my reel of RG58 several times this way, and the cable length always came out correct for 0.66. Now I just use the tape measure (for that particular reel).
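For anyone who wants the arithmetic behind the quarter-wave trick, here is a small sketch. The function names and example numbers are mine, purely for illustration: with the far end open, the first sharp null at the generator end occurs where the cable is an electrical quarter wavelength.

--- Code: ---
# Arithmetic behind the quarter-wave check: with the far end open circuit, the
# first sharp null at the generator end occurs where the cable is an electrical
# quarter wavelength.  Example numbers below are made up for illustration.
C = 299_792_458.0                 # speed of light, m/s

def velocity_factor(length_m, f_null_hz):
    """Velocity factor from physical length and first open-circuit null frequency."""
    return 4.0 * length_m * f_null_hz / C

def length_from_null(f_null_hz, vf=0.66):
    """Cable length from the first open-circuit null, assuming a velocity factor."""
    return vf * C / (4.0 * f_null_hz)

# Example: a 30 m reel of RG58 should show its first null near 1.65 MHz if vf = 0.66.
print(velocity_factor(30.0, 1.65e6))    # ~0.66
print(length_from_null(1.65e6))         # ~30 m
--- End code ---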
| rhb:
Those provide a better idea of what RIS/ETS was about. I was taught that ETS implied that you had to provide an external band-pass filter to limit the BW to the Shannon-Nyquist limit. I was a bit disturbed when I discovered that the EE community was not being very scrupulous about the mathematical niceties. My LeCroy samples at 2 GSa/s with a 1.5 GHz BW. So everything above 500 MHz is aliased. Gad!!!

The attached figure is from "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut. The top left shows the amplitude spectrum of the signal, which has 5 non-zero spectral components. The lower left shows the time domain signal corresponding to the spectrum above it. The dots in the lower left show the 16 sample points selected randomly. The upper right is the result of attempting to compute the amplitude spectrum using a standard L2 norm discrete Fourier transform. The lower right shows the spectrum *exactly* recovered using an L1 discrete Fourier transform.

When I was working on the minimum weighted norm regularization of Bin Liu (Sacchi's student), I attempted to regularize data by means of an irregularly sampled discrete Fourier transform. In my case the average sampling met the Nyquist criterion; it was simply that the samples were not regular. The L2 norm inherent in the DFT produced a broad nasty mess for a single sine wave. That, and the realization that the process appeared to be severely dip limited, led me to abandon the work. It was a complex algorithm and would have been at least 6-9 months of full-time work to implement.

To put things in slightly different terms: the upper right panel is the result of attempting to solve Ax = y, where x is a vector of Fourier coefficients, using the traditional L2 (least squares) norm. The lower right solves the exact same problem, except using an L1 (least summed absolute error) norm.

When I was at Austin in '85-'89 we were so constrained by the 125K multiply-adds per second of the 11/780 that serious work using an L1 norm was unthinkable. We did occasionally use SVD and truncate the eigenvalues, but without understanding that exact reconstruction was possible in the noise-free case. The operations research people were solving L1 norm problems via linear programming, but there was no interaction between OR and seismic that I am aware of. And in any case, the sheer volume of data we had to contend with would have made serious work impractical. We were already contending with 5-8 day run times for fairly simple stuff.

Compressive sensing arose out of Donoho's recognition that if you could acquire data in the traditional manner and compress it to 1% of the initial volume, you could skip the traditional acquisition step. Candes had already proved that exact reconstruction was possible in the noise-free case with fewer samples. The two of them were firing papers back and forth over the course of 2004. Donoho was at Stanford and Candes was at Caltech. It's really fun to watch the ideas fly back and forth. Both are excellent writers, and just reading the abstracts and introductions will give a strong sense of how things developed without delving into the complexities of the mathematics.

Have Fun!
Reg
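For anyone who wants to see the effect without wading through Foucart and Rauhut, here is a small sketch in the spirit of that figure. It is not the book's code; it uses a real sine/cosine dictionary and scipy's LP solver rather than a complex DFT, which are my simplifications. A signal with a handful of nonzero spectral components is sampled at a few random times, then the coefficients are estimated by minimum-norm least squares (L2) and by L1 minimization (basis pursuit posed as a linear program).

--- Code: ---
# Sketch in the spirit of the Foucart & Rauhut figure: a sparse spectrum,
# a few random time samples, L2 vs. L1 recovery.  Sizes and frequencies are
# made up; this is an illustration, not the book's experiment.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, M, K = 64, 20, 5                       # grid size, random samples, nonzero tones

# Real "Fourier" dictionary: one cosine and one sine column per frequency bin.
n = np.arange(N)
freqs = np.arange(1, N // 2)
D = np.hstack([np.cos(2 * np.pi * np.outer(n, freqs) / N),
               np.sin(2 * np.pi * np.outer(n, freqs) / N)])

# Sparse ground truth: K nonzero coefficients.
c_true = np.zeros(D.shape[1])
c_true[rng.choice(D.shape[1], K, replace=False)] = rng.uniform(1.0, 2.0, K)
x = D @ c_true

# Observe only M randomly chosen time samples.
idx = np.sort(rng.choice(N, M, replace=False))
A, y = D[idx], x[idx]

# L2: minimum-norm least squares -- smears the energy over many coefficients.
c_l2 = np.linalg.lstsq(A, y, rcond=None)[0]

# L1: min ||c||_1 subject to A c = y, posed as an LP over [c; u] with |c| <= u.
P = A.shape[1]
res = linprog(c=np.r_[np.zeros(P), np.ones(P)],
              A_ub=np.block([[np.eye(P), -np.eye(P)],
                             [-np.eye(P), -np.eye(P)]]),
              b_ub=np.zeros(2 * P),
              A_eq=np.hstack([A, np.zeros((M, P))]),
              b_eq=y, bounds=(None, None))
c_l1 = res.x[:P]

print("L2 error:", np.linalg.norm(c_l2 - c_true))
print("L1 error:", np.linalg.norm(c_l1 - c_true))
# With enough random samples relative to the sparsity, the L1 error is
# essentially zero (exact recovery); the L2 error is not.
--- End code ---

This is the same Ax = y problem described above; the only thing that changes between the two answers is the norm being minimized.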