is it true an oscilloscope must sample at least 4x the observed frequency?
| Fungus:
--- Quote from: switchabl on September 13, 2022, 05:29:49 pm ---No, I realize what he did.
--- End quote ---
To be fair: there was a caption that said "2000 sps (2x frequency)" above the image.
| robert.rozee:
--- Quote from: 2N3055 on September 13, 2022, 04:53:54 pm ---No, it probably means he captured this at an extremely long timebase like 1 sec/div and then changed the timebase back to 500us/div with acquisition stopped to provoke this effect.
--- End quote ---
exactly: use the timebase to set the acquisition rate (2x the signal frequency, 5x, 10x, 20x, 50x, 100x, 200x), allow the buffer to fill, stop, then adjust the timebase to 500us/div. in all cases 14kpts of buffer was more than sufficient. this simulated a DSO measuring a 100MHz square wave (as specified by the original poster), with a maximum sample rate of 20Gsps and unencumbered by any bandwidth limitations on the analog front-end.

i would have used an actual 100MHz signal and my 20Gsps DSO, but it is currently at the shop having the oil changed and the fluffy dice re-calibrated...

cheers,
rob :-)

(image from: https://en.wikipedia.org/wiki/Fuzzy_dice#/media/File:1958_Ambassador_4-d_hardtop_fuzzy_dice.jpg)
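To make the above concrete without a scope on the bench, here is a minimal Python/numpy sketch of the same thought experiment: an ideal 100MHz square wave (no front-end bandwidth limit, as assumed in the post) sampled at 2x, 20x and 200x its fundamental, so you can see what actually lands in the acquisition buffer at each rate. The script and its parameters are illustrative only, not the procedure that was run on the Siglent.

--- Code: ---
# Minimal sketch: sample an ideal 100 MHz square wave at a few multiples
# of its fundamental and inspect the captured record.  Assumes an ideal
# front-end with no bandwidth limit; all numbers are illustrative.
import numpy as np
import matplotlib.pyplot as plt

F_SIG = 100e6                                    # 100 MHz square wave


def square(t, f=F_SIG, phase=0.3):
    # ideal +/-1 square wave; the small phase offset keeps the sample
    # clock from landing exactly on the transitions
    return np.sign(np.sin(2 * np.pi * f * t + phase))


fig, axes = plt.subplots(3, 1, sharex=True, figsize=(8, 6))
for ax, ratio in zip(axes, (2, 20, 200)):        # sample rate = ratio x signal freq
    fs = ratio * F_SIG                           # 200 MSa/s, 2 GSa/s, 20 GSa/s
    t = np.arange(0.0, 200e-9, 1.0 / fs)         # 200 ns worth of samples
    ax.step(t * 1e9, square(t), where='post', marker='.',
            label=f'{ratio}x fundamental ({fs / 1e9:g} GSa/s)')
    ax.set_ylim(-1.5, 1.5)
    ax.legend(loc='upper right')
axes[-1].set_xlabel('time (ns)')
plt.tight_layout()
plt.show()
--- End code ---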
| 2N3055:
--- Quote from: switchabl on September 13, 2022, 05:29:49 pm ---
--- Quote from: nctnico on September 13, 2022, 04:25:16 pm ---IMHO switching to linear interpolation isn't a good solution either. It is entirely possible that the signal you are looking at still meets Nyquist at a lower sampling rate. The oscilloscope simply doesn't know that. This is a typical case where the user needs to know what he/she is doing.
--- End quote ---
It is not ideal, but I think it makes sense as a default. The oscilloscope can't know, so it shouldn't guess but instead use the safest option. I am not saying you shouldn't be able to override it if you want to. If you have a large touch-screen, it might be nice to have a little warning symbol pop up that you can tap to change the setting, or something. I don't know if anyone actually has that.

I mean, if you actually forced a super low sample rate manually, I guess you had it coming and are probably getting what you wanted. But if you just did a single-shot at a long timebase, the sample rate dropped automatically and then you zoomed in, chances are you didn't want to see massive ringing that isn't actually there.

--- Quote from: 2N3055 on September 13, 2022, 04:53:54 pm ---While I understand he did it to explain sampling, without explaining what was done, people here started discussing how the scope is performing poorly. Any scope would do this, even the Keysights mentioned, if driven this way.
--- End quote ---
No, I realize what he did. I was just surprised it doesn't turn off sinc interpolation when you do that (I am sure you could do it manually). With the Keysights this actually happens more often because they don't have a lot of sample memory, so I know they switch to linear interpolation and don't do this.
--- End quote ---

If you are doing a long single capture, you don't set the scope to its smallest memory setting. He could have set the scope to 14 Mpts and had a 1000x better sample rate. If you want to capture all possible glitches, use peak detect mode.

For single captures like that I set the scope to all the memory it has (200-250 Mpts; I would use more if it were available, although it gets to the point of diminishing returns after a while) and I can get lots of data at the full sample rate.

As for linear/sinc interpolation, that is a matter of visual aesthetics in this case: both are showing nonsense. And as I said, you could set the Siglent to dot mode and get "sort of" ETS, and it would actually show the signal correctly (if the signal is repetitive, of course). Or it would show single dots and give you information on the sparsity of the data, so you know that you should raise the sample rate.
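The 1000x figure follows from how a single-shot capture works: with the record length fixed, the effective sample rate is roughly the memory depth divided by the captured time span. A back-of-the-envelope sketch, assuming a 14-division screen (typical of the Siglent scopes discussed; the exact divisor, and the fact that real scopes snap to a 1-2-5 sequence of rates, will vary):

--- Code: ---
# Rough effective sample rate for a single-shot capture:
#   sample_rate ~= memory_depth / (time_per_div * divisions_on_screen)
# Assumes a 14-division screen; real scopes also snap the rate to a
# 1-2-5 sequence, which is ignored here.

def effective_sample_rate(memory_depth, time_per_div, divisions=14):
    return memory_depth / (time_per_div * divisions)

for depth in (14e3, 14e6):                                 # 14 kpts vs 14 Mpts
    sr = effective_sample_rate(depth, time_per_div=1.0)    # captured at 1 s/div
    print(f'{depth:>12,.0f} pts over 14 s  ->  ~{sr:>12,.0f} Sa/s')

# prints roughly 1 kSa/s for 14 kpts and 1 MSa/s for 14 Mpts: 1000x better
--- End code ---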
| The Electrician:
I figured out what robert.rozee was doing a couple of hours ago, and did the same thing on my Agilent DSO5054 scope. I set the timebase to 50 seconds/division and used the probe cal signal as input. I went back to watching the Queen's last trip to London while waiting for the buffer to fill up. I then switched the timebase to 500 uS/div and set the display to dot mode. Here's what that looks like; the dots are really hard to see, but they're there: [screenshot attached]

Here it is with Agilent's linear interpolation turned on: [screenshot attached]

If you can see sharp corners in linear interpolation mode where there should be smooth curves, the signal is probably sub-sampled. Of course, some sharp corners should be there, such as the corners of a square wave, but if a sine wave looks like it's made of only a few straight lines, that's what I'm referring to.
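That "sharp corners where there should be smooth curves" warning sign is easy to reproduce in simulation. The sketch below (an illustration with made-up numbers, not The Electrician's capture) samples a 1 kHz sine well below Nyquist and joins the stored points with straight lines; the result is an angular, lower-frequency trace instead of the smooth sine that is actually there.

--- Code: ---
# A 1 kHz sine sampled well below Nyquist, then drawn the way a scope in
# linear (vector) mode would draw it: stored points joined by straight
# lines.  Purely a simulation; the numbers are illustrative.
import numpy as np
import matplotlib.pyplot as plt

f_sig = 1e3                      # 1 kHz sine
fs = 1.3e3                       # deliberately below Nyquist: genuinely sub-sampled
T_REC = 10e-3                    # 10 ms record

t_true = np.linspace(0.0, T_REC, 2000)        # what the signal really does
t_samp = np.arange(0.0, T_REC, 1.0 / fs)      # what the scope actually stored

plt.plot(t_true * 1e3, np.sin(2 * np.pi * f_sig * t_true),
         '--', label='actual 1 kHz signal')
plt.plot(t_samp * 1e3, np.sin(2 * np.pi * f_sig * t_samp),
         'o-', label='stored samples joined linearly')
plt.xlabel('time (ms)')
plt.ylabel('amplitude')
plt.legend()
plt.show()
--- End code ---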
| nctnico:
The signal isn't sub-sampled. Just turn sin x/x reconstruction on and you'll see the actual waveform. Linear (vector) display is one of the most useless features of a DSO IMHO.
--- Quote from: 2N3055 on September 13, 2022, 07:44:02 pm ---As for linear/sinc interpolation, that is a matter of visual aesthetics in this case: both are showing nonsense.
--- End quote ---
No, not at all. Both methods will show a waveform that connects all the sample points by definition. Remember, an oscilloscope is there to visualise a waveform. It makes little sense to turn sin x/x off when there are few sample points to work with; it is there to help your brain see and interpret the signal. However, the assumption is that the oscilloscope is set up in such a way that the signal is sampled correctly (either through a high enough sample rate or by using ETS on a periodic signal).
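For readers who want to see why sin x/x helps when the signal is sampled correctly but sparsely, here is a small numpy sketch of the textbook Whittaker-Shannon (sinc) reconstruction compared with linear interpolation, using a sine captured at 2.5 samples per period. It is a generic illustration of the principle, not any vendor's implementation.

--- Code: ---
# sin(x)/x (Whittaker-Shannon) reconstruction vs. linear interpolation of a
# sine that is sampled above Nyquist but with only 2.5 points per period.
# Generic textbook formula, not any particular scope's algorithm.
import numpy as np
import matplotlib.pyplot as plt

fs = 2.5                                       # 2.5 Sa/s on a 1 Hz sine
t_samp = np.arange(0.0, 10.0, 1.0 / fs)        # sample instants
x_samp = np.sin(2 * np.pi * t_samp)            # stored samples

t_fine = np.linspace(0.0, 10.0, 2000)

# Whittaker-Shannon: x(t) = sum_n x[n] * sinc(fs * (t - n/fs))
sinc_recon = np.array([np.sum(x_samp * np.sinc(fs * (t - t_samp))) for t in t_fine])
lin_recon = np.interp(t_fine, t_samp, x_samp)

plt.plot(t_fine, np.sin(2 * np.pi * t_fine), '--', label='actual signal')
plt.plot(t_fine, lin_recon, label='linear interpolation')
plt.plot(t_fine, sinc_recon, label='sin x/x reconstruction')
plt.plot(t_samp, x_samp, 'o', label='samples')
plt.xlabel('time (s)')
plt.legend()
plt.show()
--- End code ---

Near the ends of the record the sinc trace shows some ripple because the summation is truncated; away from the edges it sits on top of the actual sine, while the linear trace stays visibly angular.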