
Oscilloscope Zoom Out Quirk


2N3055:
To me, the convoluted process is: set the memory depth, mentally calculate what timescale that gives at the target sample rate, mentally calculate how much of that must be set as pretrigger to make sure I have enough pre-trigger capture once I'm at the target timebase. Then I set the target timebase tightly around the trigger and keep capturing until I find something interesting, and then, without any sense of where exactly in the whole buffer I am, I keep twiddling with the timebase, searching the data for something interesting.
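The mental arithmetic described above can be sketched in a few lines. All figures (memory depth, sample rate, pretrigger time) are illustrative assumptions, not values from any particular scope:

```python
# Capture-planning arithmetic for a fixed memory depth (illustrative values).
MEM_DEPTH = 10_000_000        # samples of acquisition memory (assumed)
TARGET_RATE = 1_000_000_000   # desired sample rate, 1 GSa/s (assumed)

# Total captured time at that rate:
capture_time = MEM_DEPTH / TARGET_RATE          # 10 ms of signal

# With 12 horizontal divisions, the equivalent timebase for the full buffer:
DIVISIONS = 12
timebase_per_div = capture_time / DIVISIONS     # ~833 us/div

# If we need at least 2 ms of signal before the trigger, the pretrigger
# position must be at least that fraction of the record:
pretrigger_fraction = 0.002 / capture_time      # 20% of the buffer

print(f"{capture_time * 1e3:.1f} ms total, "
      f"{timebase_per_div * 1e6:.0f} us/div, "
      f"pretrigger >= {pretrigger_fraction:.0%}")
```

Three separate quantities to keep in your head before the first trigger ever fires, which is the point being made.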

The problem is that Dave misunderstood (or ignored) the original reason why Nico is such a proponent of this.
Nico uses it for slow, long captures, where he looks at long bursts of data on, say, a CAN bus. He keeps looking at one packet or event, and when that event happens he wants to know what happened around it. He doesn't stop captures, but works in RUN/Normal triggering mode (not Auto), from trigger event to trigger event, and he initiates data transfers from the equipment manually, or has them set up to occur slowly enough that he has time to stop the scope manually if needed.
It is very deliberate; he doesn't mind that the scope needs 1 second per capture to process, and it IS NOT interactive scope work. You cannot have interactive scope work at 2 triggers per second.

As I already documented, the Keysight scope DOESN'T work like that in RUN mode (Nico uses his R&S RTM3000 for this). In RUN mode, in between triggers, it has only a screenful of data available. Only after you STOP will it gather all the buffers and reassemble them. And if you are lucky enough to be working at less than 5 us/div, you will have data up to 20 us/div. At 20 us/div and up (50 us/div in single mode), there is NO data outside the screen...

I had a DS1074Z. I pretty much never used fixed memory length; I used AUTO mode almost exclusively. Setting a scope to capture too long a record on every trigger makes the scope slow. Every one of them. One of the reasons Tektronix scopes got their bad reputation is manual memory settings: people set them to 20 Mpts and then complain that the scope is unresponsive.
Suddenly that is supposed to be a good thing. It is not. That being said, I DID USE IT from time to time, for instance to look at a gated PSU startup: set it to MAX memory, set the timebase for the switch-on event, look at it, and then, if there was something wrong, zoom out to see what was happening around it. But with Picoscope's excellent zoom implementation, it behaves exactly the same the other way around.
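The responsiveness argument above can be put in rough numbers. The sketch below assumes a scope that can post-process a fixed number of samples per second; the figure is invented for illustration, and real scopes vary widely:

```python
# Rough model of why a deep fixed memory makes a scope feel sluggish:
# every capture must be processed before the scope can re-arm, so the
# trigger rate is bounded by processing throughput / record length.
PROCESS_RATE = 50_000_000   # samples/s of post-processing throughput (assumed)

def max_trigger_rate(record_len: int) -> float:
    """Upper bound on waveforms/s when processing dominates re-arm time."""
    return PROCESS_RATE / record_len

for pts in (14_000, 1_000_000, 20_000_000):
    print(f"{pts:>10,} pts -> {max_trigger_rate(pts):10.1f} wfm/s")
```

At 20 Mpts this assumed scope manages only a couple of waveforms per second: exactly the "set it to 20 Mpts and then complain" scenario.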

The Siglent 2000X+ seems to work exactly like the Picoscope. And it is not a "badly implemented manual memory length", but an "auto memory management with maximum length" setting. It is there as a strategy control, balancing huge memory against how quickly the sample rate drops off as you go to slower timebases. Siglent should have explained and/or named this option better. Siglent doesn't have a "fixed manual memory size".
Also, if it is possible, they should think about implementing a Keysight-like feature as a user-configurable option: give up history mode and do single-shot captures at the full set length. That would make them almost exactly the same as Keysight (the same at slow timebases). They might even go "full Keysight" and set a minimum sampled length. Whatever.
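A sketch of how an "auto memory management with maximum length" policy like the one described above might behave: the scope holds its maximum sample rate while the capture window fits in memory, then trades rate for time. The policy and all numbers are assumptions for illustration, not Siglent's actual firmware logic:

```python
# Illustrative auto-memory policy: keep the maximum sample rate while the
# capture window fits in memory, then drop the rate so the window still fits.
MAX_RATE = 2_000_000_000   # 2 GSa/s, assumed maximum sample rate
MAX_MEM = 200_000_000      # 200 Mpts, assumed user-set memory ceiling
DIVISIONS = 12

def sample_rate_for(timebase_s_per_div: float) -> float:
    """Sample rate this policy would pick at a given timebase."""
    window = timebase_s_per_div * DIVISIONS   # total captured time on screen
    rate_to_fill = MAX_MEM / window           # rate that exactly fills memory
    return min(MAX_RATE, rate_to_fill)

for tb in (1e-6, 1e-3, 1e-1):                 # 1 us/div .. 100 ms/div
    print(f"{tb:g} s/div -> {sample_rate_for(tb):.3g} Sa/s")
```

Raising the memory ceiling pushes the roll-off point toward slower timebases, which is the trade-off the setting exists to control.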

That being said, if customers think this is necessary, manufacturers should try to address it.

nctnico:

--- Quote from: 2N3055 on June 11, 2020, 12:46:03 pm ---To me, the convoluted process is: set the memory depth, mentally calculate what timescale that gives at the target sample rate, mentally calculate how much of that must be set as pretrigger to make sure I have enough pre-trigger capture once I'm at the target timebase. Then I set the target timebase tightly around the trigger and keep capturing until I find something interesting, and then, without any sense of where exactly in the whole buffer I am, I keep twiddling with the timebase, searching the data for something interesting.

--- End quote ---
That is not how it works! If you don't get it then you don't get it, but by ridiculing other people's workflow you only make a fool out of yourself.

2N3055:

--- Quote from: nctnico on June 11, 2020, 12:57:22 pm ---
--- Quote from: 2N3055 on June 11, 2020, 12:46:03 pm ---To me, the convoluted process is: set the memory depth, mentally calculate what timescale that gives at the target sample rate, mentally calculate how much of that must be set as pretrigger to make sure I have enough pre-trigger capture once I'm at the target timebase. Then I set the target timebase tightly around the trigger and keep capturing until I find something interesting, and then, without any sense of where exactly in the whole buffer I am, I keep twiddling with the timebase, searching the data for something interesting.

--- End quote ---
That is not how it works! If you don't get it then you don't get it, but by ridiculing other people's workflow you only make a fool out of yourself.

--- End quote ---

It is exactly how it works. Unlike you, I like to have a guarantee that I will capture something before I start. You do this not because it is the smart thing to do, but because it is the lazy way and you hate zoom mode. So you do it this way. If it works for you, that's nice.
I do it in the opposite direction, get the same results, and find that much easier.
But both you, and now Dave apparently, try to sell this as some panacea that comes without any negative impact. That is not true. It has consequences.
In my mind it is not that Siglent manages memory the wrong way, but that zoom mode should be implemented in a more user-friendly way. I LIKE that zoom mode has DETERMINISTIC dual timebases. They should arrange the windows differently and make them more configurable, but that is a display and user-interface problem, not a capture problem. This is exactly what R&S and Keysight do: the scope internally sets the timebase for a long capture, and then shows you only part of the captured waveform. It is just the UI showing part of the waveform, using basically a "virtual timebase".
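The "virtual timebase" idea above can be sketched as pure bookkeeping: the scope keeps one long record, and the zoom view only selects which samples to draw. Names and numbers are illustrative, not any vendor's implementation:

```python
# Sketch of a "virtual timebase" zoom: the scope captures one long record,
# and the zoom view merely selects a sub-range of samples to display.
DIVISIONS = 12

def zoom_window(record, sample_rate, zoom_tb_per_div, center_s):
    """Return the slice of the captured record that the zoom view shows."""
    span = zoom_tb_per_div * DIVISIONS                  # visible time span
    start = round((center_s - span / 2) * sample_rate)  # first visible sample
    stop = round((center_s + span / 2) * sample_rate)   # one past the last
    return record[max(start, 0):stop]

# 10 ms capture at 100 MSa/s = 1 Mpts; zoom to 10 us/div centred at 5 ms:
record = list(range(1_000_000))
view = zoom_window(record, 100_000_000, 10e-6, 5e-3)
print(len(view))   # 12000 samples shown, out of 1 Mpts captured
```

Nothing about the acquisition changes when the zoom timebase or position moves; only the displayed slice does, which is why this is a UI problem rather than a capture problem.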

And I get it. I read what you wrote on this topic before. 

By the by, it is you who is a proponent of the idea that whoever disagrees with you is stupid. And this is not the first topic in which you have taken that condescending tone.
As I said before, you are a wonderful engineer with vast experience. I respect that. And I have agreed and disagreed with you on many topics, and I will keep doing that according to my own conscience.

nctnico:

--- Quote from: 2N3055 on June 11, 2020, 01:24:40 pm ---
--- Quote from: nctnico on June 11, 2020, 12:57:22 pm ---
--- Quote from: 2N3055 on June 11, 2020, 12:46:03 pm ---To me, the convoluted process is: set the memory depth, mentally calculate what timescale that gives at the target sample rate, mentally calculate how much of that must be set as pretrigger to make sure I have enough pre-trigger capture once I'm at the target timebase. Then I set the target timebase tightly around the trigger and keep capturing until I find something interesting, and then, without any sense of where exactly in the whole buffer I am, I keep twiddling with the timebase, searching the data for something interesting.

--- End quote ---
That is not how it works! If you don't get it then you don't get it, but by ridiculing other people's workflow you only make a fool out of yourself.

--- End quote ---
It is exactly how it works. Unlike you, I like to have a guarantee that I will capture something before I start. You do this not because it is the smart thing to do, but because it is the lazy way and you hate zoom mode. So you do it this way. If it works for you, that's nice.
I do it in the opposite direction, get the same results, and find that much easier.

--- End quote ---
You are free to find a workflow easier, but it still means you go through far more steps to achieve the result than I do (which is also fine by me). The downside you see is that you cannot be 100% sure you capture an event, due to the limit on the captured time interval. I don't care about that (which some people may find weird or even offensive; I'm a 'glass is half full' person). In some cases I have no idea of the time scale anyway. I could spend half an hour trying to look it up, or spend 5 seconds poking with a probe. If the event isn't there, the measurement at least showed the time scale of the signal, and there are alternative ways to capture a specific event, so no worries and no time lost. But when the event is in the memory, it is a massive saving in time and effort, because I did two steps in one: analysed the problem and found the cause. Or fixed the last bug and verified I introduced no new bugs in one go. For most of the stuff I work on, 10 Mpts of memory is deep enough to cover 9 out of 10 cases, but more memory is always better. In the end the result counts; the journey not so much.

Elasia:

--- Quote from: 2N3055 on June 11, 2020, 01:24:40 pm ---
--- Quote from: nctnico on June 11, 2020, 12:57:22 pm ---
--- Quote from: 2N3055 on June 11, 2020, 12:46:03 pm ---To me, the convoluted process is: set the memory depth, mentally calculate what timescale that gives at the target sample rate, mentally calculate how much of that must be set as pretrigger to make sure I have enough pre-trigger capture once I'm at the target timebase. Then I set the target timebase tightly around the trigger and keep capturing until I find something interesting, and then, without any sense of where exactly in the whole buffer I am, I keep twiddling with the timebase, searching the data for something interesting.

--- End quote ---
That is not how it works! If you don't get it then you don't get it, but by ridiculing other people's workflow you only make a fool out of yourself.

--- End quote ---

It is exactly how it works. Unlike you, I like to have a guarantee that I will capture something before I start. You do this not because it is the smart thing to do, but because it is the lazy way and you hate zoom mode. So you do it this way. If it works for you, that's nice.
I do it in the opposite direction, get the same results, and find that much easier.
But both you, and now Dave apparently, try to sell this as some panacea that comes without any negative impact. That is not true. It has consequences.
In my mind it is not that Siglent manages memory the wrong way, but that zoom mode should be implemented in a more user-friendly way. I LIKE that zoom mode has DETERMINISTIC dual timebases. They should arrange the windows differently and make them more configurable, but that is a display and user-interface problem, not a capture problem. This is exactly what R&S and Keysight do: the scope internally sets the timebase for a long capture, and then shows you only part of the captured waveform. It is just the UI showing part of the waveform, using basically a "virtual timebase".

And I get it. I read what you wrote on this topic before. 

By the by, it is you who is a proponent of the idea that whoever disagrees with you is stupid. And this is not the first topic in which you have taken that condescending tone.
As I said before, you are a wonderful engineer with vast experience. I respect that. And I have agreed and disagreed with you on many topics, and I will keep doing that according to my own conscience.

--- End quote ---

I agree it's just a UI problem for the Siglent anyway, now that I figured out how to use the thing lol

I do like the configurables, but they need a couple more, and a way to make it work in a more natural mode by just turning the knob out. If you could hide the preview bar and they put the zoom timebase below with the other timebase data, who could tell the difference?
