Hi folks - this has long been something that's really bothered me. Why is it that when I change the vertical scaling by even one division, I often run into a situation where I have to adjust my trigger to get reliable triggering again?
Happened today, using a DSOX4034A for an inrush measurement. I was in normal trigger mode (so I didn't have to repeatedly arm), powering the DUT on and off to capture the current probe signal. As I'm homing in on the capture settings, I'm increasing the vertical scale (higher V/div, i.e. 1 V/div to 5 V/div), powering on, adjusting, powering on, adjusting timebase, delay, holdoff, etc., rinse and repeat. Past a certain point, when I increase the vertical scale I stop triggering. Drop back down and I can trigger again. Say I'm measuring a peak inrush of 40 A, and my trigger was set to rising edge at a 2 A level. Now, just for example, I have to raise my trigger level to 10 V to trigger again. Honestly, I don't remember whether I was increasing or decreasing V/div, but you probably get the idea.
My suspicion is that what I'm seeing on screen is essentially what the scope uses to trigger, in "software"? And so with more on-screen vertical dynamic range as V/div increases, I need to pull my trigger level up out of the weeds, so to speak?
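If the trigger really does operate on the digitized samples (an assumption on my part; implementations vary, and I don't know what the DSOX4034A actually does internally), the effect is easy to quantify. Here's a toy calculation assuming a hypothetical 8-bit ADC spanning 8 vertical divisions, showing how a fixed trigger level shrinks toward the noise floor as V/div goes up:

```python
# Toy model (hypothetical numbers): how many ADC codes above screen center
# a fixed trigger level sits, assuming an 8-bit ADC spanning 8 divisions.
ADC_BITS = 8
DIVS = 8  # vertical divisions across the screen

def trigger_level_codes(level_v: float, v_per_div: float) -> float:
    """ADC codes between screen center and the trigger level."""
    full_scale_v = v_per_div * DIVS               # voltage across whole screen
    codes_per_volt = (2 ** ADC_BITS) / full_scale_v
    return level_v * codes_per_volt

# The same 2 V trigger level at three vertical scales:
for v_div in (1, 5, 10):
    codes = trigger_level_codes(2.0, v_div)
    print(f"{v_div:>2} V/div: 2 V level = {codes:5.1f} codes above center")
# ->  1 V/div: 64.0 codes,  5 V/div: 12.8 codes, 10 V/div: 6.4 codes
```

With a few codes of baseline noise plus a trigger hysteresis band (if it's something like half a division, that's 16 codes in this toy model), the same 2 V level that was comfortably clear of the noise at 1 V/div could be buried at 10 V/div, which would match having to raise the level to trigger again.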
Is there absolutely no analog trigger circuitry in modern digital storage oscilloscopes? Why is triggering dependent on the vertical scaling?
I guess maybe I'm hung up on how an analog scope's trigger worked: you set a level and that was that.
This is something I've struggled to explain eloquently, so I'd love to hear your thoughts.