Author Topic: Why is 1M waveforms/sec so impressive?  (Read 12176 times)


Offline jmoleTopic starter

  • Regular Contributor
  • *
  • Posts: 211
  • Country: us
    • My Portfolio
Why is 1M waveforms/sec so impressive?
« on: April 15, 2013, 08:25:50 am »
Don't get me wrong here, I know well the benefits of having a high waveform update rate; but technically, why is something like this so hard to achieve in practice (or so it seems, judging by most of the lower end scopes on the market)?

Ok, take DDR3 for example. 8 GB for what, $50?  12.8GB/s max transfer rate in theory, probably closer to 5 GB/s in practice.

Forget dual channel for a moment, say you've got a 1 GS/s scope, 4 channels, pumping data at 8 bits/sample into some DDR3 memory.

With 2GB of memory/channel, you could capture 2 full seconds of acquisition data at 1 GS/s. Would it be that hard to just analyze the data afterwards and use a "software trigger" to get an insanely high waveform capture rate? Or am I missing something here?
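
As a rough sketch of what I mean by a "software trigger" (Python, purely illustrative; the function and parameter names are my own, assuming a simple rising-edge level trigger):

Code: [Select]
import numpy as np

def software_trigger(samples, level=128, pretrigger=512, window=1024):
    """Scan a captured buffer for rising-edge crossings of 'level' and
    slice out one display record around each trigger point."""
    s = np.asarray(samples)
    # sample indices where the signal crosses 'level' going upwards
    crossings = np.flatnonzero((s[:-1] < level) & (s[1:] >= level)) + 1
    records = []
    for idx in crossings:
        start = idx - pretrigger
        if start < 0 or start + window > len(s):
            continue                      # skip records that fall off the buffer
        records.append(s[start:start + window])
    return records

# 2 s at 1 GS/s is 2e9 samples per channel; just scanning and copying that
# much data takes far longer than the 2 s it took to acquire it.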

 

Offline Balaur

  • Supporter
  • ****
  • Posts: 525
  • Country: fr
Re: Why is 1M waveforms/sec so impressive?
« Reply #1 on: April 15, 2013, 08:38:16 am »
Signal integrity and analog acquisition issues.

It's quite a difficult task to bring the signal (as close as possible to the original) to the input(s) of the ADC(s).
Then ADCs with high sampling rates are quite expensive.

 

Offline jmoleTopic starter

  • Regular Contributor
  • *
  • Posts: 211
  • Country: us
    • My Portfolio
Re: Why is 1M waveforms/sec so impressive?
« Reply #2 on: April 15, 2013, 08:41:17 am »
Signal integrity and analog acquisition issues.

It's quite a difficult task to bring the signal (as close as possible to the original) to the input(s) of the ADC(s).
Then ADCs with high sampling rates are quite expensive.

I think you're confounding sampling rate with update rate. I'm talking about waveform update rate, assuming that you already have a nice front-end and ADC to handle the high sampling rates.
 

Offline Balaur

  • Supporter
  • ****
  • Posts: 525
  • Country: fr
Re: Why is 1M waveforms/sec so impressive?
« Reply #3 on: April 15, 2013, 09:26:21 am »
Yes, sorry, I wrote that message faster than I could think.

I'm even more embarrassed about this mistake since I should have known better. I've actually designed both on-chip sensors (i.e. built from standard logic cells in an ASIC) for fast (tens of ps) transients and FPGA-based hardware testers that had a high-speed data-analysis stage behind the front end.

While I know nothing about the choices or limitations that the designers of commercial devices faced during the development or validation of their systems, here are a few observations from my experience. Please note that in my field the notion of waveforms/second is not used, as the devices I mention don't present waveforms; they present a (data) snapshot whenever the event we are capturing occurs. They are more like logic analysers, although the ASIC sensor did report analog waveforms.

I. You can receive incoming data at pretty high bit rates, that's for sure.

In one of the systems I worked on, there were six 32-bit (+4 parity bits) asynchronous SRAM memories running at around 30 MHz. That makes around 5.76 Gb/s of gap-less incoming data. In another, we had a standard 64-bit DDR2 memory module and some FPGA glue logic (FIFO-based) to provide gap-less, uniform-access-time operation.

II. You have to do something with that data, preferably (or rather, absolutely) in real time.

For many reasons, in our systems the trigger was in hardware (in the FPGA or ASIC). That ensured real-time treatment of the data and the start of the snapshot procedure whenever the trigger occurred. Apart from the snapshot data, everything else was discarded.

I would guess that doing all this in software could be possible, although taxing. Do you need a high-performance CPU just for this? Do you need a real-time OS to guarantee timely execution of the tasks? We didn't use a software-based approach because the estimated CPU power required was too high. Our devices are rather compact, fault-tolerant, low-power, able to work in vacuum, and we use tens of them at a time.

If the real-time criterion is not met, you will certainly accumulate too much data to store or analyze and will be forced to discard some of it.

I would hazard a guess that a software-based trigger is not always feasible in real time because of the required computing power.

III. You need to present the measurements to the user at some point.

This is really more an extension of II. In my case, the measurements were stored in files on the controlling PC. Since the devices only sent a snapshot of the captured event, the only requirement is to have an event rate low enough not to overwork the PC.

Best regards,
Dan
 

Offline jpb

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
Re: Why is 1M waveforms/sec so impressive?
« Reply #4 on: April 15, 2013, 09:35:17 am »
LeCroy used to (and perhaps still does) take this approach of capturing a lot of data and then using a soft trigger. But having a huge buffer just means you need to either stop the capture and then process, or process the data at a high rate anyway. It would also mean that you were looking at data that was 2 seconds old, which may matter if you were combining a scope with a logic analyser, say, with the logic analyser triggering the scope.

To get the very high waveforms per second rate two things need to be done very quickly. One is that a digital trigger is needed rather than analogue in order to process trigger events very quickly. Analogue triggers get time accuracy by using a fast charge slow discharge circuit but this means that time is taken up during the slow discharge. Digital triggers work out the trigger point by interpolation between the samples so can be much faster but take some processing effort to do the interpolation.
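
A rough sketch of that interpolation step (Python, illustrative only, assuming a simple rising-edge trigger):

Code: [Select]
def trigger_time(samples, level, sample_period):
    """Return the sub-sample time of the first rising-edge crossing of 'level',
    found by linear interpolation between the two straddling samples."""
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < level <= b:
            frac = (level - a) / (b - a)   # fractional position between the samples
            return (i - 1 + frac) * sample_period
    return None                            # no trigger in this record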

The second is translating the data from ADC values and time values to video memory. I would think that once the trigger offset is known, the translation of points to screen memory would not take much processing: perhaps some integer additions and a scaling multiply, though if the system was well designed perhaps even the multiply wouldn't be needed (there would be a direct mapping between the 256 ADC values and video memory).
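
Something along these lines is what I have in mind (Python, names purely illustrative): a precomputed lookup table so that plotting each sample is just a table read and an increment, with no multiply.

Code: [Select]
SCREEN_H = 256                             # one display row per 8-bit ADC code
# precomputed: ADC code -> screen row (inverted so higher voltages sit at the top)
ROW_OF_CODE = [SCREEN_H - 1 - code for code in range(SCREEN_H)]

def plot_record(record, screen):
    """Accumulate one triggered record into a 2-D hit-count buffer 'screen'."""
    width = len(screen[0])
    for x, code in enumerate(record[:width]):
        screen[ROW_OF_CODE[code]][x] += 1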

Of course an additional complication is providing a gradient of intensities on the screen rather than just having on or off pixels.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #5 on: April 15, 2013, 12:41:55 pm »
Don't get me wrong here, I know well the benefits of having a high waveform update rate; but technically, why is something like this so hard to achieve in practice (or so it seems, judging by most of the lower end scopes on the market)?

Simply because of price. To maintain low price levels, cheap scopes usually use very cheap low-power microcontrollers which often don't have the necessary internal and external bandwidth and often even lack a suitable dedicated graphics core. In addition, the interface between sample memory and processing is often quite narrow. Of course there are enough powerful microcontrollers out there which do come with a suitable integrated GPU, but they are more expensive and also more complicated to develop for.

The other question is how important very high wfm rates actually are. I would say in many applications where cheaper scopes are used the waveform rate is not a limiting factor.

Quote
With 2GB of memory/channel, you could capture 2 full seconds of acquisition data at 1 GS/s. Would it be that hard to just analyze the data afterwards and use a "software trigger" to get an insanely high waveform capture rate? Or am I missing something here?

Yes, you do miss something. Memory is not the limiting factor (and for these applications one would probably use something like GDDR5, which depending on the bus width handles >200GB/s easily and is cheap as chips). Increase the memory bus width and you can get to really insane data bandwidth. Memory is really not a problem.

At the end of the day it comes down to cost, and how much a higher wfm rate is worth. More expensive scopes (which nowadays usually run Windows and use PC CPUs and GPUs) do provide better wfm rates, so again it's mostly a pricing thing.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #6 on: April 15, 2013, 12:55:15 pm »
LeCroy used to (and perhaps still does) take this approach of capturing a lot of data and then using a soft trigger. But having a huge buffer just means you need to either stop the capture and then process, or process the data at a high rate anyway. It would also mean that you were looking at data that was 2 seconds old, which may matter if you were combining a scope with a logic analyser, say, with the logic analyser triggering the scope.

Looking at a certain sample (even if it's 2s old) is perfectly fine in many applications. In addition, you can split up memory into segments or set advanced triggers to make sure only those elements that are of interest are actually captured. Or just use scope processing and do the necessary filtering/maths in realtime.

Quote
To get the very high waveforms per second rate two things need to be done very quickly. One is that a digital trigger is needed rather than analogue in order to process trigger events very quickly. Analogue triggers get time accuracy by using a fast charge slow discharge circuit but this means that time is taken up during the slow discharge. Digital triggers work out the trigger point by interpolation between the samples so can be much faster but take some processing effort to do the interpolation.

Yes, but that is not a problem even with older processors, provided the processing architecture is adequate.

Quote
The second is translating the data from ADC values and time values to video memory. I would think that once the trigger offset is known, the translation of points to screen memory would not take much processing: perhaps some integer additions and a scaling multiply, though if the system was well designed perhaps even the multiply wouldn't be needed (there would be a direct mapping between the 256 ADC values and video memory).

Of course an additional complication is providing a gradient of intensities on the screen rather than just having on or off pixels.

Instead of doing the video stuff by hand it's much easier to leave this to a modern GPU using modern APIs like Direct3D or OpenGL, which is how it's done on most modern midrange and highend scopes running Windows anyways.

And I wouldn't be surprised if Agilent uses a similar approach with their DSO-X2000/3000 Series which apparently uses Embedded Windows.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27006
  • Country: nl
    • NCT Developments
Re: Why is 1M waveforms/sec so impressive?
« Reply #7 on: April 15, 2013, 01:39:32 pm »
Don't get me wrong here, I know well the benefits of having a high waveform update rate; but technically, why is something like this so hard to achieve in practice (or so it seems, judging by most of the lower end scopes on the market)?

Simply because of price. To maintain low price levels, cheap scopes usually use very cheap low-power microcontrollers which often don't have the necessary internal and external bandwidth and often even lack a suitable dedicated graphics core. In addition, the interface between sample memory and processing is often quite narrow. Of course there are enough powerful microcontrollers out there which do come with a suitable integrated GPU, but they are more expensive and also more complicated to develop for.
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #8 on: April 15, 2013, 03:12:53 pm »
The Agilent uses the Megazoom ASIC, direct to screen.
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #9 on: April 15, 2013, 03:14:24 pm »
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

And then you get into the realm of DPO/Graduated intensity displays and how that data is built up in memory and displayed in real time etc.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #10 on: April 15, 2013, 03:30:15 pm »
GPU power doesn't matter.

It does, as it affects the waveform update rate.

Quote
A human can deal with only a few updates per second at most.

It's not about visual screen updates. You're ignoring that if the GPU is not fast enough (or there is none) you end up losing information between the rendering phases.

Quote
What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

That's not difficult at all and nowadays can be done with very moderate processing.
« Last Edit: April 15, 2013, 03:32:27 pm by Wuerstchenhund »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #11 on: April 15, 2013, 03:48:52 pm »
The Agilent uses the Megazoom ASIC, direct to screen.

Yes, they use their own ASIC. But that doesn't mean Agilent hasn't built up their own GPU technology (which for a scope could be much simpler than for many other embedded applications anyway), or licensed one of the available GPU IP cores from other parties.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Why is 1M waveforms/sec so impressive?
« Reply #12 on: April 15, 2013, 04:02:52 pm »
A human can deal with only a few updates per second at most.

No, this is wrong.  Tests with Air Force pilots have shown that they could identify a plane on a picture that was flashed for as little as 1/220th of a second - so there is evidence to support the theory that humans can identify discrete pieces of information in something close to ~1/250th of a second.
 

Offline jmoleTopic starter

  • Regular Contributor
  • *
  • Posts: 211
  • Country: us
    • My Portfolio
Re: Why is 1M waveforms/sec so impressive?
« Reply #13 on: April 15, 2013, 04:07:14 pm »
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

And then you get into the realm of DPO/Graduated intensity displays and how that data is built up in memory and displayed in real time etc.

It seems to me that intensity grading can be done in real time, just by using a rolling average / digital low-pass filter on the display data. You don't necessarily need to store the last "n" waveform captures; you just update the average for each display pixel on each pass, the same as a CRT scope does.

I mean, the intensity of a given point on a CRT is directly proportional to how often the beam sweeps past that point on the screen. To me, this doesn't seem like the hard part. You could use an FPGA to do the averaging in real time and act as a framebuffer for the device doing the screen drawing.
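
As a rough sketch of the per-pixel averaging I mean (Python/NumPy, purely illustrative), an exponential decay plus an add per pass behaves a lot like phosphor persistence:

Code: [Select]
import numpy as np

def update_persistence(intensity, hits, decay=0.98, gain=8.0):
    """Blend one batch of waveform hit counts into a persistence buffer.
    'intensity' and 'hits' are 2-D float arrays the size of the display."""
    intensity *= decay                     # fade the old image slightly
    intensity += gain * hits               # brighten pixels hit on this pass
    np.clip(intensity, 0, 255, out=intensity)
    return intensity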

Your point about the limits of human vision is well taken, but then again, that's why intensity-graded displays are so essential nowadays.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #14 on: April 15, 2013, 04:41:12 pm »
It seems to me that intensity grading can be done in real time, just by using a rolling average / digital low-pass filter on the display data. You don't necessarily need to store the last "n" waveform captures; you just update the average for each display pixel on each pass, the same as a CRT scope does.

Yes, intensity grading is quite simple. It gets a little bit more complicated when we get to color grading, though, as this means maintaining a histogram, but again this is nothing that can't be done at high update rates with modern processors.
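
As a rough illustration of what maintaining that histogram buys you (Python/NumPy, purely illustrative), the per-pixel hit counts get mapped through a palette, with a log scale so rare events stay visible next to pixels hit millions of times:

Code: [Select]
import numpy as np

def colour_grade(hit_counts, palette):
    """Map a 2-D array of per-pixel hit counts to RGB via an (N, 3) palette."""
    levels = np.log1p(hit_counts.astype(float))
    peak = levels.max()
    if peak > 0:
        levels /= peak                     # normalise to 0..1
    idx = (levels * (len(palette) - 1)).astype(int)
    return palette[idx]                    # (H, W, 3) image ready for display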

Quote
Your point about the limits of human vision is well taken, but then again, that's why intensity-graded displays are so essential nowadays.

But as marmad correctly pointed out, that point was wrong. The human capacity to take in information is not a static figure; it depends on various factors (for example, the visual perception rate in peripheral vision is much higher than in the central area, which is what experienced pilots use to spot other aircraft in their vicinity). The human brain can be surprisingly capable at times.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27006
  • Country: nl
    • NCT Developments
Re: Why is 1M waveforms/sec so impressive?
« Reply #15 on: April 15, 2013, 04:44:47 pm »
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

And then you get into the realm of DPO/Graduated intensity displays and how that data is built up in memory and displayed in real time etc.
It's still dead slow... How long does it take to blink your eyes? The information should stay on the display at least that long. Back in the old days when scopes had 16-bit CPUs at 16 MHz, the CPU and the memory system would be the bottleneck, but nowadays that's hardly an issue.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27006
  • Country: nl
    • NCT Developments
Re: Why is 1M waveforms/sec so impressive?
« Reply #16 on: April 15, 2013, 04:52:36 pm »
GPU power doesn't matter.

It does, as it affects the waveform update rate.

Quote
A human can deal with only a few updates per second at most.

It's not about visual screen updates. You're ignoring that if the GPU is not fast enough (or there is none) you end up losing information between the rendering phases.
Sorry, but that is not how a scope works. The GPU should only be bothered with displaying an image. The trace information (whether from one sweep or multiple sweeps combined) should be composed in an earlier step. The GPU only deals with drawing. The update rate of a display is somewhere between 30 and 60Hz. There is no need to go any faster because TFT screens have a response time of several ms.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
Re: Why is 1M waveforms/sec so impressive?
« Reply #17 on: April 15, 2013, 04:55:36 pm »
With 2GB of memory/channel, you could capture 2 full seconds of acquisition data at 1 GS/s. Would it be that hard to just analyze the data afterwards and use a "software trigger" to get an insanely high waveform capture rate? Or am I missing something here?

No, it wouldn't be hard, but it only produces insanely high waveform update rates if you can do it insanely quickly.

1M waveforms/second requires you to identify the trigger point and merge display-width samples surrounding it into a graduated-intensity display buffer in 1us. It also requires you to be capturing new data at the same time.

If the display is 1024 samples wide, that is 1GB/s going into sample memory, 1GB/s coming out of sample memory, 1GB/s coming out of the display buffer, a mathematical operation, and 1GB/s going back to the display buffer. And that is with on-the-fly trigger point detection and no consideration for 'fading' the whole display buffer or sucking a bit of data out for the actual display. Also consider that the display buffer access is random, not sequential. So yes, it is impressive.
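
The back-of-envelope numbers (Python, using the same assumptions as above) look like this:

Code: [Select]
WFMS_PER_S   = 1_000_000        # target waveform rate
SAMPLES_WFM  = 1024             # display-width record per waveform
BYTES_SAMPLE = 1                # 8-bit ADC codes

acq_in   = 1e9 * BYTES_SAMPLE                        # 1 GS/s into sample memory
rec_out  = WFMS_PER_S * SAMPLES_WFM * BYTES_SAMPLE   # records read back out
disp_rmw = 2 * WFMS_PER_S * SAMPLES_WFM              # read + write of the display buffer

print(f"acquisition    : {acq_in / 1e9:.1f} GB/s")
print(f"record readout : {rec_out / 1e9:.1f} GB/s")
print(f"display R/M/W  : {disp_rmw / 1e9:.1f} GB/s (random access)")
# roughly 1 GB/s each, sustained and concurrent, before any fading or screen refresh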
« Last Edit: April 15, 2013, 05:00:44 pm by Rufus »
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Why is 1M waveforms/sec so impressive?
« Reply #18 on: April 16, 2013, 11:33:18 am »
It's still dead slow... How long does it take to blink your eyes? The information should stay on the display at least that long. Back in the old days when scopes had 16-bit CPUs at 16 MHz, the CPU and the memory system would be the bottleneck, but nowadays that's hardly an issue.

Huh? What does blinking your eyes have to do with it? It's irrelevant at the level of events happening per second - I stare at my DSO screen for several seconds in a row without blinking. As mentioned already, tests have shown pilots identifying planes (i.e. making a distinction between a non-airplane shape and an airplane shape) in hundredths of seconds. Since one of the reasons for fast update rate DSOs is to 'notice' fast glitches, what do you suppose our speed of cognition might be to simply notice a trace in the wrong place for a fraction of a second?
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #19 on: April 16, 2013, 12:04:20 pm »
Your point about the limits of human vision is well taken, but then again, that's why intensity-graded displays are so essential nowadays.

If you turn on infinite intensity mode (as you likely would to detect glitches etc) then you can walk away for a week and come back and your glitch will still be there displayed.
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #20 on: April 16, 2013, 12:08:10 pm »
Yes, they use their own ASIC. But that doesn't mean Agilent hasn't build up their own GPU technology (which for a scope could me much simpler than for many other embedded applications anyways), or licensed any of the available GPU IP cores from other parties.

Almost by definition there is a GPU inside the Agilent ASIC; it's essentially a graphics processing unit. What's your point?
I think it's very unlikely they have used an off-the-shelf GPU architecture; it would be purpose-written Verilog/VHDL to do the specific task and nothing more.
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
Re: Why is 1M waveforms/sec so impressive?
« Reply #21 on: April 16, 2013, 12:17:04 pm »
As mentioned already, tests have shown pilots identifying planes (i.e. making a distinction between a non-airplane shape and an airplane shape) in hundredths of seconds.

And I can make a distinction between an LED that flashes for 1 millionth of a second and one that doesn't flash at all. That doesn't mean I can process 1 million images a second. 

The eye is an integrator and that integration is what lets me see a 1us flash long enough to perceive it. That integration limits the eye's bandwidth to a few tens of Hz and nature is smart enough not to create a brain which can process information faster than its sensors can acquire it.
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 646
  • Country: be
Re: Why is 1M waveforms/sec so impressive?
« Reply #22 on: April 16, 2013, 12:22:05 pm »
We know that the scope is fast enough to capture the data (2 GS/s really does go into memory).
The obvious problem is processing the data.
A 60 Hz screen means that every 1/60 s you want a complete new update on the screen of everything that happened in that 1/60 s: 2 GS / 60 = 33 MS (or MB), which is the amount you would need to process in 1/60 s, and obviously that is a lot. Edit: in other words, a processing speed of 2 GB/s for the CPU.

Skipping time is what is done to keep up.
But it also means that when your sample rate drops you have fewer samples to process, hence you may not have to skip any time. So a higher waveform rate means a faster/bigger processor, so you will have less skipped time at higher frequencies (but likely also a hotter, louder fan).
« Last Edit: April 16, 2013, 01:26:41 pm by KedasProbe »
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Why is 1M waveforms/sec so impressive?
« Reply #23 on: April 16, 2013, 12:23:39 pm »
And I can make a distinction between an LED that flashes for 1 millionth of a second and one that doesn't flash at all. That doesn't mean I can process 1 million images a second. 

Who said it could, would, or should? Go back and re-read the rest of my post, since you seem to have missed the point.

That integration limits the eye's bandwidth to a few tens of Hz and nature is smart enough not to create a brain which can process information faster than its sensors can acquire it.

So the research involving pilots - where they could 'process the information' that another plane was nearby in hundreds of Hz - is incorrect?
« Last Edit: April 16, 2013, 12:34:54 pm by marmad »
 

Offline Gunb

  • Regular Contributor
  • *
  • Posts: 221
  • Country: de
Re: Why is 1M waveforms/sec so impressive?
« Reply #24 on: April 16, 2013, 01:05:16 pm »
Your point about the limits of human vision is well taken, but then again, that's why intensity-graded displays are so essential nowadays.

If you turn on infinite intensity mode (as you likely would to detect glitches etc) then you can walk away for a week and come back and your glitch will still be there displayed.

Exactly, the most important remark in this thread.

The likelihood of catching a rare glitch is independent of the screen refresh rate; it depends on the waveform capture rate. Once caught and stored in the screen buffer with intensity set to infinite, it will appear on screen at 60 Hz.

Even with a 25 Hz screen and the same wfm/s, the likelihood of catching the glitch would be the same; only the screen flicker would be worse.
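
A quick, idealised illustration of that point (Python, numbers purely made up): the chance of catching a short random glitch scales with the fraction of real time the scope is actually acquiring, which is set by the waveform rate, not by the screen refresh.

Code: [Select]
def capture_probability(wfm_per_s, acq_window_s):
    """Fraction of real time the scope is not blind (idealised, no overlap)."""
    return min(wfm_per_s * acq_window_s, 1.0)

ACQ_WINDOW = 1e-6   # assume 1 us acquired per trigger
for rate in (1_000, 100_000, 1_000_000):
    p = capture_probability(rate, ACQ_WINDOW)
    print(f"{rate:>9} wfm/s -> {p:.1%} chance of seeing a given glitch")
# 1 kwfm/s -> 0.1 %, 100 kwfm/s -> 10 %, 1 Mwfm/s -> 100 % in this idealised case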
 

