Author Topic: Why is 1M waveforms/sec so impressive?  (Read 12175 times)


Offline jmoleTopic starter

  • Regular Contributor
  • *
  • Posts: 211
  • Country: us
    • My Portfolio
Why is 1M waveforms/sec so impressive?
« on: April 15, 2013, 08:25:50 am »
Don't get me wrong here, I know well the benefits of having a high waveform update rate; but technically, why is something like this so hard to achieve in practice (or so it seems, judging by most of the lower end scopes on the market)?

Ok, take DDR3 for example. 8 GB for what, $50?  12.8GB/s max transfer rate in theory, probably closer to 5 GB/s in practice.

Forget dual channel for a moment, say you've got a 1 GS/s scope, 4 channels, pumping data at 8 bits/sample into some DDR3 memory.

With 2 GB of memory/channel, you could capture 2 full seconds of acquisition data at 1 GS/s. Would it be that hard to just analyze the data afterwards and use a "software trigger" to get an insanely high waveform capture rate? Or am I missing something here?
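To make the idea concrete, a naive "software trigger" scanning a captured buffer might look something like this (purely illustrative C, not taken from any real scope's firmware):

Code: [Select]
/* Illustrative sketch only: scan a captured 8-bit buffer for rising-edge
 * crossings of a trigger level and record where they happen. */
#include <stdint.h>
#include <stddef.h>

size_t soft_trigger_scan(const uint8_t *buf, size_t n_samples,
                         uint8_t level, size_t *hits, size_t max_hits)
{
    size_t count = 0;
    for (size_t i = 1; i < n_samples && count < max_hits; i++) {
        if (buf[i - 1] < level && buf[i] >= level)   /* rising crossing */
            hits[count++] = i;
    }
    return count;
}

Even this trivial per-sample compare has to touch every one of the 2 billion captured samples per channel before anything can be drawn.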

 

Offline Balaur

  • Supporter
  • ****
  • Posts: 525
  • Country: fr
Re: Why is 1M waveforms/sec so impressive?
« Reply #1 on: April 15, 2013, 08:38:16 am »
Signal integrity and analog acquisition issues.

It's quite a difficult task to bring the signal (as close as possible to the original) to the input(s) of the ADC(s).
Then ADCs with high sampling rates are quite expensive.

 

Offline jmoleTopic starter

  • Regular Contributor
  • *
  • Posts: 211
  • Country: us
    • My Portfolio
Re: Why is 1M waveforms/sec so impressive?
« Reply #2 on: April 15, 2013, 08:41:17 am »
Signal integrity and analog acquisition issues.

It's quite a difficult task to bring the signal (as close as possible to the original) to the input(s) of the ADC(s).
Then ADCs with high sampling rates are quite expensive.

I think you're confounding sampling rate with update rate. I'm talking about waveform update rate, assuming that you already have a nice front-end and ADC to handle the high sampling rates.
 

Offline Balaur

  • Supporter
  • ****
  • Posts: 525
  • Country: fr
Re: Why is 1M waveforms/sec so impressive?
« Reply #3 on: April 15, 2013, 09:26:21 am »
Yes, sorry, I wrote that message faster than I could think.

I'm even more embarrassed about this mistake since I should have known better. I've actually designed both on-chip sensors (i.e. using standard logic cells in an ASIC) for fast (tens of ps) transients and also FPGA-based hardware testers that, behind the front end, had a high-speed data analysis part.

While I know nothing about the choices or limitations that the designers of commercial devices faced during the development or validation of their systems, here are a few observations from my experience. Please note that in my field the notion of waveforms/second is not used, as the devices I mention don't present waveforms; they present a (data) snapshot whenever the event we are capturing occurs. They are more like logic analysers, although the ASIC sensor did report analog waveforms.

I. You can receive incoming data at pretty high bit rates, that's for sure.

In one of the systems I worked on there were something like 6 x 32-bit (+4 parity bits) asynchronous SRAM memories, working at around 30 MHz. That makes around 5.76 Gb/s of gap-less incoming data. In another one we had a standard 64-bit DDR2 memory module and some FIFO-based glue logic in the FPGA to provide gap-less access with uniform access time.

II. You have to do something with that data, preferably (or rather, absolutely) in real time.

For many reasons, in our systems the trigger was in hardware (on the FPGA or ASIC). That ensured real-time treatment of the data and started the snapshot procedure whenever the trigger occurred. Apart from the snapshot data, everything else was discarded.

I would guess that doing all this in software could be possible, although taxing. Do you need a high-performance CPU just for this? Do you need a real-time OS to guarantee the timed execution of the tasks? We didn't use a software-based approach because the estimated CPU power requirements were too high. Our devices are rather compact, fault-tolerant, low-power, able to work in vacuum, and we use tens of them at a time.

If the real-time criterion is not met, you will certainly accumulate too much data to store or analyze and you will be forced to discard some of it.

I would hazard a guess that a software-based trigger is not always feasible in real time because of the required computing power.

III. You need to present the measurements to the user at some point.

This is really more an extension of II. In my case, the measurements were stored in files on the controlling PC. Since the devices only sent a snapshot of the captured event, the only requirement is an event rate low enough not to overwork the PC.

Best regards,
Dan
 

Offline jpb

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
Re: Why is 1M waveforms/sec so impressive?
« Reply #4 on: April 15, 2013, 09:35:17 am »
LeCroy used to (and perhaps still do) take this approach of capturing a lot of data and then using a soft trigger. But having a huge buffer just means you need to either stop the capture and then process, or process the data at a high rate anyway. It would also mean that you were looking at data that was 2 seconds old, which may matter if you were combining a scope with a logic analyser, say, with the logic analyser triggering the scope.

To get the very high waveforms-per-second rate, two things need to be done very quickly. One is that a digital trigger is needed rather than an analogue one, in order to process trigger events very quickly. Analogue triggers get their time accuracy by using a fast-charge, slow-discharge circuit, but this means that time is taken up during the slow discharge. Digital triggers work out the trigger point by interpolation between the samples, so they can be much faster, but they take some processing effort to do the interpolation.
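As an illustration of that interpolation step (a minimal sketch of the general idea; the exact method any particular scope uses isn't given here), the sub-sample trigger position can be estimated from the two samples that straddle the trigger level on a rising edge:

Code: [Select]
#include <stdint.h>

/* Fraction of a sample period after 'below' at which the trigger level is
 * crossed, assuming the signal is roughly linear between the two samples. */
double trigger_fraction(uint8_t below, uint8_t above, uint8_t level)
{
    return (double)(level - below) / (double)(above - below);
}

That fractional offset is what lets each acquired waveform be placed on the display grid with sub-sample accuracy, so overlaid sweeps don't jitter horizontally.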

The second is translating the data from ADC values and time values to video memory. I would think that once the trigger offset is known, the translation of points to screen memory would not take much processing: perhaps some integer additions and a scaling multiply, but if the system was well designed perhaps even the multiply wouldn't be needed (there would be a direct mapping between the 256 ADC values and video memory).

Of course an additional complication is providing a gradient of intensities on the screen rather than just having on or off pixels.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #5 on: April 15, 2013, 12:41:55 pm »
Don't get me wrong here, I know well the benefits of having a high waveform update rate; but technically, why is something like this so hard to achieve in practice (or so it seems, judging by most of the lower end scopes on the market)?

Simply because of price. To maintain low price levels, cheap scopes usually use very cheap low-power microcontrollers which often don't have the necessary internal and external bandwidth, and often even lack a suitable dedicated graphics core. In addition, the interface between sample memory and processing is often quite narrow. Of course there are enough powerful microcontrollers out there which do come with a suitable integrated GPU, but they are more expensive and also more complicated to develop for.

The other question is how important very high wfm rates actually are. I would say in many applications where cheaper scopes are used the waveform rate is not a limiting factor.

Quote
With 2GB of memory/channel, you could capture 2 full seconds of acquisition data at 1 GS/s.  Would it be that hard to just analyze the data afterwards and use a "software trigger" to get an insanely high waveform capture rate? Or am i missing something here?

Yes, you do miss something. Memory is not the limiting factor (and for these applications one would probably use something like GDDR5, which depending on the bus width handles >200 GB/s easily and is cheap as chips). Increase the memory bus width and you can get to really insane data bandwidth. Memory is really not a problem.

At the end of the day it comes down to cost, and how much a higher wfm rate is worth. More expensive scopes (which nowadays usually run Windows and use PC CPUs and GPUs) do provide better wfm rates, so again it's mostly a pricing thing.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #6 on: April 15, 2013, 12:55:15 pm »
LeCroy used to (or perhaps still do) take this approach of capturing a lot of data and then using soft-trigger. But having a huge buffer just means you need to either stop the capture and then process or you need to process the data at a high rate anyway. It would also mean that you were looking at data that was 2 seconds old which may matter if you were combining a scope with a logic analyser say with the logic analyser triggering the scope.

Looking at a certain sample (even if it's 2 s old) is perfectly fine in many applications. In addition, you can split up memory into segments or set advanced triggers to make sure only those elements that are of interest are actually captured. Or just use scope processing and do the necessary filtering/maths in real time.

Quote
To get the very high waveforms per second rate two things need to be done very quickly. One is that a digital trigger is needed rather than analogue in order to process trigger events very quickly. Analogue triggers get time accuracy by using a fast charge slow discharge circuit but this means that time is taken up during the slow discharge. Digital triggers work out the trigger point by interpolation between the samples so can be much faster but take some processing effort to do the interpolation.

Yes, but that is not a problem even with older processors, provided the processing architecture is adequate.

Quote
The second is translating the data from ADC values and time values to video memory. I would think that once the trigger offset is known the translation of points to screen memory would not take much processing perhaps some integer additions and perhaps a scaling multiply but if the system was well designed perhaps even the multiply wouldn't be needed (there would be a direct mapping between the 256 ADC values and video memory).

Of course an additional complication is providing a gradient of intensities on the screen rather than just having on or off pixels.

Instead of doing the video stuff by hand, it's much easier to leave this to a modern GPU using modern APIs like Direct3D or OpenGL, which is how it's done on most modern mid-range and high-end scopes running Windows anyway.

And I wouldn't be surprised if Agilent uses a similar approach with their DSO-X2000/3000 Series which apparently uses Embedded Windows.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27006
  • Country: nl
    • NCT Developments
Re: Why is 1M waveforms/sec so impressive?
« Reply #7 on: April 15, 2013, 01:39:32 pm »
Don't get me wrong here, I know well the benefits of having a high waveform update rate; but technically, why is something like this so hard to achieve in practice (or so it seems, judging by most of the lower end scopes on the market)?

Simply because of price. To maintain low price levels, cheap scopes usually use very cheap low-power microcrontrollers which often don't have the necessary internal and external bandwidth and often even lack a suitable dedicated graphics core. In addition, the interface between sample memory and processing is often quite narrow. Of course there are enough powerful microcontrollers out there which do come with a suitable integrated GPU, but they are more expensive and also more complicated to develop for.
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #8 on: April 15, 2013, 03:12:53 pm »
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #9 on: April 15, 2013, 03:14:24 pm »
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

And then you get into the realm of DPO/Graduated intensity displays and how that data is built up in memory and displayed in real time etc.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #10 on: April 15, 2013, 03:30:15 pm »
GPU power doesn't matter.

It does, as it affects the waveform update rate.

Quote
A human can deal with only a few updates per second at most.

It's not about visual screen updates. You ignore that if the GPU is not fast enough (or there is none), you end up losing information between the rendering phases.

Quote
What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

That's not difficult at all and nowadays can be done with very moderate processing.
« Last Edit: April 15, 2013, 03:32:27 pm by Wuerstchenhund »
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #11 on: April 15, 2013, 03:48:52 pm »
The Agilent uses the Megazoom ASIC, direct to screen.

Yes, they use their own ASIC. But that doesn't mean Agilent hasn't built up their own GPU technology (which for a scope can be much simpler than for many other embedded applications anyway), or licensed one of the available GPU IP cores from other parties.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Why is 1M waveforms/sec so impressive?
« Reply #12 on: April 15, 2013, 04:02:52 pm »
A human can deal with only a few updates per second at most.

No, this is wrong.  Tests with Air Force pilots have shown that they could identify a plane on a picture that was flashed for as little as 1/220th of a second - so there is evidence to support the theory that humans can identify discrete pieces of information in something close to ~1/250th of a second.
 

Offline jmoleTopic starter

  • Regular Contributor
  • *
  • Posts: 211
  • Country: us
    • My Portfolio
Re: Why is 1M waveforms/sec so impressive?
« Reply #13 on: April 15, 2013, 04:07:14 pm »
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

And then you get into the realm of DPO/Graduated intensity displays and how that data is built up in memory and displayed in real time etc.

It seems to me that intensity grading can be done in real time, just by using a rolling average / digital low-pass filter on the display data. You don't necessarily need to store the last "n" waveform captures; you just update the average for each display pixel on each pass, same as a CRT scope does.

I mean, the intensity of a given point on a CRT is directly proportional to how often the beam sweeps past that point on the screen. To me, this doesn't seem like the hard part. You could use an FPGA to do the averaging in real time and act as a framebuffer for the device doing the screen drawing.
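Something like the following per-pixel filter illustrates the idea (just a sketch; the buffer size and the 1/8 filter coefficient are made-up numbers):

Code: [Select]
#include <stdint.h>

#define W 1024                          /* display width in pixels  */
#define H 256                           /* display height in pixels */

static uint16_t intensity[H][W];        /* persistent per-pixel brightness */

/* Blend one pass worth of hits into the intensity map.
 * hit[y][x] is non-zero if any waveform touched that pixel since the last pass. */
void update_intensity(const uint8_t hit[H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int target = hit[y][x] ? 0xFFFF : 0;
            /* first-order low-pass: move 1/8 of the way towards the target */
            intensity[y][x] += (target - intensity[y][x]) / 8;
        }
}

Each pass only needs one small update per pixel, rather than replaying the last "n" stored waveforms.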

Your point about the limits of human vision is well taken though, but then again, that's why intensity-graded displays are so essential nowadays.
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #14 on: April 15, 2013, 04:41:12 pm »
It seems to me that intensity grading can be done real-time, just by using a rolling average / digital low-pass filter on the display data. You don't necessarily need to store the last "n" waveform captures, you just update the average for each display pixel on each pass, same as a CRT scope does.

Yes, intensity grading is quite simple. It gets a little bit more complicated when we get to color grading, though, as this means maintaining a histogram, but again this is nothing that can't be done at high update rates with modern processors.
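Put another way (a sketch under assumptions, not any particular vendor's implementation), the "histogram" is just a hit counter per display pixel which gets mapped to a colour index when the frame is drawn:

Code: [Select]
#include <stdint.h>

#define W 1024
#define H 256

static uint32_t hits[H][W];                 /* per-pixel hit-count histogram */

void count_hit(int x, int y)                /* called by the trace plotter   */
{
    hits[y][x]++;
}

uint8_t colour_index(int x, int y, uint32_t max_hits)
{
    /* scale 0..max_hits to a 0..255 palette index ("cold" to "hot") */
    return max_hits ? (uint8_t)(((uint64_t)hits[y][x] * 255u) / max_hits) : 0;
}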

Quote
Your point about the limits of human vision is well received though, but then again, that's why intensity graded displays are so essential nowadays.

But as marmad correctly said, his point was wrong. The human information-processing capability is not a static figure, but depends on various factors (for example, the visual perception rate in peripheral vision is much higher than in the center area, which is what experienced pilots use to recognize other aircraft in their vicinity). The human brain can be surprisingly capable at times.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27006
  • Country: nl
    • NCT Developments
Re: Why is 1M waveforms/sec so impressive?
« Reply #15 on: April 15, 2013, 04:44:47 pm »
GPU power doesn't matter. A human can deal with only a few updates per second at most. What you need is hardware which can combine several sweeps into one trace. Getting the triggers properly aligned is one of the things which makes this difficult.

And then you get into the realm of DPO/Graduated intensity displays and how that data is built up in memory and displayed in real time etc.
It's still dead slow... How long does it take to blink your eyes? The information should stay on the display at least longer than that. Back in the old days, when scopes had 16-bit CPUs at 16 MHz, the CPU and the memory system would be the bottleneck, but nowadays that's hardly an issue.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 27006
  • Country: nl
    • NCT Developments
Re: Why is 1M waveforms/sec so impressive?
« Reply #16 on: April 15, 2013, 04:52:36 pm »
GPU power doesn't matter.

It does, as it affects the waveform update rate.

Quote
A human can deal with only a few updates per second at most.

It's not about visual screen updates. You ignore that if the GPU is not fast enough (or there is none) that you end up loosing information between the rendering phases.
Sorry, but that is not how a scope works. The GPU should only be bothered with displaying an image. The trace information (whether from one sweep or multiple sweeps combined) should be composed in an earlier step; the GPU only deals with drawing. The update rate of a display is somewhere between 30 and 60 Hz. There is no need to go any faster because TFT screens have a response time of several ms.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
Re: Why is 1M waveforms/sec so impressive?
« Reply #17 on: April 15, 2013, 04:55:36 pm »
With 2GB of memory/channel, you could capture 2 full seconds of acquisition data at 1 GS/s.  Would it be that hard to just analyze the data afterwards and use a "software trigger" to get an insanely high waveform capture rate? Or am i missing something here?

No, it wouldn't be hard, but it only produces insanely high waveform update rates if you can do it insanely quickly.

1M waveforms/second requires you to identify the trigger point and merge a display-width of samples surrounding it into a graduated-intensity display buffer in 1 µs. It also requires you to be capturing new data at the same time.

If the display is 1024 samples wide that is 1 GB/s going into sample memory, 1 GB/s coming out of sample memory, 1 GB/s coming out of the display buffer, a mathematical operation, and 1 GB/s going back into the display buffer. And that is with on-the-fly trigger point detection and no allowance for 'fading' the whole display buffer or pulling a bit of data out for the actual display. Also consider that the display buffer access is random, not sequential. So yes, it is impressive.
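Putting those numbers (1M wfm/s, 1024 displayed samples per waveform, 1 byte per sample, a read-modify-write on the intensity buffer for every sample) into a quick back-of-the-envelope calculation, just to spell out the arithmetic:

Code: [Select]
#include <stdio.h>

int main(void)
{
    const double adc_in     = 1e9;          /* 1 GS/s x 1 B into sample memory   */
    const double plot_out   = 1e6 * 1024;   /* samples pulled out for plotting   */
    const double disp_read  = plot_out;     /* display buffer read (for the RMW) */
    const double disp_write = plot_out;     /* display buffer write back         */

    printf("into sample memory   : %.2f GB/s\n", adc_in     / 1e9);
    printf("out of sample memory : %.2f GB/s\n", plot_out   / 1e9);
    printf("display buffer read  : %.2f GB/s\n", disp_read  / 1e9);
    printf("display buffer write : %.2f GB/s\n", disp_write / 1e9);
    printf("sustained total      : %.2f GB/s\n",
           (adc_in + plot_out + disp_read + disp_write) / 1e9);
    return 0;
}

Roughly 4 GB/s of sustained traffic, and the display-buffer part of it is random access rather than sequential.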
« Last Edit: April 15, 2013, 05:00:44 pm by Rufus »
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Why is 1M waveforms/sec so impressive?
« Reply #18 on: April 16, 2013, 11:33:18 am »
Its still dead slow... How long does it take to blink your eyes? The information should be at least longer on the display than that. Back in the old days when scopes had 16 bit CPU's at 16MHz the CPU and the memory system would be the bottleneck but nowadays thats hardly an issue.

Huh? What does blinking your eyes have to do with it? It's irrelevant at the level of events happening per second - I stare at my DSO screen for several seconds in a row without blinking. As mentioned already, tests have shown pilots identifying planes (i.e. making a distinction between a non-airplane shape and an airplane shape) in hundredths of seconds. Since one of the reasons for fast update rate DSOs is to 'notice' fast glitches, what do you suppose our speed of cognition might be to simply notice a trace in the wrong place for a fraction of a second?
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #19 on: April 16, 2013, 12:04:20 pm »
Your point about the limits of human vision is well received though, but then again, that's why intensity graded displays are so essential nowadays.

If you turn on infinite intensity mode (as you likely would to detect glitches etc) then you can walk away for a week and come back and your glitch will still be there displayed.
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #20 on: April 16, 2013, 12:08:10 pm »
Yes, they use their own ASIC. But that doesn't mean Agilent hasn't build up their own GPU technology (which for a scope could me much simpler than for many other embedded applications anyways), or licensed any of the available GPU IP cores from other parties.

Almost by definition, that is a GPU inside the Agilent ASIC; it's essentially a Graphics Processing Unit. What's your point?
I think it's very unlikely they have used an off-the-shelf GPU architecture; it would be purpose-written Verilog/VHDL to do the specific task and nothing more.
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
Re: Why is 1M waveforms/sec so impressive?
« Reply #21 on: April 16, 2013, 12:17:04 pm »
As mentioned already, tests have shown pilots identifying planes (i.e. making a distinction between a non-airplane shape and an airplane shape) in hundredths of seconds.

And I can make a distinction between an LED that flashes for 1 millionth of a second and one that doesn't flash at all. That doesn't mean I can process 1 million images a second. 

The eye is an integrator, and that integration is what lets me see a 1 µs flash long enough to perceive it. That integration limits the eye's bandwidth to a few tens of Hz, and nature is smart enough not to create a brain which can process information faster than its sensors can acquire it.
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 646
  • Country: be
Re: Why is 1M waveforms/sec so impressive?
« Reply #22 on: April 16, 2013, 12:22:05 pm »
We know that the scope is fast enough to capture the data (2 GS/s really is put into memory).
The obvious problem is processing the data.
A 60 Hz screen means that every 1/60 s you want a completely new update on the screen of everything that happened in that 1/60 s period, i.e. 2 GS / 60 = 33 MS (or MB); that is the amount you would need to process in 1/60 s, which is obviously a lot. Edit: or a processing speed of 2 GB/s for the CPU.

Skipping time is what is done to keep up.
But it also means that when your sample rate drops you have fewer samples to process, hence you may not have to skip any time. So a higher waveform rate means a faster/bigger processor, so you will have less skipped time at higher frequencies (but it likely also means a hotter, louder fan).
« Last Edit: April 16, 2013, 01:26:41 pm by KedasProbe »
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Why is 1M waveforms/sec so impressive?
« Reply #23 on: April 16, 2013, 12:23:39 pm »
And I can make a distinction between an LED that flashes for 1 millionth of a second and one that doesn't flash at all. That doesn't mean I can process 1 million images a second. 

Who said it could, would, or should? Go back and re-read the rest of my post, since you seem to have missed the point.

That integration limits the eye's bandwidth to a few tens of Hz and nature is smart enough not to create a brain which can process information faster than its sensors can acquire it.

So the research involving pilots - where they could 'process the information' that another plane was nearby in hundredths of a second - is incorrect?
« Last Edit: April 16, 2013, 12:34:54 pm by marmad »
 

Offline Gunb

  • Regular Contributor
  • *
  • Posts: 221
  • Country: de
Re: Why is 1M waveforms/sec so impressive?
« Reply #24 on: April 16, 2013, 01:05:16 pm »
Your point about the limits of human vision is well received though, but then again, that's why intensity graded displays are so essential nowadays.

If you turn on infinite intensity mode (as you likely would to detect glitches etc) then you can walk away for a week and come back and your glitch will still be there displayed.

Exactly, most important remark of this thread.

The likelihood of catching a rare glitch is independent of the screen refresh rate; it depends on the waveform capture rate. Once caught, stored in the screen buffer, and with intensity set to infinite, it will appear on screen at 60 Hz.

Even with a 25 Hz screen and the same wfm/s, the likelihood of catching the glitch would be the same; only the screen flicker would be higher.
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #25 on: April 16, 2013, 01:36:29 pm »
The likelyhood to catch a rare glitch is independent from the screen refresh rate, it depends on the waveform capture rate. Once catched and stored in screen buffer and intensity set to infinite, it will appear on screen with 60Hz.

Yes, the screen refresh rate is a red herring, it's got nothing really to do with the O/P's question. To answer the O/P question, 1M waveform/sec is impressive because it allows you to potentially capture a random glitch, on average 1000 times faster than a scope with only 1K waveform/sec.
http://cp.literature.agilent.com/litweb/pdf/5989-7885EN.pdf
I have demoed this in a video.
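As a purely illustrative back-of-the-envelope (assumed numbers, not figures from the app note): say the acquisition window at the chosen timebase is about 1 µs and a glitch occurs randomly about once a second. The fraction of time the scope is actually acquiring, and hence how long you wait on average to catch the glitch, then scales directly with the waveform rate:

Code: [Select]
#include <stdio.h>

int main(void)
{
    const double window_s     = 1e-6;   /* assumed 1 us acquisition window         */
    const double glitch_per_s = 1.0;    /* assumed: glitch occurs ~once per second */
    const double rates[2]     = { 1e3, 1e6 };

    for (int i = 0; i < 2; i++) {
        double duty = rates[i] * window_s;   /* fraction of time spent acquiring */
        if (duty > 1.0) duty = 1.0;
        printf("%8.0f wfm/s: acquiring %5.1f%% of the time, ~%.0f s on average to catch it\n",
               rates[i], duty * 100.0, 1.0 / (glitch_per_s * duty));
    }
    return 0;
}

With these numbers the 1k wfm/s scope is blind 99.9% of the time and takes on the order of 1000 s to catch the glitch, while the 1M wfm/s scope catches it in about a second, which is where the 1000x factor comes from.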
 

Offline Wuerstchenhund

  • Super Contributor
  • ***
  • Posts: 3088
  • Country: gb
  • Able to drop by occasionally only
Re: Why is 1M waveforms/sec so impressive?
« Reply #26 on: April 16, 2013, 02:02:07 pm »
Almost by definition, that is a GPU inside the Agilent ASIC, it's essentially a Graphic Processing Unit. What's your point?

The point is that you can either have a GPU, or do all the calculations in software on a general-purpose CPU, which is slower. GPUs can help with many of the calculations in a modern scope, but they come at a price.

Quote
I think it's very unlikely they have used an off-the-shelf GPU architecture, it would be purpose written verilog/VHDL to do the specific task and nothing more.

Not necessarily, as using a common GPU architecture (even if it's a relatively simple one) would bring certain advantages in terms of functional expandability. It can also bring cost benefits by not having to re-invent the wheel.

But at the end of the day we will never know as I'm sure Agilent won't tell us.
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 646
  • Country: be
Re: Why is 1M waveforms/sec so impressive?
« Reply #27 on: April 16, 2013, 02:10:14 pm »
I do wonder if it wouldn't be more useful to specify a scope's maximum sample rate without skipped time.
Then, if you set your scope to sample at twice that speed, you know you are missing at least half of it.

Edit: showing the % dead time on the scope screen would be nice (although I doubt they will do that).
« Last Edit: April 16, 2013, 02:29:22 pm by KedasProbe »
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Online EEVblog

  • Administrator
  • *****
  • Posts: 37787
  • Country: au
    • EEVblog
Re: Why is 1M waveforms/sec so impressive?
« Reply #28 on: April 16, 2013, 02:21:41 pm »
Not necessarily, as using a common GPU architecture (even if it's a relative simple one) would bring certain advantage in terms of functional expandability. It can also bring cost benefits in not having to re-invent the wheel.
But at the end of the day we will never know as I'm sure Agilent won't tell us.

This is the 4th-generation MegaZoom ASIC, so it would almost certainly build upon technology from previous generations.
http://www.hit.bme.hu/~papay/edu/Lab/MegaZoom.pdf
Previous versions went through the CPU to the display, but the latest one doesn't; it uses what Agilent calls the "plotter".


It's almost certainly custom rather than some off-the-shelf GPU core. It is also the reason why the Agilent can't resize the display window etc.; it's a fixed size and location in the custom display logic. That crippled the 4000 series too, which used the bigger screen but the same ASIC chip.
 

Offline Gunb

  • Regular Contributor
  • *
  • Posts: 221
  • Country: de
Re: Why is 1M waveforms/sec so impressive?
« Reply #29 on: April 16, 2013, 02:34:28 pm »
I do wonder if it's not more useful to provide the max sample speed without skipped time of a scope.
Then if you set your scope to sample at twice the speed you know you are missing at least half of it.

One thing is independent of the other. Using an ADC with twice the speed only takes twice as many samples as before - nothing else. Assuming the timebase is the same, you've only got higher horizontal (at least graphical) resolution of the same waveform, but the dead time between subsequent waveforms remains the same - nothing gained.

wfm/s cannot be replaced with a higher ADC sample rate.

« Last Edit: April 16, 2013, 02:36:38 pm by Gunb »
 

Offline KedasProbe

  • Frequent Contributor
  • **
  • Posts: 646
  • Country: be
Re: Why is 1M waveforms/sec so impressive?
« Reply #30 on: April 16, 2013, 03:13:35 pm »
You are right that nothing else changes, including the processing power.
Hence the CPU can't keep up and you start to get dead time.
Not everything that counts can be measured. Not everything that can be measured counts.
[W. Bruce Cameron]
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6726
  • Country: nl
Re: Why is 1M waveforms/sec so impressive?
« Reply #31 on: April 16, 2013, 03:18:07 pm »
AFAICS there simply is too little demand for high waveform/s capture and some type of histogram display on the low end ... the purchasers of low end oscilloscopes are not discerning enough.

It's a shame too, the front ends are getting pretty decent ... but the DSP is atrocious.
 

Offline marmad

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Re: Why is 1M waveforms/sec so impressive?
« Reply #32 on: April 16, 2013, 03:38:16 pm »
I have to say, while I can appreciate the technological achievement of 1M wfm/s, the recent posts by kg4arn showing the X3000's zoomed display of undersampled waveforms make me wonder what price is being paid for that speed.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8518
  • Country: us
    • SiliconValleyGarage
Re: Why is 1M waveforms/sec so impressive?
« Reply #33 on: April 16, 2013, 03:38:53 pm »
These machines don't use GPUs. Not needed. The approach is completely different. You don't need shaders and polygon stuff; it's not a gaming machine. What you do need is a 2D image composer with high bandwidth. You can't cram that through PCI, you'd bog the system down.

I have had a technical explanation of how they do these things. What I am explaining now is based on the Infiniium architecture (and the older 546xx series), in essence the MegaZoom system.

1) Let's assume we have infinite acquisition memory. We have a converter that digitizes the incoming signal at a fixed speed (as hard as it is designed to go), let's say 1 gigasample per second.
There is a trigger comparator that looks at the data coming from the converter. On 'trigger', this logic block resets a counter.
The counter controls the address lines of this infinite memory.

So now you have a basic acquisition system. We have a valid trigger event: reset the counter; at every subsequent clock tick a sample is written to memory and the address counter is incremented, so the next sample falls in the next memory slot.

So far so good (forget about pretrigger and other stuff, I'll come back to that later...).
This memory has a second address bus and a second data bus. This port allows random access to the samples in the memory and runs at a much lower speed. This kind of memory is called a FISO: fast in, slow out. It's not a FIFO! Data will be overwritten when a trigger comes, and the output is not sequential as in a FIFO but random as in normal memory. The Agilent scopes today still have this FISO memory; it is on board the A/D converter and runs at full throttle, pure static RAM. The back-end pipe goes to the slower DDR RAM (DDR can't follow the breakneck speed the front end runs at).

2) We have a display system. The display is an LCD that refreshes itself at a reasonably low speed: 60 times a second it can refresh its pixels.

The display gets its data from an image buffer. This buffer is dual-ported, which means that the block filling the buffer does not have to wait for the display.

The image buffer itself gets its data from an image composer, and this is where the magic happens.

3) The image composer behaves like a stack of transparencies. Remember the plastic sheets used on an overhead projector: you draw on them with colored pens, and you can put one or more transparencies on top of each other to create a composite image. That is exactly what the image composer does. It has drawing 'planes':
one plane per channel (4 channels is 4 planes), 1 plane for static images like the graticule, 1 plane for quasi-static text, 1 plane for the graphical menu system, and so on.

What you see through the projector is the composite image.

The CPU plots its text and buttons on the allocated planes. It never gets to see the trace data, as that is drawn on other planes by the trace plotter.

4) The trace plotter. For clarity and ease of understanding, let's assume our viewport on the LCD is 16 pixels wide. I know, very bad resolution, but this is just for explanation.

There are two things: timebase and time offset. The timebase dictates how far apart the points are from each other; the time offset dictates how far from the trigger point we begin.
If I display at the fastest timebase with zero time offset, I need to show samples 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 and I'm done.
If I scroll in time, I show samples 25,26,27 ... up to (25+16).

If I slow the timebase down one step, the distance between samples changes:
I now show samples 0,2,4,6,8 ...
Take it down another notch and it becomes 0,5,10,15,20,25 ... and so on.
If I scroll now, it could be samples 27, 27+5, 27+10, 27+15 ... up to 27 + (16x5).

So the image composer goes to the FISO and grabs only the samples it is interested in. It does not care about the rest! The sample value gives it the vertical position on the screen where the pixel needs to be plotted.

It puts a byte there with value 255 (max intensity).
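A small sketch of that plotter step, using the same 16-pixel example (illustrative only, a simplification rather than Agilent's actual logic):

Code: [Select]
#include <stdint.h>

#define VIEW_W 16                              /* the 16-pixel example viewport     */

void plot_viewport(const uint8_t *fiso,        /* acquisition memory (FISO port)    */
                   uint8_t plane[256][VIEW_W], /* one channel's drawing plane       */
                   int start, int stride)      /* time offset and timebase stride   */
{
    for (int x = 0; x < VIEW_W; x++) {
        uint8_t s = fiso[start + x * stride];  /* grab only the samples of interest */
        plane[s][x] = 255;                     /* sample value = vertical position  */
    }
}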

5) The grader. This block does the image decay. A sample is a point taken from memory and plotted at max intensity. On every sweep of the LCD refresh, all the pixels on the plane are decremented in value (intensity!, NOT position!).
That is how you create the afterglow.
Control the speed of the decay (how many times a second you decrement and by what value you decrement) and you control the image persistence.
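And the matching grader step for the same sketch (again illustrative):

Code: [Select]
/* On every LCD refresh, decrement every pixel's intensity on the plane
 * (never its position); the step size and how often this runs set the
 * persistence. */
void grade_decay(uint8_t plane[256][16], uint8_t step)
{
    for (int v = 0; v < 256; v++)
        for (int x = 0; x < 16; x++)
            plane[v][x] = (plane[v][x] > step) ? (uint8_t)(plane[v][x] - step) : 0;
}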

And there you have it: the basics of the MegaZoom system.

There is more at play. The MegaZoom actually interpolates between groups of pixels so you don't miss spikes.

The memory is not infinite, but it is sized in such a way that the A/D sampling speed does not need to come down for a long time. The deeper the memory, the longer you can go flat out. Once you hit slower timebase speeds you hit the end of memory before a screen refresh, and then you need to throttle down the A/D.

The image composer keeps writing as hard as it can to the plane. Even if the LCD can only refresh 60 times a second, the drawing plane can collect millions of waveforms. So that is what they mean by the waveform rate: how many waveforms can we overlay on each other in between LCD refresh cycles. The more you can write there, the bigger the chance that one waveform will be the glitch you were looking for. On the next LCD refresh it gets shown on screen and then the decay kicks in.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline jpb

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
Re: Why is 1M waveforms/sec so impressive?
« Reply #34 on: April 16, 2013, 03:44:15 pm »
AFAICS there simply is too little demand for high waveform/s capture and some type of histogram display on the low end ... the purchasers of low end oscilloscopes are not discerning enough.

It's a shame too, the front ends are getting pretty decent ... but the DSP is atrocious.
I don't think it is a case of not being discerning - it is more a matter of lack of choice and trade-offs. Agilent have high waveforms/s but are expensive (for purchasers of low-end scopes), so if you have a limited budget you've got to decide between different scope features, and waveforms/s is only one of them. The 2000X series was cut down too far in terms of features, and Agilent now seem to realise this and are upgrading it with more memory and serial decodes as options. The 3000X series is very nice but two or three times the price.

Waveforms per second as a feature is difficult to assess. Agilent push it quite hard as it is one of their selling points, but even at 1M WF/s the scope is blind most of the time at faster timebases, so in app notes the rate of occurrence of rare events has to be chosen to be infrequent enough for other scopes not to spot them but frequent enough to be seen at 1M WF/s. It is difficult to get objective information as to how often the feature is really needed.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6726
  • Country: nl
Re: Why is 1M waveforms/sec so impressive?
« Reply #35 on: April 16, 2013, 04:12:44 pm »
Just to get a feel for the numbers, I'm going to quickly calculate the necessary bandwidth to get limit-case waveform capture on an 8-bit 1 GS/s scope for the simplest type of display (i.e. completely binary, no ability to do any kind of histogram calculation). AFAICS, for every sample you need to do a RMW on a 256-bit word, with each bit indicating whether a signal was ever present at that amplitude at that moment in time after the trigger. So total memory bandwidth 64 GB/s, and with access being completely linear, any type of commodity DRAM can be used ... have I got that right?

Quite a bit of bandwidth, but doable.
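For concreteness, the per-sample update described above might look like this (a sketch under the same assumptions; the trace length and data layout are made up for illustration):

Code: [Select]
#include <stdint.h>

#define TRACE_LEN 1000                     /* samples kept per trigger (assumed)  */

static uint64_t column[TRACE_LEN][4];      /* 4 x 64 bits = 256 bits per position */

void accumulate(int t, uint8_t adc_code)   /* t = samples since the trigger       */
{
    /* read-modify-write: set the one bit for this amplitude at this time */
    column[t][adc_code >> 6] |= 1ULL << (adc_code & 63);
}

At 1 GS/s that is roughly 32 GB/s of reads plus 32 GB/s of writes, i.e. the 64 GB/s figure, and the access pattern is linear in t.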
 

Offline Gunb

  • Regular Contributor
  • *
  • Posts: 221
  • Country: de
Re: Why is 1M waveforms/sec so impressive?
« Reply #36 on: April 16, 2013, 04:24:25 pm »
It is difficult to get objective information as to how often the feature is really needed.

Exactly. I would say higher wfm/s is more wanted than needed.

Many scopes have only 2000-4000 wfm/s, but they've got plenty of the measurement features you actually need rather than a higher wfm/s. That's the reason why the Hameg HMO beats the Rigol DS4000. I've bought both scopes, but wfm/s wasn't an issue at all, rather a nice-to-have add-on.
 

