Author Topic: How good of a DMM or Oscope could you make with modern CPU or GPU ?  (Read 2780 times)


Offline MathWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 1426
  • Country: ca
If you take any top CPU or GPU chip, like a 12900K or an RTX 3090 (plus a lot more than just the chip), and set it up to work as the core of a DMM or oscilloscope... not in Windows, I mean if you designed the DMM/scope with one of those as the core. How good could it be?

They all love crunching numbers, but I know CPUs and GPUs don't work exactly the same, nor do DMMs and scopes.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11248
  • Country: us
    • Personal site
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #1 on: June 26, 2022, 03:11:35 am »
For a DMM it would be total overkill. A regular MCU is plenty for even the most advanced DMM with graphics capabilities.

For the scope - sure, why not? Replacing Windows does not buy you much, so you might as well keep it (or replace it with Linux, or any other OS). All you need in that case is a PCIe ADC board. But this is not new; scopes have been designed like this for decades. You won't get a huge boost in practical performance though. The modern design with SoC+FPGA is pretty optimal. If anything, performance scales much better with a larger FPGA or a secondary SoC driving the UI independently of the acquisition.

And power consumption of that PC would be through the roof. And EMI would be a nightmare as well.
Alex
 
The following users thanked this post: Someone

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16611
  • Country: us
  • DavidH
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #2 on: June 26, 2022, 03:44:39 am »
DMM sample rates are not high enough to require that sort of processing power. Even at thousands of high-resolution samples per second, real-time FFTs can be done with a battery-powered embedded processor.

We already have modern DSOs which use that sort of processing power to generate real-time displays at tens of gigasamples per second. At a lower sample rate, decimation could be done by the processor in real time instead of an FPGA. The chief limitation on performance is level 1 and level 2 cache bandwidth, with the record size likely limited to a large fraction of the level 2 cache size.
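To make the decimation step concrete, here is a minimal Python sketch (illustrative only, not taken from any real scope firmware) of min/max decimation, which preserves narrow glitches that plain subsampling would drop:

```python
# Min/max decimation: reduce a sample record by `factor` while keeping
# the extremes of each block, so 1-sample glitches survive on screen.
def minmax_decimate(samples, factor):
    """Collapse each block of `factor` samples into a (min, max) pair."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        block = samples[i:i + factor]
        out.append((min(block), max(block)))
    return out

# A single-sample spike survives decimation by 4:
record = [0, 0, 0, 0, 0, 0, 127, 0, 0, 0, 0, 0]
print(minmax_decimate(record, 4))  # [(0, 0), (0, 127), (0, 0)]
```

A hardware implementation performs the same per-block comparisons on the fly; the relevant point for a CPU is the memory-access pattern, since every sample must be read once.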
 

Offline robert.rozee

  • Frequent Contributor
  • **
  • Posts: 279
  • Country: nz
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #3 on: June 26, 2022, 05:14:31 am »
i suspect the original poster is thinking of something like this, scaled up:
https://www.eevblog.com/2019/11/13/eevblog-1260-70-100mhz-oscilloscope/

ie, a simple front-end, a cheap ADC, and a processor with heaps of grunt to do all the heavy lifting. in a sense, a bit like how an SDR (software-defined radio) works, where the analog hardware is minimal and computing power handles everything else.

re-framing the question: given a modern-ish CPU coupled closely to an ADC and a low-complexity front-end, what sort of actual performance could be attained; what would be the realistic top sample rate, and realistic bandwidth?

let's say one started with a Raspberry Pi 3, for instance, and added on a PCB containing 2 channels of input and a pair of ADCs, with a maximum parts cost of, say, $20. what could be achieved?


cheers,
rob   :-)
« Last Edit: June 26, 2022, 07:02:47 am by robert.rozee »
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11248
  • Country: us
    • Personal site
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #4 on: June 26, 2022, 05:34:57 am »
You can develop all sorts of theories of what could be done with this architecture, but you won't get anywhere close to the performance of a low-end "real" scope.

The most significant performance bottleneck (even in the case of that FNIRSI scope) is getting the samples into memory and processing them for a trigger condition. Once a trigger is found, you can take your time processing the data. Performance of the x86 does not help here at all. You have to "look" at all samples. And the cheapest FPGA would do that on the fly with no additional overhead.

And in the case of the RPi, your first issue would be how to get the samples into memory. The 40-pin connector does not have any interfaces that would be fast enough. The camera interface may be abused for that, but the performance would still be pretty limited.
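To illustrate what "looking at all samples" means in software, here is a toy rising-edge trigger search in Python (a hypothetical sketch; a real scope performs this comparison in dedicated logic as samples arrive):

```python
# Toy rising-edge trigger: the CPU must touch every sample to find the
# first crossing of `level`, which an FPGA does concurrently for free.
def find_trigger(samples, level):
    """Return the index of the first rising crossing of `level`, else None."""
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            return i
    return None

samples = [10, 12, 11, 40, 90, 80, 85]
print(find_trigger(samples, 50))  # 4
```

At gigasamples per second this linear scan alone saturates a desktop core, which is why even cheap scopes put the trigger comparison in hardware.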
« Last Edit: June 26, 2022, 05:36:31 am by ataradov »
Alex
 
The following users thanked this post: Someone, 2N3055

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4527
  • Country: au
    • send complaints here
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #5 on: June 26, 2022, 05:51:44 am »
The basic challenge for commodity computer hardware is first bus throughput (how many consumer interfaces carry 4 channels of 10 GS/s data? That's 320 Gbit per second), and then what to do with that torrent of data... in short, commodity CPUs or GPUs make for really crap scopes compared to task-specific ASIC or FPGA designs.
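The arithmetic behind that 320 Gbit figure, assuming 8-bit samples, is simple (a quick Python check):

```python
# Raw data rate of a 4-channel, 10 GS/s, 8-bit acquisition front end.
channels = 4
sample_rate = 10e9         # samples per second, per channel
bits_per_sample = 8
gbit_per_s = channels * sample_rate * bits_per_sample / 1e9
print(gbit_per_s, "Gbit/s")   # 320.0 Gbit/s
```

For comparison, a PCIe 4.0 x16 link signals at 16 GT/s per lane, roughly 256 Gbit/s raw before encoding and protocol overhead, so even the fastest mainstream consumer bus cannot sustain that stream.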
 
The following users thanked this post: Fungus, 2N3055

Online AVGresponding

  • Super Contributor
  • ***
  • Posts: 4661
  • Country: england
  • Exploring Rabbit Holes Since The 1970s
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #6 on: June 26, 2022, 08:07:59 am »
How are you going to compensate for a shit cheap analogue front end with a fast CPU?   :-//

If it were that easy Uni-T would already have a model to compete with a 3458A.
nuqDaq yuch Dapol?
Addiction count: Agilent-AVO-BlackStar-Brymen-Chauvin Arnoux-Fluke-GenRad-Hameg-HP-Keithley-IsoTech-Mastech-Megger-Metrix-Micronta-Racal-RFL-Siglent-Solartron-Tektronix-Thurlby-Time Electronics-TTi-UniT
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #7 on: June 26, 2022, 10:58:12 am »
The basic challenge for commodity computer hardware is first bus throughput (how many consumer interfaces carry 4 channels of 10 GS/s data? That's 320 Gbit per second), and then what to do with that torrent of data... in short, commodity CPUs or GPUs make for really crap scopes compared to task-specific ASIC or FPGA designs.
And yet that is exactly how LeCroy's higher-end scopes are built: a relatively dumb sampler, with all the post-processing done using CPU + GPU. With the ever-increasing bandwidth of SoC-based systems, it is not out of reach to build a low-cost scope without doing a lot of processing inside an FPGA. FPGAs take a lot of time to develop and are limited when it comes to resources.
« Last Edit: June 26, 2022, 11:05:57 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline robert.rozee

  • Frequent Contributor
  • **
  • Posts: 279
  • Country: nz
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #8 on: June 26, 2022, 11:04:14 am »
no matter what solution one comes up with, there will be an asian manufacturer somewhere who can produce something cheaper and better. but is that the point of the exercise? i see this more as a thought experiment... what can be achieved with minimal hardware? and within the scope of 'minimal hardware' one must be willing to include a PC (or similar) that can be purchased for just a few dollars.

at the 'bottom end', we know that you can build an oscilloscope of sorts using a modified sound card to achieve a sample rate of around 40ksps. folks have done this. how much higher can this simplistic approach be sensibly pushed? could one get to a 1MHz scope, 5MHz, 20MHz? what 20 or 30 years ago we would have commonly called an 'audio oscilloscope'.
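For reference, the theoretical Nyquist limits at a few hypothetical sample rates (a back-of-envelope Python sketch; usable analog bandwidth in practice sits well below half the sample rate once anti-alias filtering is accounted for):

```python
# Nyquist limits for the sound-card-style approach at assumed sample rates.
# Real usable bandwidth is well below fs/2 due to anti-alias filter rolloff.
rates = (40e3, 2e6, 10e6, 40e6)   # samples per second (hypothetical)
nyquist = {fs: fs / 2 for fs in rates}
for fs in rates:
    print(f"{fs / 1e6:6.2f} MS/s -> Nyquist limit {nyquist[fs] / 1e6:6.3f} MHz")
```

So reaching even a 20 MHz 'audio-style' scope already implies sampling at tens of megasamples per second, well beyond any sound-card front end.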


cheers,
rob   :-)
« Last Edit: June 26, 2022, 11:08:12 am by robert.rozee »
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6626
  • Country: hr
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #9 on: June 26, 2022, 07:04:58 pm »
The basic challenge for commodity computer hardware is first bus throughput (how many consumer interfaces carry 4 channels of 10 GS/s data? That's 320 Gbit per second), and then what to do with that torrent of data... in short, commodity CPUs or GPUs make for really crap scopes compared to task-specific ASIC or FPGA designs.
And yet that is exactly how LeCroy's higher-end scopes are built: a relatively dumb sampler, with all the post-processing done using CPU + GPU. With the ever-increasing bandwidth of SoC-based systems, it is not out of reach to build a low-cost scope without doing a lot of processing inside an FPGA. FPGAs take a lot of time to develop and are limited when it comes to resources.
LeCroy scopes have significant FPGA resources inside. The FPGA handles the trigger engine, decimation, corrections, etc.
It is measurements, display, and decodes that are done on the CPU... But they do more processing on the CPU than anybody else.
 
The following users thanked this post: Performa01

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4527
  • Country: au
    • send complaints here
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #10 on: June 26, 2022, 09:56:09 pm »
The basic challenge for commodity computer hardware is first bus throughput (how many consumer interfaces carry 4 channels of 10 GS/s data? That's 320 Gbit per second), and then what to do with that torrent of data... in short, commodity CPUs or GPUs make for really crap scopes compared to task-specific ASIC or FPGA designs.
And yet that is exactly how LeCroy's higher-end scopes are built: a relatively dumb sampler, with all the post-processing done using CPU + GPU. With the ever-increasing bandwidth of SoC-based systems, it is not out of reach to build a low-cost scope without doing a lot of processing inside an FPGA. FPGAs take a lot of time to develop and are limited when it comes to resources.
LeCroy scopes have significant FPGA resources inside. The FPGA handles the trigger engine, decimation, corrections, etc.
It is measurements, display, and decodes that are done on the CPU... But they do more processing on the CPU than anybody else.
Keysight and Tektronix also had scopes built around capturing the ADC data to a large FIFO in hardware and then processing on a desktop motherboard/CPU combo, all following the same pattern: deep memory, high blind time, tools focused on analysis.

This has come around before (same handles, same stupid arguments): https://www.eevblog.com/forum/testgear/oscilliscope-memory-type-and-why-so-small/?all
Except that in the 5 years since that thread, high-end scopes have moved more towards hardware/FPGA/ASIC processing and less towards the desktop CPU/GPU. Slow processing is clearly an issue that "faster" desktop computers aren't solving.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #11 on: June 26, 2022, 10:46:06 pm »
It wouldn't surprise me if that statement turns out to be completely wrong for modern-day high-end scopes. There is nothing (FPGA or high-end general-purpose processor) that beats a GPU when it comes to number-crunching performance per watt. Or maybe the oscilloscope makers have not discovered this yet, but at this point the Xilinx Zynq-based oscilloscopes are at their performance limits and there is no easy way to add extra performance at low cost. Improvement has to come from moving to a radically different architecture.
« Last Edit: June 26, 2022, 10:53:36 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #12 on: June 26, 2022, 10:49:51 pm »
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4527
  • Country: au
    • send complaints here
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #13 on: June 26, 2022, 11:25:49 pm »
This has come around before (same handles, same stupid arguments): https://www.eevblog.com/forum/testgear/oscilliscope-memory-type-and-why-so-small/?all
Lol
It wouldn't surprise me if that statement turns out to be completely wrong for modern-day high-end scopes. There is nothing (FPGA or high-end general-purpose processor) that beats a GPU when it comes to number-crunching performance per watt.
Nothing? It's trivial to find comparisons showing that GPUs are less energy-efficient than FPGAs at numerous (indeed most) tasks:
"Comparing Energy Efficiency of CPU, GPU and FPGA Implementations for Vision Kernels", 2019, 10.1109/ICESS.2019.8782524
and that's when both are coded in a high-level language that has had a focus on GPU processing. There are some tasks where one platform or the other is the obvious leader, and streaming data (ADC data correction, triggering, filtering... scope functions) is where FPGAs are particularly well suited and efficient.

If you're so smart and GPUs are the answer, why has the scope market moved to FPGAs and ASICs even more than in the past?
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16646
  • Country: 00
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #14 on: June 27, 2022, 02:43:30 am »
That's like asking whether the chips inside an oscilloscope would make a good graphics card.

At the end of the day they both have lots of processing power, but they're both specialized for a particular job. It doesn't follow that they'd be good at doing other things.

« Last Edit: June 27, 2022, 02:46:58 am by Fungus »
 
The following users thanked this post: tooki

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14464
  • Country: fr
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #15 on: June 27, 2022, 03:07:45 am »
The basic challenge for commodity computer hardware is first bus throughput (how many consumer interfaces carry 4 channels of 10 GS/s data? That's 320 Gbit per second), and then what to do with that torrent of data... in short, commodity CPUs or GPUs make for really crap scopes compared to task-specific ASIC or FPGA designs.
And yet that is exactly how LeCroy's higher-end scopes are built: a relatively dumb sampler, with all the post-processing done using CPU + GPU. With the ever-increasing bandwidth of SoC-based systems, it is not out of reach to build a low-cost scope without doing a lot of processing inside an FPGA. FPGAs take a lot of time to develop and are limited when it comes to resources.
LeCroy scopes have significant FPGA resources inside. The FPGA handles the trigger engine, decimation, corrections, etc.
It is measurements, display, and decodes that are done on the CPU... But they do more processing on the CPU than anybody else.

Yeah.

Even so, most scopes these days do a lot in "hardware". Some with dedicated ASICs, some with just FPGAs and off-the-shelf parts.

Tektronix used to use x86 motherboards and Windows for the GUI at some point, like in the TDS510x series.
These days, ARM SoCs are common and powerful enough, in combination with the above hardware, to handle even a sophisticated scope.
A very common approach, at least in low- and mid-range scopes, is to use ARM-core(s)+FPGA hybrids such as the Xilinx Zynq parts.

Scopes requiring more graphics power can use ARM SOCs with an integrated GPU.

Approaches vary slightly, at least for the higher-end scopes, depending on the vendor, both for performance and for marketing reasons.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16611
  • Country: us
  • DavidH
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #16 on: June 27, 2022, 03:16:40 am »
re-framing the question: given a modern-ish CPU coupled closely to an ADC and a low-complexity front-end, what sort of actual performance could be attained; what would be the realistic top sample rate, and realistic bandwidth?

let's say one started with a Raspberry Pi 3, for instance, and added on a PCB containing 2 channels of input and a pair of ADCs, with a maximum parts cost of, say, $20. what could be achieved?

The problem is not processor power. Real-time DSOs have been implemented with 16-bit 8088-class processors. Today even a mediocre ARM has two orders of magnitude better performance.

The problem is capturing the high-sample-rate acquisition record into memory where the processor can get at it. Most processors either lack a suitable interface, or the external interface logic is so complex that there is no reason not to move decimation and other functions into the same dedicated external logic, which is what early DSOs did and what most modern DSOs still do.

So find a processor that can interface directly to two or more 8-bit ADCs at hundreds of megasamples per second each. Does anybody make a PCIe ADC?
 

Offline thm_w

  • Super Contributor
  • ***
  • Posts: 6359
  • Country: ca
  • Non-expert
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #17 on: June 27, 2022, 11:33:31 pm »
Any sort of decent PCIe ADC I can find is basically a scope on a card with an FPGA already, or has a very low sample rate, etc.

https://acqiris.com/8-bit-adc-card/
https://www.ni.com/en-ca/shop/hardware/products/pxi-analog-input-module.html (probably won't fit in a normal PC, but same idea)
https://www.ni.com/en-ca/shop/pxi/pxi-modules.html
https://www.acquitek.com/product/px1500-2-2/
https://spectrum-instrumentation.com/products/families/22xx_m4i_pci.php

The last one is sort of relevant to OP, as it says "Direct data transfer to CUDA GPU using SCAPP option"

Then of course there is the issue that it's going to be a niche card, which means it's expensive, already more so than an equivalent full scope.
The idea is good, but I can't see it being executed on any normal budget.

The closest execution might be those low-cost VGA USB3 devices; I think you could use those as high-speed DACs.
Profile -> Modify profile -> Look and Layout ->  Don't show users' signatures
 

Offline MathWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 1426
  • Country: ca
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #18 on: July 22, 2022, 04:59:13 am »
I forgot about this thread; lots of interesting answers. So a desktop CPU/GPU just isn't built for making the best scope or DMM. And if a dream team at Intel decided to make a scope/DMM, they would probably just cook up new chips instead.

And yeah, I'm impressed by the speed of the latest PC tech, but I forget there are multi-GHz oscilloscopes that cost more than most houses in Canada.

I'm just reading the thread "Oscilloscope memory, type and why so small?", and it reminds me of some user-interface limitations that DMMs and scopes still have vs. the entire PC package, or even a laptop. All my Siglent stuff can be plugged into the PC, but it's a long way from Siglent saying, let's make a scope/DMM/PSU/AWG/etc., all in one box like a PC, with a big screen, KB/mouse, and a full GUI, for those who want more than command-line stuff.


I'm sure the tech people from Intel/AMD meet up with the tech people from Fluke, Tektronix, etc., and think about this too.
 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 6459
  • Country: de
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #19 on: July 24, 2022, 06:31:30 am »
And if a dream team at Intel decided to make a scope/DMM, they would probably just cook up new chips instead.

Or they might decide to buy an FPGA company like Altera. Oh, wait...


(No, I don't think Intel bought them because they plan to enter the scope market. ;)  But they obviously realize that there are things that an FPGA does much better than a CPU or GPU. I wonder whether we should expect SoCs with an Intel CPU and Altera FPGA core at some point? The current "Intel" SoCs are Altera designs using ARM CPU cores, of course.)
« Last Edit: July 24, 2022, 06:39:02 am by ebastler »
 
The following users thanked this post: Someone, 2N3055

Online switchabl

  • Frequent Contributor
  • **
  • Posts: 439
  • Country: de
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #20 on: July 24, 2022, 08:50:28 am »
All my Siglent stuff can be plugged into the PC, but it's a long way from Siglent saying, let's make a scope/DMM/PSU/AWG/etc., all in one box like a PC, with a big screen, KB/mouse, and a full GUI, for those who want more than command-line stuff.

That sounds a lot like the NI VirtualBench. But that is really a niche product, aimed mostly at the educational market. I am pretty sure most engineers still prefer dedicated instruments with knobs and buttons to one big screen + mouse for bench use.

ATE is a different story, but that is why PXI(e) systems exist (essentially a chassis with a built-in PC and any number of instrument modules connected through PCIe).

Also, based on previous experience with PC-based test equipment, the cynic in me wants to say that the answer to the original question is: a big, loud, expensive one that always wants to install some Windows updates.
« Last Edit: July 24, 2022, 09:36:33 am by switchabl »
 

Online artag

  • Super Contributor
  • ***
  • Posts: 1070
  • Country: gb
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #21 on: July 24, 2022, 10:04:53 am »
Kind of off-topic, but one aspect of PC-driven oscilloscopes is their poor update rate. Standalone instruments make the number of waveforms/second a major feature, both because it gives a better visual imitation of an analogue oscilloscope and because it influences the trigger/rearm timing.

You can achieve equivalent trigger performance internally and update the screen at a communications-limited rate. The data can often be sent to the PC compressed, and consider that most PCs have either hardware acceleration for MPEG-encoded data or very effective software (including GPU) decoders.

Why not make use of this capability and send the data as MPEG? The images on a scope screen are very simple compared with a full-colour real-life image, and I'm assuming they'll encode correspondingly fast.
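Rough numbers for the raw screen stream such a codec would be compressing (the display resolution and compression ratio here are assumptions for illustration only):

```python
# Raw bit rate of an uncompressed scope-screen video stream,
# assuming a hypothetical 800x480 display at 60 fps, 24-bit colour.
width, height, fps, bits_per_pixel = 800, 480, 60, 24
raw_mbit = width * height * fps * bits_per_pixel / 1e6
print(f"raw: {raw_mbit:.0f} Mbit/s")

# Mostly-static trace-on-graticule frames compress far better than
# natural video; even a modest assumed 100:1 ratio fits the stream
# comfortably within USB 2.0 throughput.
print(f"100:1 compressed: {raw_mbit / 100:.1f} Mbit/s")
```

The point is only scale: a few Mbit/s of compressed screen data is trivial to move, while the underlying acquisition stream is not.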
 
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #22 on: July 24, 2022, 10:33:16 am »
Turn that thought around: just as with MPEG decoding, a GPU can also build a scope screen from acquired data really fast. The 'trick' is to get the data into the GPU. What is basically needed is a way to feed the acquired data from the ADC into GPU memory through PCI Express.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 6459
  • Country: de
Re: How good of a DMM or Oscope could you make with modern CPU or GPU ?
« Reply #23 on: July 24, 2022, 11:12:03 am »
Well, and the acquisition system must comprise not just the ADC and a memory buffer, but the trigger logic as well. As pointed out by others above, deciding in real time when to store and display data is the Achilles heel of a purely CPU based design.
 

