Author Topic: Smartphone CPUs in digital oscilloscopes?  (Read 1180 times)


Offline Scratch.HTF

  • Contributor
  • Posts: 35
  • Country: au
Smartphone CPUs in digital oscilloscopes?
« on: March 08, 2018, 11:13:16 pm »
You might have noticed that the latest smartphone CPUs are quite powerful (typically even more powerful than the ones used in the Micsig tablet oscilloscopes), and I am curious whether there are any digital oscilloscopes which use smartphone CPUs such as the MediaTek Helio or the HiSilicon Kirin.
Given that digital oscilloscopes do a LOT of calculations at a high rate (especially with 1M+ point FFT), it makes sense to use a powerful smartphone CPU.
And if it runs Linux, there is some hackability in it too.
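For a rough sense of the math load, a hedged back-of-envelope sketch in Python (NumPy assumed available; the 1M-point figure comes from the post above, and the operation count is the textbook N·log2(N) estimate, not a benchmark of any real scope):

```python
import numpy as np

# A 1M-point FFT, the kind of per-update math mentioned above.
N = 1 << 20                         # 1,048,576 points
x = np.random.randn(N).astype(np.float32)
X = np.fft.rfft(x)                  # real-input FFT: N/2 + 1 complex bins

# A radix-2 FFT costs on the order of N * log2(N) butterfly operations,
# and a scope wants to redo this many times per second.
ops = N * int(np.log2(N))
print(len(X), ops)                  # 524289 bins, ~21 million operations
```

Sustaining even a few of these per second, on top of acquisition and display, is where the demand for CPU horsepower comes from.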
 

Offline TheUnnamedNewbie

  • Frequent Contributor
  • **
  • Posts: 524
  • Country: be
  • Sending EM through plastic.
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #1 on: March 08, 2018, 11:21:20 pm »
Quote from: Scratch.HTF
Given that digital oscilloscopes do a LOT of calculations at a high rate (especially with 1M+ point FFT), it makes sense to use a powerful smartphone CPU.
Don't quote me on this, but I doubt a smartphone CPU could pull it off, even for the low-end oscilloscopes. The low and mid-range scopes tend to use FPGAs to do the signal processing, the high end stuff uses a combination of custom ASICs and FPGAs to do it.

They then use a lower-power embedded CPU to do the graphical stuff. In higher-end models you start to see Intel i5/i7 processors, since those scopes are built on top of the Windows operating system.

Most cheap scopes seem to be based on something like the ZYNQ chips - FPGA and ARM in one.

Phone CPUs have a bunch of stuff you don't need in a scope (I would imagine things like the connectivity features, GPUs for "gaming" etc).
The best part about magic is when it stops being magic and becomes science instead
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 10776
  • Country: gb
    • Mike's Electric Stuff
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #2 on: March 08, 2018, 11:26:48 pm »
Phone CPUs are likely to have short lifetimes and go obsolete frequently due to the nature of the phone market. Scopes have a much longer product lifecycle, and the scope market is too small for any phone chip manufacturer to keep making chips just for it.
Youtube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 1096
  • Country: si
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #3 on: March 08, 2018, 11:37:40 pm »
These CPUs stop being manufactured as soon as all the major phones that use them end production, and that's something like one to two years for phones.

Also, phone CPUs are specifically designed to have the peripherals needed in a phone. You get things like dual CSI interfaces for the front and back cameras and display output over MIPI DSI, but it can be annoying to move a lot of data in and out of them, as there are no really high-speed general-purpose interfaces.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 3337
  • Country: gb
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #4 on: March 08, 2018, 11:49:29 pm »
Quote from: mikeselectricstuff
Phone CPUs are likely to have short lifetimes and go obsolete frequently due to the nature of the phone market - scopes have a much longer product lifecycle, and is too small for any phone chip manufacturer to keep making them just for that.
The whole business model for things like phone SoCs, with a short life and a few big customers, is built around the nature of that specific market. Vendors make a new SoC and promote it directly to the relatively short list of potential customers. Where they get design wins, they shape their production around the needs of the customers they win. They need to predict the popularity of the phones three months ahead to achieve the right level of wafer starts, which can ramp really fast when the phones are hits, or go off like a damp squib. They can't afford to be too far off target with these predictions. Fail to supply parts on time for a hit product, and you'll struggle to get the customer to buy again. Overproduce, and you'll end up with a loss on the product. Later they need to predict the wind-down of phone production three months ahead, so they don't end up with a shed load of chips at the end of life. While playing this juggling act with big buyers, they have little interest in serving people in niches. It would just take their eyes off the goal.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 14131
  • Country: nl
    • NCT Developments
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #5 on: March 09, 2018, 05:03:23 am »
Quote from: Scratch.HTF
Given that digital oscilloscopes do a LOT of calculations at a high rate (especially with 1M+ point FFT), it makes sense to use a powerful smartphone CPU.
Quote from: TheUnnamedNewbie
Don't quote me on this, but I doubt a smartphone CPU could pull it off, even for the low-end oscilloscopes. The low and mid-range scopes tend to use FPGAs to do the signal processing, the high end stuff uses a combination of custom ASICs and FPGAs to do it.
I think the Xilinx Zynq based oscilloscopes from GW Instek and Siglent have proven that to be the wrong way of doing things. Development time for an FPGA or ASIC is long, and there are limits to what you can do in an ASIC or FPGA anyway. The next thing we'll probably see is oscilloscopes which use a SoC with ARM + CUDA, because it is a very cost-effective solution with an extreme amount of processing power.
Like Mike said the downside of smartphone processors is their limited life time but there are alternatives around the corner which don't have this limitation.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tmbinc

  • Regular Contributor
  • *
  • Posts: 130
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #6 on: March 09, 2018, 06:05:13 am »
Without turning this into an endless discussion: the killer reason why GPUs are a bad choice for scopes is that you need fast, low-latency, random-access memory for "phosphor" simulation ("DPO" etc.). GPUs are typically optimized for different workloads; a possible exception is eSRAM GPUs, but other than the Xbox One's, I'm not aware of a GPU that offers this. Cache memory is not going to help, very unlike most 3D rendering workloads.
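As a sketch of why that access pattern is cache-hostile, here is a minimal software model of phosphor-style persistence (grid size and decay factor are illustrative assumptions, not any real scope's numbers). Each incoming waveform scatters increments across a 2-D intensity map, which is exactly the data-dependent random access described above:

```python
import numpy as np

H, W = 256, 1024                          # intensity grid (illustrative size)
phosphor = np.zeros((H, W), dtype=np.float32)

def accumulate(grid, samples, decay=0.9):
    """Fade the old trace, then splat one waveform into the grid."""
    np.multiply(grid, decay, out=grid)    # exponential persistence decay
    cols = np.arange(len(samples)) % grid.shape[1]
    rows = np.clip(samples, 0, grid.shape[0] - 1)
    np.add.at(grid, (rows, cols), 1.0)    # scattered, data-dependent writes

accumulate(phosphor, np.random.randint(0, H, size=W))
```

The write addresses depend entirely on the sample values, so there is no locality for a cache or a GPU's coalesced-memory hardware to exploit.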

The other issue is the data rate - you'd need at least something like PCIe Gen3 x2 for even just one 1 GS/s channel (and then you STILL need glue logic to interface your ADC to whatever high-speed interface your SoC has). Almost no low-end SoCs offer PCIe at that bandwidth. The GPU isn't going to help a lot with streaming data, so the only operation that could really be CUDA-enabled is FFT. But it seems a cache-optimized FFT algorithm on a CPU will beat most GPUs.
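The lane-count claim checks out with simple arithmetic (the per-lane figures below are the commonly quoted effective rates after encoding overhead, used here as approximations):

```python
# One 1 GS/s, 8-bit channel streams 1 GB/s of samples, sustained.
sample_rate = 1e9                  # samples per second
bytes_per_sample = 1               # 8-bit ADC
stream = sample_rate * bytes_per_sample       # bytes per second

# Approximate effective PCIe throughput per lane:
gen2_lane = 500e6                  # ~500 MB/s (PCIe 2.0, 8b/10b encoding)
gen3_lane = 985e6                  # ~985 MB/s (PCIe 3.0, 128b/130b encoding)

# A single Gen3 lane has no headroom; two lanes comfortably fit one channel.
print(stream > gen3_lane, stream <= 2 * gen3_lane)
```

And that is for one channel only; four channels, or a 10-bit ADC, scale the requirement accordingly.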

In short, none of the perf-critical work on scopes really maps well to GPUs.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 1096
  • Country: si
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #7 on: March 09, 2018, 06:47:18 am »
Quote from: nctnico
I think the Xilinx Zync based oscilloscopes from GW Instek and Siglent have proven that to be the wrong way of doing things. Development time for FPGA and ASIC is long and there are limits to what you can do in an ASIC or FPGA anyway. The next thing we'll probably see are oscilloscopes which use a SoC with ARM + CUDA because it is a very cost effective solution which has an extreme amount of processing power.
Like Mike said the downside of smartphone processors is their limited life time but there are alternatives around the corner which don't have this limitation.

Well, FPGAs may be a huge pain in the ass to program for, but they don't take that much development time. Design of digital ASICs actually has a lot in common with FPGA programming, and some "cheap" ASICs are actually built similarly (it's sort of an FPGA fabric with its configuration hardwired, like a mask ROM).

But oscilloscopes are very good jobs for FPGAs. To begin with, CPUs and SoC devices don't have any good way of connecting really fast ADCs to them. It's not possible to bitbang the huge, fast LVDS bus out of the ADC, let alone four ADCs. On the other hand, an FPGA can implement any weird interface and run it at very high speed. The data that comes through that interface is processed on the fly as it comes in, to reduce the need to move it in and out of memory so much. And when you do need to work with a lot of changing data in memory, you can use the memory that's embedded in the FPGA fabric. Because that memory is distributed in the form of small chunks, it has massive bandwidth; even a fairly pedestrian FPGA can move things in and out of memory with random access at 1000 Gbit/s. Such a thing is pretty useful when merging images of multiple waveforms into a variable-persistence scope image.
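That 1000 Gbit/s figure is plausible precisely because block RAM is distributed. A hedged estimate with illustrative mid-range numbers (block count, port width and clock are assumptions, not a specific device):

```python
# Aggregate on-chip block-RAM bandwidth of a hypothetical mid-range FPGA.
blocks = 400            # independent block RAMs (illustrative count)
port_bits = 72          # bits per access in wide mode
ports = 2               # true dual-port: both ports usable every cycle
clock_hz = 250e6        # a modest fabric clock

bandwidth_gbit = blocks * port_bits * ports * clock_hz / 1e9
print(bandwidth_gbit)   # thousands of Gbit/s, every access fully random
```

Even with far more conservative assumptions the total stays above the quoted 1000 Gbit/s, because every block contributes an independent random-access port each cycle.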

The high-end scopes that run Windows on a PC inside still make heavy use of FPGAs (they don't tend to use ASICs because the market is too small). The ADCs pump tens of GS/s into a big, beefy FPGA that shuffles the data around and stores it in external memory chips over a really wide bus. Then, typically, that FPGA connects to the PC over PCIe, since it's one of the fastest interfaces on modern PCs and does not need many wires (just a x4 link gives many GB/s). When it comes to getting all that data to the screen, the PC might do the heavy lifting, or in some scopes the FPGA is used to accelerate the process; generally you don't get the super-high waveform update rates because of this. The PC also tends to handle the math channels and a lot of serial decode (because these scopes can decode literally 15 to 30 different protocols).

The non-PC-based scopes usually put the CPU in the copilot's seat, where the FPGA/ASIC generates the entire picture of the waveform and the CPU just overlays its menus on top of it. That is the only reasonable way to show 1 million waveforms per second on a screen updating at a buttery-smooth 60 fps.
« Last Edit: March 09, 2018, 06:49:07 am by Berni »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 14131
  • Country: nl
    • NCT Developments
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #8 on: March 09, 2018, 07:32:41 am »
Quote from: tmbinc
Without turning this into an endless discussion, but the killer reason why GPUs are a bad choice for scopes is that you need fast, low-latency, random-access memory for "phosphor" simulation  ("DPO" etc.). GPUs are typically optimized for different workloads; an exception are maybe ESRAM GPUs, but other than the Xbox One, I'm not aware of a GPU that offers this. Cache memory is not going to help, very unlike most 3D rendering workloads.

The other issue is the data rate - you'd need at least something like PCIe Gen3 x2 for even just one 1GS/s channel (and then you STILL need glue logic to interface your ADC to whatever high speed interface your SoC has). Almost no low-end SoCs offer PCIe at that bandwidth. The GPU isn't going to help a lot with streaming data, so the only operation that could really be CUDA-enabled is FFT. But it seems a cache-optimized FFT algorithm for a CPU will beat most GPUs.

In short, none of the perf-critical work on scopes really maps well to GPUs.
In the end nobody really cares about DPO and high waveform rates (and besides that, it is a relatively simple task for an FPGA). What fits excellently onto a CPU/GPU is FFT, waveform analysis, math, search, decoding, etc. - typically the stuff you only find on high-end PC-based scopes. PC-based because that platform offers a lot of processing power. In the low-end and mid-range market the Xilinx Zynq based scopes are the only ones which can do 1 Mpts FFT and deep-memory decoding at a reasonable speed. On other models with deep memory you'll see that advanced features are missing, limited and/or slow. I'm 100% sure that what is missing in the low-end and mid-range oscilloscopes is CPU power.

@Berni: I never wrote that there would be no acquisition system involved. Only that a lot of low end and mid-range scopes don't have the processing power to do anything useful with deep memory.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 4837
  • Country: us
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #9 on: March 09, 2018, 03:18:51 pm »
I don't actually know what CPUs are used in scopes these days, but where the FPGA really shines is pumping large amounts of data between the ADC(s) and RAM without a lot of intervention from the CPU. It can take a pretty significant volume before an ASIC becomes cheaper than an FPGA, and one disadvantage is that you can't update the ASIC later.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 1096
  • Country: si
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #10 on: March 09, 2018, 05:34:25 pm »
Yeah, I have seen some scopes that slow down to a snail's pace once you tell them to use anything more than 1 Mpts of memory. And it's quite common to secretly limit the FFT windows behind the scenes to hundreds of kpts so that they are easier to calculate.

I do care about waveform update rates, though. It's what gives a scope that nice, responsive analog-scope feel. But it only helps up to a certain point: 50K and 1M waveform updates per second are going to look pretty much the same as long as the result is drawn at a solid 60 fps. Sure, the higher update rate might make it easier to see some glitches, but that's about it - as long as it's not the 500 updates per second that old digital scopes used to have, where the CPU had to do all the heavy lifting of turning samples into pixels. I have a scope that is this slow, and it's one of the major reasons why I tend to use my Agilent MSO6000 more (well, that and the fact that the slow one is loud and takes a long time to boot).
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 10776
  • Country: gb
    • Mike's Electric Stuff
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #11 on: March 09, 2018, 08:08:37 pm »
Quote from: nctnico
In the end nobody really cares about DPO and high waveforms rates
Speak for yourself.

Analysis of large acquisitions can occasionally be useful, but for me, high update rate and fast UI response are the things that contribute most to productivity.
« Last Edit: March 09, 2018, 08:10:08 pm by mikeselectricstuff »
Youtube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline TheUnnamedNewbie

  • Frequent Contributor
  • **
  • Posts: 524
  • Country: be
  • Sending EM through plastic.
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #12 on: March 09, 2018, 09:17:27 pm »

Quote from: nctnico
In the end nobody really cares about DPO and high waveforms rates (and besides that it is relatively a simple tasks for an FPGA).


I'm sorry, but what? Waveform rates are vital when you have to catch glitches. The lower the waveform rate, the longer (on average) you will have to wait until you capture a glitch.

DPO can be nice for seeing visually whether there is something strange going on (when you don't yet know if there is). I am not going to claim it is vital for everyone, but to say "nobody really cares" is just an incorrect statement.

I also don't get the claim that FPGAs are a pain to program. Perhaps if you are completely new to it, or if you come from a pure software background, but the digital hardware people I know have done some pretty impressive things in short timespans. I don't see why scope manufacturers can't do the same.

That said, the Zynq needs very close cooperation between the software and FPGA teams to really get the most out of it.
The best part about magic is when it stops being magic and becomes science instead
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 14131
  • Country: nl
    • NCT Developments
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #13 on: March 09, 2018, 09:34:09 pm »

Quote from: nctnico
In the end nobody really cares about DPO and high waveforms rates (and besides that it is relatively a simple tasks for an FPGA).

Quote from: TheUnnamedNewbie
I'm sorry but what? Waveform rates are vital when you have to catch glitches. The lower that waveform rate the longer (on average) you will have to wait untill you capture a glitch.
You can prove mathematically that you can't be 100% sure of catching a glitch if you have to rely on the waveform update rate; for the probability to be 1, the update rate would have to be infinite. Also, the quoted waveform update rates are only reached at very specific settings (most of the time in dot mode only), so they are a nice number for the brochure but pretty useless beyond that. A scope with over 10k waveforms/s will work just fine (smooth).
And if you want to capture a glitch, then use the trigger system or functions which can check for anomalies in a signal. But the latter only works well on scopes with lots of memory AND a decent amount of processing power. Added bonus: you don't have to stare at the screen without blinking.
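The probabilistic point can be made concrete with a toy blind-time model (all numbers are illustrative; real capture windows depend on timebase and memory depth). The scope only "sees" the signal for a fraction of real time, so a randomly timed glitch is caught with a probability that approaches but never reaches 1:

```python
def catch_probability(wfms_per_s, capture_s, observation_s, glitch_rate_hz):
    """Chance of catching at least one randomly timed glitch.

    Toy model: the scope is blind between acquisitions, so each glitch
    independently lands inside a capture window with probability equal
    to the capture duty cycle.
    """
    duty = min(1.0, wfms_per_s * capture_s)       # fraction of time observed
    expected_glitches = glitch_rate_hz * observation_s
    return 1.0 - (1.0 - duty) ** expected_glitches

# 10k wfm/s at 1 us per capture is a 1% duty cycle; watching a 10 Hz
# glitch for a minute gives a very good, but not certain, catch:
p = catch_probability(10_000, 1e-6, 60, 10)
print(p)   # high, yet strictly below 1 while the duty cycle is below 1
```

The probability only hits exactly 1 when the duty cycle reaches 1, i.e. the scope never goes blind, which matches the "infinite update rate" argument.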

I'm not new to programming FPGAs, quite the opposite, but I know that if I can implement something in software it will be finished sooner and it will be easier to test. It is also easier to find software engineers than FPGA engineers. If you have a decent hardware platform, then you can implement advanced features quickly using Lua or Python. Besides that, advanced features like math and decoding are sequential processes which map better onto software than onto an FPGA (which is good at doing things in parallel). There is a reason that complex FPGA designs usually contain a processor which runs software for the sequential tasks.
« Last Edit: March 09, 2018, 09:44:23 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 10776
  • Country: gb
    • Mike's Electric Stuff
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #14 on: March 09, 2018, 09:41:38 pm »

Quote from: nctnico
In the end nobody really cares about DPO and high waveforms rates (and besides that it is relatively a simple tasks for an FPGA).

Quote from: TheUnnamedNewbie
I'm sorry but what? Waveform rates are vital when you have to catch glitches. The lower that waveform rate the longer (on average) you will have to wait untill you capture a glitch.

Quote from: nctnico
You can prove mathematically that you can't be 100% sure to catch a glitch if you have to rely on the waveform update rate.

Nonsense.
Improved waveform rates mean you increase the probability of catching a particular glitch. 95% is way, way more productive than 5%. It's all about getting to the problem faster.
And whether or not you can get to 100% depends entirely on the statistical occurrence of the glitch. These things are rarely totally random, so if the time between glitches is always greater than the worst-case update interval, you will catch it 100% of the time.
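That deterministic case can be sketched with a trigger-and-rearm toy model (the dead-time value is illustrative): if glitches are always spaced wider than the scope's worst-case blind interval, every one of them finds the scope armed.

```python
def caught(glitch_times, dead_time):
    """Return the glitches captured by a trigger-based scope that is
    blind for dead_time seconds after each triggered acquisition."""
    hits, rearmed_at = [], 0.0
    for t in sorted(glitch_times):
        if t >= rearmed_at:          # scope is armed: the glitch triggers it
            hits.append(t)
            rearmed_at = t + dead_time
    return hits

# Spacing wider than the dead time: 100% caught.
print(caught([0.1, 0.3, 0.5, 0.7], dead_time=0.15))
# A burst tighter than the dead time: only the first one is seen.
print(caught([0.10, 0.11, 0.12], dead_time=0.15))
```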


Youtube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 14131
  • Country: nl
    • NCT Developments
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #15 on: March 09, 2018, 09:55:47 pm »

Quote from: nctnico
In the end nobody really cares about DPO and high waveforms rates (and besides that it is relatively a simple tasks for an FPGA).

Quote from: TheUnnamedNewbie
I'm sorry but what? Waveform rates are vital when you have to catch glitches. The lower that waveform rate the longer (on average) you will have to wait untill you capture a glitch.
Quote from: nctnico
You can prove mathematically that you can't be 100% sure to catch a glitch if you have to rely on the waveform update rate.

Quote from: mikeselectricstuff
Nonsense.
Improved waveform rates mean you increase the probability of catching a particular glitch. 95% is way, way more productive than 5%  It's all about getting to the problem faster.
I'm not saying that waveform update rates are useless but that they have to be put into perspective. With 10k waveforms/s (most modern scopes can do way more) you'll be able to find the glitches which occur often and thus be productive.
Quote from: mikeselectricstuff
And whether or not you can get to 100% depends entirely on the statistical occurrence of the glitch. these things are rarely totally random, so if the time between glitches is always greater than the worst-case update interval you will catch it 100% of the time.
Still, you can't be sure whether you missed it or not, AND you have to keep staring at the screen. If you are hunting for an unknown glitch - for example, to check that an interrupt routine never exceeds its execution time - then setting a trigger for the event will work much better. But let's not derail this thread by going into another more-waveforms/s-is-better discussion.

In my opinion a faster processor in an oscilloscope allows it to do more with lower development costs, lower component cost (*) and thus lower cost to the 'consumer'.

* remember an FPGA has a limited space to put the functionality in and partial reconfiguration only goes so far. You can't simply add more flash to have more features like you can do with software.
« Last Edit: March 09, 2018, 10:18:19 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Hydron

  • Regular Contributor
  • *
  • Posts: 144
  • Country: gb
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #16 on: March 10, 2018, 02:09:41 am »
I would see the main advantage of a "smartphone CPU" (SoC) as being able to use something like Android to build the UI on, rather than creating yet another system from scratch with poor usability optimisations (vs. leveraging the insane amount of R&D that has gone into making phone OSes fast and intuitive to use).

Waveform processing, ADC interfacing etc. belong in an FPGA/ASIC, as previously mentioned. Interfacing to a SoC should be possible by using the FPGA to drive whatever is in place for cameras etc., and why not use a screen aimed at tablet use - it will have the right interface and a touch controller, and will be a reasonable resolution to boot, unlike the barely-beating-VGA rubbish most scopes still come with. Tasks requiring processing of large portions of the acquisition memory that cannot be done on the FPGA could be tricky, though, if there's no easy way to feed the data quickly into the SoC (and process it quickly without bogging down the system once it's there).

Parts going obsolete will be a killer, as previously mentioned, though maybe there is something aimed at a slightly different market (e.g. Android-based car entertainment systems) that might be better in this respect? Or potentially the acquisition system can be decoupled from the SoC enough to allow the latter to be replaced with a realistic amount of NRE? (I am dubious about this given how painful product changes can be.)
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 14131
  • Country: nl
    • NCT Developments
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #17 on: March 10, 2018, 02:59:02 am »
Quote from: Hydron
I would see the main advantage of a "Smartphone CPU" (SoC) as being able to use something like Android to built the UI on, rather than creating yet another system from scratch with a poor optimisations for usability (vs leveraging the insane amount of R&D that has gone into making phone OS's fast and intuitive to use).

Waveform processing, ADC interfacing etc belongs in a FPGA/ASIC as previously mentioned. Interfacing to a SoC should be possible using the FPGA to drive whatever is in place for cameras etc, and why not use a screen aimed at tablet use - they will have the right interface, a touch controller and will be a reasonable resolution to boot, unlike the barely-beating-VGA rubbish most scopes come with still. Tasks requiring processing of large portions of the acquisition memory that cannot be done on the FPGA could be tricky though if there's not an easy way to feed it quickly into the SoC (and process it quickly without bogging down the system once there).
Nonsense. You are talking about 20 Mpts to 100 Mpts worth of data. Even with 16 bits per point, transferring the data takes a fraction of a second on a single PCI Express 2.0 lane (500 MB/s), so considering other interfaces like Ethernet, camera or MIPI isn't necessary. Also, with a multi-core SoC the UI doesn't need to slow down at all while doing waveform processing: one core can do the UI and the other(s) can do calculations. The recent GW Instek scopes are a good example of what can be achieved with a decent amount of processing power on board.
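The "fraction of a second" claim is easy to verify, using the 500 MB/s effective PCIe 2.0 x1 figure from the post:

```python
# Time to move a full deep-memory record into the SoC.
points = 100e6            # 100 Mpts, the worst case mentioned
bytes_per_point = 2       # 16-bit samples
lane_bytes_per_s = 500e6  # one PCIe 2.0 lane, effective throughput

seconds = points * bytes_per_point / lane_bytes_per_s
print(seconds)            # 0.4 s for the whole record
```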
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline kirill_ka

  • Regular Contributor
  • *
  • Posts: 71
  • Country: ru
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #18 on: March 10, 2018, 06:18:07 am »
Quote from: Hydron
I would see the main advantage of a "Smartphone CPU" (SoC) as being able to use something like Android to built the UI on, rather than creating yet another system from scratch with a poor optimisations for usability (vs leveraging the insane amount of R&D that has gone into making phone OS's fast and intuitive to use).
That's funny; I thought that the Android UI is "yet another system from scratch with poor optimisations for usability". I don't think the Android UI has a lot of features that help oscilloscope usability. The Linux kernel itself is far from a real-time OS, and adding generic OS services will degrade stability and smoothness.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 1096
  • Country: si
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #19 on: March 10, 2018, 07:53:53 am »
I have seen one scope use a CPU meant for mobile devices. It had the clever trick of having the acquisition FPGA emulate a camera so that it could connect to the CPU's camera interface bus. That way lots of data could be transferred out of it very efficiently.

Another possibility is to not even try to get lots of data to the CPU. You can draw your menus on the CPU and output them over a display interface, but connect that interface to the FPGA. There the FPGA composites the video stream of the menus onto its own video stream of the waveform and then outputs that to the real display. This is also a good job for an FPGA, since it can merge the two video streams on the fly as they pass through it.
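A per-pixel sketch of that compositing step (the magenta key colour is an illustrative choice; real designs might use an alpha bit instead): wherever the CPU's menu layer shows the key colour, the FPGA lets the waveform pixel through.

```python
KEY = (255, 0, 255)   # transparency key colour (assumption for illustration)

def composite(menu_px, waveform_px):
    """One pixel of the merge the FPGA performs on the fly."""
    return waveform_px if menu_px == KEY else menu_px

# Three pixels of a scanline: keyed-out, opaque menu, keyed-out.
menu     = [(255, 0, 255), (30, 30, 30), (255, 0, 255)]
waveform = [(0, 200, 0),   (0, 200, 0),  (0, 80, 0)]
frame = [composite(m, w) for m, w in zip(menu, waveform)]
print(frame)
```

Because each output pixel depends only on the two incoming pixels at the same position, the merge needs no frame buffer at all, which is why it suits streaming FPGA logic so well.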
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 14131
  • Country: nl
    • NCT Developments
Re: Smartphone CPUs in digital oscilloscopes?
« Reply #20 on: March 10, 2018, 08:08:54 am »
Quote from: Berni
I have seen one scope use a CPU meant for mobile devices. They had a clever trick of having the acquisition FPGA emulate a camera so that it could connect to the CPUs camera interface bus. That way lots of data could be transferred very efficiently out of it.

Another possibility is to not even try to get lots of data to the CPU. You can draw your menus in the CPU and output them over a display interface, but connect that interface to the FPGA. There the FPGA composites together the video stream of the menus on to its own video stream of the waveform and then outputs that to the real display. Such a thing is also a good job for a FPGA since it can merge the two video signals on the fly as they are passing trough it.
That is actually a very old trick. The Tektronix TDS500, 600 and 700 series used overlays as well (CPU graphics plane over acquisition-system graphics plane), and I'm pretty sure most (if not all) modern DSOs work that way. The biggest downside is that the actual acquisition memory isn't available to the CPU, so the FPGA or ASIC has to do all the heavy lifting, which limits the features you can build into a DSO.

BTW, the Micsig TO1000 series goes a step further and can pipe the FPGA + CPU display output into a hardware video encoder to record a video file of the signal.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

