| Where is the Keysight Megazoom V ASIC? |
| nctnico:
--- Quote from: wraper on May 30, 2024, 05:40:47 pm --- --- Quote from: nctnico on May 30, 2024, 03:58:23 pm --- --- Quote from: Kleinstein on May 30, 2024, 08:20:29 am ---I would not bet on a new ASIC. The advantage of an ASIC over an FPGA is possibly lower power consumption at the same performance. However ASICs cannot use the same high-end processes as FPGAs due to the very high mask costs. So the advantage is getting smaller and smaller - if there is any left at all. --- End quote --- Nowadays using a GPU for doing all the heavy lifting is a good choice for building a DSO. Performance, power consumption, production cost, software development cost and flexibility wise there is no match. --- End quote --- A GPU is pretty much useless for what the Megazoom ASIC does. The problem is not CPU or graphical processing power. --- End quote --- You are forgetting that a GPU is mostly a processor which can do massive parallel calculations quickly because it has lots of processors with a highly optimised memory access channel. Look at languages like CUDA and OpenCL, which are used to program massively parallel processors like GPUs. What a GPU can do very efficiently is take a bunch of acquisitions and process them into a trace in parallel, for example. A CPU can't get close to this speed, an FPGA would need a massive amount of logic and development time, and neither will reach the energy efficiency of a GPU. And compared to an ASIC, the development cycle is shorter and flexibility is much better for a GPU-based solution. The only problem is to get the data into the GPU memory. It is possible, though, that Keysight has developed an ASIC containing an off-the-shelf GPU + ADC interface as a building block for a DSO. Whatever Keysight comes up with has to be a leap forward somehow. |
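[Editor's note: the "process a bunch of acquisitions into a trace in parallel" idea above can be sketched in a few lines. This is a minimal NumPy illustration (vectorized on the CPU rather than GPU-resident, but the data-parallel pattern is exactly what a CUDA/OpenCL kernel would run); all sizes and the simulated ADC data are made up for the example.]

```python
import numpy as np

# Hypothetical sizes: 1000 acquisitions of 1024 samples each, rendered
# into an intensity-graded display with 128 vertical rows.
N_ACQ, N_SAMP, N_ROWS = 1000, 1024, 128

rng = np.random.default_rng(0)
# Simulated 8-bit ADC data: a noisy sine, identical timebase per acquisition.
t = np.linspace(0, 2 * np.pi, N_SAMP)
acqs = (127 + 100 * np.sin(t) + rng.normal(0, 8, (N_ACQ, N_SAMP))).astype(np.int64)
acqs = np.clip(acqs, 0, 255)

# Map each 8-bit sample to a display row (0..N_ROWS-1).
rows = (acqs * N_ROWS) // 256

# Intensity-graded trace: for each horizontal position, count how many
# acquisitions hit each display row. Every (acquisition, sample) pair is
# independent -- exactly the kind of work a GPU parallelizes.
hist = np.zeros((N_ROWS, N_SAMP), dtype=np.int64)
np.add.at(hist, (rows, np.arange(N_SAMP)), 1)

# Each display column accounts for every acquisition exactly once.
assert hist.sum(axis=0).tolist() == [N_ACQ] * N_SAMP
```

On a GPU, each thread would handle one (acquisition, sample) pair and accumulate into the histogram with an atomic add; the result is the familiar persistence/intensity display.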
| wraper:
--- Quote from: nctnico on May 30, 2024, 06:11:10 pm ---You are forgetting that a GPU is mostly a processor which can do massive parallel calculations quickly [...] --- End quote --- Bingo! |
| 2N3055:
--- Quote from: wraper on May 30, 2024, 07:32:54 pm --- --- Quote from: nctnico on May 30, 2024, 06:11:10 pm ---You are forgetting that a GPU is mostly a processor which can do massive parallel calculations quickly [...] --- End quote --- Bingo! --- End quote --- Exactly! |
| nctnico:
--- Quote from: wraper on May 30, 2024, 07:32:54 pm --- --- Quote from: nctnico on May 30, 2024, 06:11:10 pm ---You are forgetting that a GPU is mostly a processor which can do massive parallel calculations quickly [...] --- End quote --- Bingo! --- End quote --- But as I implied: it doesn't make sense to have an FPGA just to interface between a SoC and the ADCs. Most SoCs don't even have a memory interface with high enough bandwidth, and off-the-shelf FPGAs with GPU + SoC don't offer very high performance per dollar. If your aim is to build a high-performance DSO and you have the money to spin an ASIC, it makes a lot of sense to create an ASIC which has the ADC interface, trigger engine, memory interface, processor and GPU on one chip. All of these are off-the-shelf IP blocks you can buy (or Keysight already has) and turn into a chip with relatively low NRE costs, say somewhere around US$10M to US$50M. You can build an entire line of DSOs based on such a chip where the differentiators are the ADC, analog frontend and supported software options. |
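[Editor's note: the memory-bandwidth point above is easy to put numbers on. A back-of-the-envelope calculation with illustrative figures, not any specific Keysight model:]

```python
# Sustained ADC data rate for a hypothetical mid-range DSO.
channels = 4
sample_rate = 5e9        # 5 GSa/s per channel (illustrative)
bytes_per_sample = 1     # 8-bit ADC

adc_rate = channels * sample_rate * bytes_per_sample   # bytes/s
print(f"{adc_rate / 1e9:.0f} GB/s sustained")          # 20 GB/s sustained
```

That 20 GB/s must flow continuously into acquisition memory; it is in the same ballpark as an entire PCIe 4.0 x16 link (roughly 32 GB/s raw), which is why a typical SoC memory interface is the bottleneck and why getting the data into GPU memory is the hard part.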
| mikeselectricstuff:
--- Quote from: Kleinstein on May 30, 2024, 08:20:29 am ---I would not bet on a new ASIC. The advantage of an ASIC over an FPGA is possibly lower power consumption at the same performance. --- End quote --- And lower cost, though only after the NRE is recovered. |
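[Editor's note: the break-even point alluded to above is simple arithmetic. All figures below are illustrative, not Keysight's actual costs; the NRE is picked from the US$10M-US$50M range mentioned earlier in the thread.]

```python
# Illustrative numbers only -- not actual Keysight costs.
nre = 20_000_000          # one-time ASIC NRE (masks, IP, verification), USD
asic_unit_cost = 50       # per-chip cost once in volume production, USD
fpga_unit_cost = 450      # comparable high-end FPGA unit cost, USD

saving_per_unit = fpga_unit_cost - asic_unit_cost
break_even_units = nre / saving_per_unit
print(f"ASIC is cheaper after {break_even_units:,.0f} units")  # 50,000 units
```

With these assumptions the ASIC only pays off after 50,000 units, which is why a vendor designs one chip to cover an entire product line rather than a single model.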