Author Topic: A High-Performance Open Source Oscilloscope: development log & future ideas  (Read 28657 times)


Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 3638
  • Country: hr
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #250 on: December 18, 2020, 09:20:32 am »
A digital scope with a screen (as distinct from a digitizer that samples data inside a data acquisition system) needs to serve two functions:
- emulate the behaviour of a CRT oscilloscope on the screen
- function as a digitizer in the background, so that all the data it captures is sampled properly and contains no mathematical nonsense.

The first point is well served by decimating to the screen with peak detect.
The second is well served by a large buffer that ensures the highest sampling rate most of the time, and by downsampling with filtering to ensure there are no aliasing artefacts in data sampled at a lower rate.
In that case there must be an obvious warning that at this timebase you are working with limited bandwidth, and also a way to disable the filtering and fall back to simple decimation by discarding samples, because that raw data is what people sometimes expect.
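To make the aliasing point concrete, here is a toy sketch (my own illustration, not project code): a tone above the decimated Nyquist rate keeps nearly full amplitude when samples are simply discarded, while even a crude averaging filter knocks it down.

```python
import math

def sample_tone(freq_hz, fs_hz, n):
    """Sample a sine of freq_hz at fs_hz for n points."""
    return [math.sin(2 * math.pi * freq_hz * i / fs_hz) for i in range(n)]

def decimate_discard(x, m):
    """Raw decimation: keep every m-th sample, discard the rest."""
    return x[::m]

def decimate_average(x, m):
    """Average each block of m samples before decimating (a crude low-pass)."""
    return [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]

def peak(x):
    return max(abs(v) for v in x)

# 400 MHz tone sampled at 1 GSa/s, decimated by 4 (decimated Nyquist = 125 MHz).
tone = sample_tone(400e6, 1e9, 4000)
raw = decimate_discard(tone, 4)    # alias survives at nearly full amplitude
filt = decimate_average(tone, 4)   # the averaging filter attenuates it ~4x
```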

There is no simple, single solution that fits all cases.

For instance, RMS measurements should be performed on full-speed sample data, to take into account all the high-energy content. Risetime needs the fastest edge information it can get. And so on.
Current scopes make all kinds of compromises to suit their optimization targets...

 

Online JohnG

  • Frequent Contributor
  • **
  • Posts: 373
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #251 on: December 18, 2020, 02:34:21 pm »
Makes sense - but, in that case, why not omit the input filter altogether and allow the user to cautiously use their instrument up to Nyquist?  All filters risk eliminating signals that you intend to look at - part of operating a scope is understanding approximately what you expect to appear on the screen before you even probe it.

Because most of the time, for general purpose, you will want the antialias filter in place. The ability to bypass it might be nice, though.

Cheers,
John
"Those who learn the lessons of history are doomed to know when they are repeating the mistakes of the past." Putt's Law of History
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: nl
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #252 on: December 18, 2020, 05:31:10 pm »
Simply rendering all samples with intensity shading is much better than decimation for showing the shape of a modulated signal.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4373
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #253 on: December 18, 2020, 05:31:53 pm »
That makes sense.  So, as I understand it, the modes that need to be supported are:

Normal - Applies a downsampling filter* when the sampling rate is under the normal ADC rate; otherwise does not filter
Decimate - Applies only decimation, i.e. dropping samples, when the sampling rate is under the normal ADC rate; otherwise identical to Normal
ERES - Averages consecutive samples to increase sample resolution up to 16 bits, depending on memory availability
Average - Averages consecutive waveforms to compute one output waveform; otherwise behaves like Normal mode in terms of decimation/downsampling
Peak detect - Records a min and max during decimation and stores these instead of individual samples. Halves available memory.

*Exact design of this filter to be worked out (quite possibly CIC given the simplicity?)

Some consideration needs to be given to supporting the 12-bit/14-bit modes, but they would require external filters unless aliasing is permitted in those modes.

Note that downsampling would be needed once the timebase exceeds a total timespan of ~240 ms, or about 20 ms/div, on the current prototype with ~240 Mpts of memory available.  With 4x the memory, downsampling is still needed beyond 50 ms/div.  It is hard to get around the tremendous amount of memory that sampling at 1 GSa/s requires.
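As a sanity check on those figures, the arithmetic can be sketched like this (a toy calculation of my own, assuming a 12-division screen and standard 1-2-5 timebase steps):

```python
def max_full_rate_timebase_ns(mem_pts, fs_hz, divisions=12):
    """Largest 1-2-5 timebase (in ns/div) that still fits in memory at the
    full sample rate; beyond this, downsampling is unavoidable."""
    best = None
    for exp in range(0, 10):              # decades from 1 ns/div to 1 s/div
        for mult in (1, 2, 5):
            tb_ns = mult * 10 ** exp
            # samples needed to cover the whole screen at the full rate
            needed = tb_ns * divisions * fs_hz // 1_000_000_000
            if needed <= mem_pts:
                best = tb_ns
    return best
```

With 240 Mpts at 1 GSa/s this gives 20 ms/div, and with 960 Mpts it gives 50 ms/div, matching the figures above.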

In all modes, certain auto measurements can work without acquiring to memory and therefore can work at the full sample rate.  These are:
- The frequency counter
- Vmax, Vmin, Vp-p
- Vrms
- Vavg

though bounding by cycles (e.g. Vrms over a cycle of a wave) does require memory acquisition and therefore would be affected by sampling modes.

Have I missed anything?
 

Online gf

  • Frequent Contributor
  • **
  • Posts: 468
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #254 on: December 18, 2020, 06:40:30 pm »
Normal - Applies a downsampling filter* when the sampling rate is under the normal ADC rate; otherwise does not filter
Decimate - Applies only decimation, i.e. dropping samples, when the sampling rate is under the normal ADC rate; otherwise identical to Normal
ERES - Averages consecutive samples to increase sample resolution up to 16 bits, depending on memory availability
Average - Averages consecutive waveforms to compute one output waveform; otherwise behaves like Normal mode in terms of decimation/downsampling
Peak detect - Records a min and max during decimation and stores these instead of individual samples. Halves available memory.

I don't see a fundamental difference between "Normal" and "ERES". Both decimate with prior filtering; the variables are the kind and order of the filter, and the number of bits (>= the number of ADC bits) stored per sample (which has an impact on memory consumption).

I would consider "Average" not as a separate mode, but rather as an optional step in the acquisition pipeline that can be combined with Normal, Decimate or ERES (it makes no sense in conjunction with peak detect, of course). Since averaging increases the dynamic range as well, one might also consider storing the data with more bits per sample than delivered by the previous stage in the acquisition pipeline.
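The resolution gain from averaging can be quantified. A small sketch (function names are mine): averaging N samples of uncorrelated noise adds 0.5*log2(N) effective bits, so reaching 16 bits from an 8-bit ADC takes on the order of 4^8 = 65536 samples per output point.

```python
import math

def eres_bits_gained(n_avg):
    """Extra effective bits from averaging n_avg samples of white noise:
    the noise drops by sqrt(n_avg), i.e. 0.5*log2(n_avg) bits."""
    return 0.5 * math.log2(n_avg)

def eres_store_bits(adc_bits, n_avg, max_bits=16):
    """Bits per stored sample: ADC bits plus the averaging gain, capped."""
    return min(max_bits, math.ceil(adc_bits + eres_bits_gained(n_avg)))
```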

EDIT:

Quote
Note that the downsampling would be needed once the timebase exceeds a total timespan of ~240ms, or about 20ms/div, on the current prototype with ~240Mpts memory available.   With 4x the memory, downsampling is still needed once beyond 50ms/div.  Hard to get around the tremendous amount of memory that just sampling at 1GSa/s requires.

There needs to be some default, but IMO the user should still be able to control the trade-offs between acquisition mode, sampling rate (of the stored samples), record size, and the number of records that can be stored (within the feasible limits).
« Last Edit: December 18, 2020, 07:13:47 pm by gf »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21236
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #255 on: December 18, 2020, 07:22:21 pm »
When it comes to filtering, it may be better to do this as a (first) post-processing step, before any other operation. In my experience it is useful to be able to adjust the filtering on existing acquisition data (GW Instek does this). Care must be taken, though, to avoid start-up issues and to use real data. The R&S RTM3004, for example, filters decimated data and doesn't take initial filter initialisation into account, leading to weird behaviour and thus limiting the usefulness of filtering.

Averaging is another interesting case. One of the problems is that ideally you'd save the averaged data so you can scroll left/right and zoom in/out. On some oscilloscopes (again the R&S RTM3004) the averaged trace disappears if you move the trace. I second the suggestion of being able to combine acquisition modes, but at some point you'll be creating a new trace in CPU memory and using the acquisition data only to update that trace.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online gf

  • Frequent Contributor
  • **
  • Posts: 468
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #256 on: December 18, 2020, 08:14:12 pm »
When it comes to filtering, it may be better to do this as a (first) post-processing step, before any other operation. In my experience it is useful to be able to adjust the filtering on existing acquisition data (GW Instek does this).

For pre-decimation filtering this would imply that the data would need to be stored at the full sampling rate, so that the first processing steps can filter and decimate it. That likely defeats the purpose (lower memory usage) for which a lower sampling rate than the maximum was selected in the first place.

For any other kind of filtering which does not need to be done prior to decimation, I'm basically with you.

Quote
Care must be taken, though, to avoid start-up issues and to use real data. The R&S RTM3004, for example, filters decimated data and doesn't take initial filter initialisation into account, leading to weird behaviour and thus limiting the usefulness of filtering.

That's a general issue when filtering is done in post-processing on the stored data, where only a set of records is available rather than a continuous stream. But where should the initial filter state come from? Do you want to ask the user to enter initial values for all the state variables of the filter? Like: "Please enter the values of the 199 samples preceding the captured buffer" (for a 200-tap FIR filter). Another alternative is simply to discard the samples falling into the fade-in/fade-out interval of the filter, reducing the record size, of course.
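The "discard the fade-in/fade-out" option amounts to keeping only the valid region of the convolution; a minimal sketch (pure Python, names mine):

```python
def fir_valid(samples, taps):
    """Run an n-tap FIR over a record and keep only outputs where every
    tap saw real data, shortening the record by n-1 samples.
    (Tap orientation only matters for asymmetric filters.)"""
    n = len(taps)
    return [
        sum(taps[j] * samples[i + j] for j in range(n))
        for i in range(len(samples) - n + 1)
    ]
```

A 200-tap filter applied this way turns an M-point record into an M-199-point one, with no made-up initial state.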

EDIT:

Quote
Averaging is another interesting case. One of the problems is that ideally you'd save the averaged data so you can scroll left/right and zoom in/out. On some oscilloscopes (again the R&S RTM3004) the averaged trace disappears if you move the trace. I second the suggestion of being able to combine acquisition modes, but at some point you'll be creating a new trace in CPU memory and using the acquisition data only to update that trace.

The question is at what waveform rate the averaged data are supposed to be recorded:

(1) At the full trigger rate (storing a moving average)?
(2) At 1/N of the trigger rate (storing only a single averaged buffer after acquiring N triggers)?

In case (1) the acquisition engine could equally well store just the triggered waveforms and let the processing engine average them.

In case (2) the averaging would need to be done in a pipeline stage of the acquisition engine.
This mode saves memory, but at the cost of a lower waveform rate.
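The two recording schemes can be sketched as follows (illustrative Python, names mine):

```python
from collections import deque

def moving_average_records(waveforms, n):
    """(1) One averaged record per trigger: the mean of the last n waveforms,
    updated on every trigger (full waveform rate, needs n records of state)."""
    window, out = deque(maxlen=n), []
    for wf in waveforms:
        window.append(wf)
        out.append([sum(col) / len(window) for col in zip(*window)])
    return out

def block_average_records(waveforms, n):
    """(2) One averaged record per n triggers: memory-cheap, 1/n the rate."""
    out = []
    for i in range(0, len(waveforms) - n + 1, n):
        block = waveforms[i:i + n]
        out.append([sum(col) / n for col in zip(*block)])
    return out
```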
« Last Edit: December 18, 2020, 08:38:27 pm by gf »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21236
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #257 on: December 18, 2020, 08:26:09 pm »
Skipping 1000 samples at the beginning and another 1000 at the end of a 100Mpts long record is something nobody will notice.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online gf

  • Frequent Contributor
  • **
  • Posts: 468
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #258 on: December 18, 2020, 08:50:35 pm »
Skipping 1000 samples at the beginning and another 1000 at the end of a 100Mpts long record is something nobody will notice.

Agreed, no problem for sufficiently long records, provided the buffer management does not impose incompatible record-length constraints (e.g. that all record lengths must be a power of two, or that all records must have the same fixed size, ...).
 

Online gf

  • Frequent Contributor
  • **
  • Posts: 468
  • Country: de
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #259 on: December 18, 2020, 09:05:20 pm »
Normal - Applies a downsampling filter* when sampling rate is under the normal ADC rate, otherwise, does not filter
Decimate - Applies only decimation, i.e. dropping samples when sampling rate is under the normal ADC rate,  otherwise identical to

I would actually tend to use the name "Normal" for the non-filtered mode.
[ Sure, names signify nothing; it is just my personal preference. I wonder how others think about it. ]
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4373
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #260 on: December 18, 2020, 09:57:56 pm »
Digital filtering would be post-acquisition, using a customisable FIR filter.  There would necessarily be a dead zone at the start and end of each trace, depending on the number of taps in the filter.  So in my example from before, with 200 taps and a 1232-point waveform (presently what is used at 100 ns/div), about half of the waveform is lost to the filter tap count.  I can't see any feasible way to avoid this: the waveforms are not correlated in time and no data is available outside their window.  The window of data could be increased, but you may as well just go up a timebase if you wanted to do that.  I think it's fair to say, though, that at short timebases you don't generally need long filters, so this will be much less of an issue in the real world.

You are right that averaging may well be considered a subclass of post-acquisition filtering, so it doesn't make much sense to have it as an acquisition mode.  Although it would be possible to do it during acquisition, it would probably complicate the acquisition engine compared to just reading values out into a FIFO and summing them in a filter pipeline (although there would be a lot of reading and writing, so I need to think about how to make this as efficient as possible).

Also a good point on Normal vs ERES, although I think the subtle difference is that Normal stores 8-bit samples whereas ERES would halve the available sample depth by storing 16-bit samples.  The penalty is primarily memory, although there may also be a render-speed penalty.

Also, the buffer management supports arbitrary buffer lengths; the only restrictions are that the pre- and post-trigger lengths are multiples of 8 samples, and that the overall buffer starts on a 64-byte boundary (its size is not constrained by this) to fit into cache lines.  The records must all be the same size for now, although this is just a programming simplification; there's no strict need for it.
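Those constraints are easy to capture in a couple of helpers (a sketch of my own, assuming 1 byte per sample; the real buffer manager obviously lives in the FPGA/driver code):

```python
def round_up(value, multiple):
    """Round value up to the next multiple."""
    return -(-value // multiple) * multiple

def plan_buffer(pre_trig_samples, post_trig_samples):
    """Pre- and post-trigger lengths each rounded up to a multiple of
    8 samples; returns (pre, post, total)."""
    pre = round_up(pre_trig_samples, 8)
    post = round_up(post_trig_samples, 8)
    return pre, post, pre + post

def align_base(addr):
    """Align a buffer base address to a 64-byte cache line."""
    return round_up(addr, 64)
```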
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 21637
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #261 on: December 18, 2020, 10:11:25 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Avid Rabid Hobbyist
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21236
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #262 on: December 18, 2020, 10:31:24 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Those are LeCroy-isms which are more geared towards signal analysis. On other oscilloscopes, averaged/high-res traces replace the channel's trace while retaining the channel's color. There are pros and cons. The pro is that you see the 'original' trace and can have multiple representations of the same signal (in different math channels); the con is that a math trace usually has a different color which does not resemble the original trace, and you may have a trace on the screen which is not relevant at all.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 21637
  • Country: nz
  • Taupaki Technologies Ltd. NZ Siglent Distributor
    • Taupaki Technologies Ltd.
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #263 on: December 18, 2020, 10:33:32 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Those are LeCroy-isms which are more geared towards signal analysis. On other oscilloscopes, averaged/high-res traces replace the channel's trace while retaining the channel's color. There are pros and cons. The pro is that you see the 'original' trace and can have multiple representations of the same signal (in different math channels); the con is that a math trace usually has a different color which does not resemble the original trace, and you may have a trace on the screen which is not relevant at all.
None of which is an issue if you can assign any color to a trace.
Avid Rabid Hobbyist
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4373
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #264 on: December 18, 2020, 10:43:17 pm »
Averaging and ERES could both be done before or after the fact,  but implementing both seems a bit silly and logic-expensive. 
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21236
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #265 on: December 18, 2020, 10:59:31 pm »
Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.
Those are LeCroy-isms which are more geared towards signal analysis. On other oscilloscopes, averaged/high-res traces replace the channel's trace while retaining the channel's color. There are pros and cons. The pro is that you see the 'original' trace and can have multiple representations of the same signal (in different math channels); the con is that a math trace usually has a different color which does not resemble the original trace, and you may have a trace on the screen which is not relevant at all.
None of which is an issue if you can assign any color to a trace.
But it will clutter the screen and make operation less intuitive. I own a LeCroy WavePro 7300A myself, but I can't say it is a nice oscilloscope as a daily driver. Setting up averaging/high-res takes a lot of button pushes and trips through menus, while on other scopes it is a simple selection in the acquisition menu. How it is implemented under the hood is a different story (both are likely implemented as math traces), but the complexity is hidden from the user.

Averaging and ERES could both be done before or after the fact,  but implementing both seems a bit silly and logic-expensive. 
It can make sense in some cases. Remember that ERES/high-res is filtering in the frequency domain while averaging works in the time domain. But it is true that very few oscilloscopes allow using both at the same time. Besides the LeCroy (using stacked math traces), the R&S RTM3004 is the only one I know of that supports enabling both high-res and averaging at the same time.
« Last Edit: December 18, 2020, 11:03:59 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4373
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #266 on: December 18, 2020, 11:11:00 pm »
That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post-acquisition; in other words, you could choose when the filter was applied.  I think there's little good reason to support both options; the decision has to be made to support one or the other.

Enabling trace averaging and ERES at the same time sounds plausible enough, although I'd question what the user is attempting to achieve with such a move; averaging itself implements a form of ERES, just with the time correlation of a trigger.  There may be use cases, but I can't think of many.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 2709
  • Country: au
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #267 on: December 19, 2020, 03:46:46 am »
That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post-acquisition; in other words, you could choose when the filter was applied.  I think there's little good reason to support both options; the decision has to be made to support one or the other.
..getting off into the "religious wars" of scopes at that point; there are very strong reasons to do the filtering before storing the result:
It's fast and you can collect more data: fewer samples are stored per time period, increasing the possible memory depth.
Equally, there are good reasons to do it in post-processing:
You can look through the individual (or higher-sample-rate) data that would have been discarded if the filtering were done online.

This same tradeoff is present for persistence rendering, eye-diagrams, measurements, etc.

Somewhere in the middle is dumping all the raw data to a circular history buffer while also computing the display data in hardware, which in a resource-constrained system/FPGA takes away from some other characteristic of the system.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21236
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #268 on: December 19, 2020, 11:07:14 am »
That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post-acquisition; in other words, you could choose when the filter was applied.  I think there's little good reason to support both options; the decision has to be made to support one or the other.
The way I see it, time would be better spent right now on a more versatile trigger engine (which needs to be inside the FPGA) and on getting more processing power and higher memory bandwidth between the CPU and the acquisition system, if you want to write code. That will allow post-processing in software quickly, probably with better performance and in less development time than the FPGA can achieve. The biggest advantage of post-processing is that you can alter the settings and the result changes on the fly (this won't be possible for every operation, like averaging, but it will be for most). In my experience with oscilloscopes, post-processing gives the operator the highest flexibility.

Before doing anything on signal processing there should be a clear architecture on paper which answers all questions about how to deal with the signals, the operations on them (measurement, math, decoding, etc.) and how the result is displayed (rendering). It is extremely easy to take a wrong turn and get stuck (Siglent has done that). Oscilloscope firmware is among the most complex software there is to create. Starting from a flexible architecture, even if it does not offer the highest performance, is mandatory IMHO.
« Last Edit: December 19, 2020, 11:39:54 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4373
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #269 on: December 19, 2020, 01:28:36 pm »
Yes, there's definitely something to be said for that.

I have been thinking about the current acquisition engine.  It isn't fit for the task, because it naively stores samples in memory in interleaved fashion, directly as received from the ADC.  This makes reading memory out a real pain, because you have to discard unused samples or find some way to 'deinterleave' them while reading.  A better route would be to record each channel in its own buffer.  In 1ch mode there would be only one buffer and all data would be stored in it.  In 2ch mode the buffers would be strided by the waveform pitch (ch1 stored first, then ch2); the 3/4-channel modes behave similarly.

I think supporting variable waveform lengths would be a pain.  However, there may be a case for supporting dual-timebase operation, though the 'dual timebase' section might need to be stored in block-RAM FIFOs and so be limited in memory depth.

Data can then be read out in order and processed by a filter pipeline configured by the CPU, with the result written back into RAM.  The filter pipeline would be capable of performing a few basic operations on each channel (sum or multiply any pair of channels), which is possible because it can have two read pointers.  Supporting 3-4 channel operations would in principle also be possible, but considerably more complex.

The data is still going to be ordered incorrectly for the pre-trigger, so an address translator needs to start reading at the correct address, and the number of words to be read might not be a nice even multiple of 4 or 8, which poses some difficulties.

While the data written to RAM would be buffered by a number of FIFOs, a counter would keep track of the trigger position and state; this would be recorded into a separate area of memory and used to correct trigger positions.  Actually, the most difficult aspect of this is the FIFO headache: the FIFO needs a dynamically configurable input width from 64 bits down to 8 bits, the output width should be fixed at 64 bits, and a 4-bit control channel needs to be passed through at the same time with identical timing.  Changing this on the fly using the standard Xilinx IP is not possible (AFAIK), so I may have to roll my own FIFO.

And the acquisition channels need to be able to discard a variable number of samples for the decimation modes, or enable a CIC/ERES filter...
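The planar re-ordering described above can be sketched in a few lines (illustrative Python, not the FPGA implementation): turn an ADC stream interleaved across channels into one contiguous plane per channel.

```python
def deinterleave(stream, n_channels):
    """Split channel-interleaved samples into per-channel planes."""
    return [stream[ch::n_channels] for ch in range(n_channels)]

def planar_layout(stream, n_channels):
    """Concatenate the planes: all ch1 samples, then ch2, and so on."""
    planes = deinterleave(stream, n_channels)
    return [s for plane in planes for s in plane]
```

With planar storage the readout side sees each channel as one linear run, which is exactly what the filter pipeline and renderer want.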

« Last Edit: December 19, 2020, 01:34:06 pm by tom66 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21236
  • Country: nl
    • NCT Developments
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #270 on: December 19, 2020, 03:26:26 pm »
You can separate the FIFO concerns if you create chunks of data, so that the FIFO (and the storage system) always works with a fixed size (say 256 to 1024 bytes). In my design I even went a step further and used records which could contain various types of data (decimated, decoded, digital channels, different bit widths). These records (which could have different data rates!) were streamed into memory from several FIFOs. The upside is that the memory doesn't need to care which part belongs to which channel; the downside is that you have to read all the data even if you are only interested in one particular type (for example, to re-decode channel 1), and there is some overhead. But memory is cheap nowadays.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4373
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #271 on: December 19, 2020, 06:40:19 pm »
The goal would be to have the data in linear planes, so all ch1 data for a given acquisition would be in order, followed by ch2, ch3 and so on.  I'm not too worried about where individual waveform groups are, but each channel should be in a separate plane.  That way, when data is read out, it is in order (besides the need to rotate for the pre-trigger).  In theory I can then have another, say, 16-bit side channel for MSO functions, running on the ADC clock. (It would also be possible to do state analysis for the MSO function using this, although it might be difficult to line that up with the analog channels at that point.)

This is something I wanted to do a while ago, but the complexity put me off.  Now, though, I'm realising what a pain it is to deal with interleaved data when plotting it and when processing it afterwards with filters and the like.

I think one of the biggest challenges to solve is memory arbitration: given that one 64-bit AXI bus has 1.6 GB/s of peak bandwidth, I'll need to arbitrate appropriately, possibly across two ports, to make this work well and to avoid running out of bandwidth as more time is spent setting up smaller transactions.
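The "rotate for the pre-trigger" step mentioned above amounts to unrolling a circular buffer at the write pointer; a sketch (my own, illustrative only):

```python
def unroll_pretrigger(ring, write_index):
    """Return the samples of a circular capture buffer in time order.
    write_index is the next position that would have been written, so the
    oldest sample lives there and the newest just before it."""
    return ring[write_index:] + ring[:write_index]
```

In hardware this is the address translator's job: start the read burst at write_index and wrap at the buffer end.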
« Last Edit: December 19, 2020, 06:45:34 pm by tom66 »
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3137
  • Country: us
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #272 on: December 21, 2020, 08:18:49 pm »
A mux/demux (serial/parallel) conversion at any point in a DSP pipeline is cheap to do, and often both are done to optimize resource utilization in a gate-level design.

If you are downsampling, which is the usual case, you have multiple fabric cycles available for each output sample: a 2x downsample allows two ops per output sample, 4x allows four ops, etc.

The writer only has to deal with the addressing once; the reader has to do it every time, so for efficiency the data need to be in reader-optimal order.

Have Fun!
Reg
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4373
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #273 on: December 22, 2020, 03:45:59 pm »
Agreed.  It's just a matter of getting that to work fast.  There are plenty of ways to do this slowly; it's much more difficult when you need to process over a billion samples per second.

 

Offline dougg

  • Regular Contributor
  • *
  • Posts: 54
Re: A High-Performance Open Source Oscilloscope: development log & future ideas
« Reply #274 on: December 25, 2020, 08:36:36 pm »
Just in case this link is useful and you don't already know about it:
https://www.ti.com/tool/TIDA-00826
which is titled "50-Ohm 2-GHz Oscilloscope Front-end Reference Design". From the associated pdf the design definitely looks non-trivial.
 
The following users thanked this post: tom66, JohnG

