A High-Performance Open Source Oscilloscope: development log & future ideas
nctnico:

--- Quote from: tautech on December 18, 2020, 10:33:32 pm ---
--- Quote from: nctnico on December 18, 2020, 10:31:24 pm ---
--- Quote from: tautech on December 18, 2020, 10:11:25 pm ---Tom, maybe you forget Averaging and ERES are both Math operations and as such are now in the Math menu in SDS5000X.

--- End quote ---
Those are LeCroy-isms which are more geared towards signal analysis. On other oscilloscopes, averaged / high-res traces replace the channel's trace while retaining the channel's color. There are pros and cons: the pro is that you see the 'original' trace and can have multiple representations of the same signal (in different math channels); the con is that a math trace usually has a different color which does not resemble the original trace, and you have a trace on the screen which may not be relevant at all.

--- End quote ---
None of which is an issue if you can assign any color to a trace.

--- End quote ---
But it will clutter the screen and make operation less intuitive. I own a LeCroy WavePro 7300A myself, but I can't say it is a nice oscilloscope as a daily driver: setting up averaging / high-res takes a lot of button pushes and menu diving, while on other scopes it is a simple selection in the acquisition menu. How it is implemented under the hood is a different story (both are likely implemented as math traces), but the complexity is hidden from the user.


--- Quote from: tom66 on December 18, 2020, 10:43:17 pm ---Averaging and ERES could both be done before or after the fact,  but implementing both seems a bit silly and logic-expensive. 

--- End quote ---
It can make sense in some cases. Remember that ERES / high-res is a low-pass filter applied within a single acquisition, while averaging combines successive acquisitions in the time domain. But it is true that very few oscilloscopes allow using both at the same time. Besides the LeCroy (using stacked math traces), the R&S RTM3004 is the only one I know of that supports enabling high-res and averaging simultaneously.
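The distinction between the two techniques can be sketched in plain NumPy (illustrative code only, not from any scope's firmware): ERES trades bandwidth for noise within one shot, while averaging needs a repetitive, triggered signal but preserves bandwidth.

```python
import numpy as np

def eres(trace, extra_bits):
    """ERES / high-res: boxcar-average adjacent samples *within* one
    acquisition. Each extra bit of resolution costs 4x the samples,
    so bandwidth drops as resolution rises."""
    n = 4 ** extra_bits
    return np.convolve(trace, np.ones(n) / n, mode="same")

def trace_average(acquisitions):
    """Averaging: combine corresponding samples *across* triggered
    acquisitions. Bandwidth is preserved, but the signal must be
    repetitive and the trigger stable."""
    return np.mean(np.asarray(acquisitions, dtype=float), axis=0)

# Toy demo: a noisy repetitive edge. ERES smooths one shot (smearing the
# edge slightly); averaging stacks 64 shots and keeps the edge sharp.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0], 500)                      # one step edge
shots = [clean + rng.normal(0, 0.2, clean.size) for _ in range(64)]
single_hr = eres(shots[0], extra_bits=1)                # 1 shot, smoothed
stacked = trace_average(shots)                          # 64 shots, averaged
```

Stacking the two (as the LeCroy math-trace approach allows) would simply mean feeding `stacked` back through `eres`.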
tom66:
That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post-acquisition; in other words, you could choose when the filter was applied.  I think there's little good reason to support both options; the decision has to be made to support one or the other.

Enabling trace averaging and ERES at the same time sounds plausible enough, although I'd question what the user was attempting to achieve with such a move - averaging itself implements a form of ERES, just with the time correlation of a trigger.  There may be use cases, but I can't think of many.
Someone:

--- Quote from: tom66 on December 18, 2020, 11:11:00 pm ---That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post- acquisition, in other words you could choose when the filter was applied.  I think there's little good reason to support both options, the decision has to be made to support one or the other.
--- End quote ---
..getting off into the "religious wars" of scopes at that point, there are very strong reasons to do the filtering before storing the result:
It's fast, so you can collect more data; it stores fewer samples per unit time, increasing the possible memory depth.
Equally, there are good reasons to do it as post-processing:
You can look through the individual (or higher-sample-rate) data that would have been discarded had the filtering been done online.

This same tradeoff is present for persistence rendering, eye-diagrams, measurements, etc.

Somewhere in the middle is dumping all the raw data to a circular history buffer while also computing the display data in hardware, which in a resource-constrained system/FPGA takes away from some other characteristic of the system.
nctnico:

--- Quote from: tom66 on December 18, 2020, 11:11:00 pm ---That perhaps wasn't clear.  I was referring to allowing averaging and ERES to be done both pre- and post- acquisition, in other words you could choose when the filter was applied.  I think there's little good reason to support both options, the decision has to be made to support one or the other.

--- End quote ---
The way I see it, if you want to write code right now, time would be better spent on a more versatile trigger engine (that needs to be inside the FPGA) and on getting more processing power and higher memory bandwidth between the CPU and the acquisition system. That will allow post-processing to be done in software quickly, probably with better performance and in less development time than the FPGA can achieve. The biggest advantage of post-processing is that you can alter the settings and the result changes on the fly (this won't be possible for every operation, averaging for example, but it will be for most). From my experience with oscilloscopes, post-processing gives the operator the greatest flexibility.
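The "alter the settings and the result changes on the fly" idea can be modelled as a chain of software stages re-run over a stored raw record (a hypothetical sketch; `eres_stage`, `invert_stage` and `process` are illustrative names, not from any real firmware):

```python
import numpy as np

def eres_stage(extra_bits):
    """Boxcar low-pass giving roughly extra_bits of extra resolution
    (4x samples per bit). Hypothetical stage."""
    n = 4 ** extra_bits
    return lambda x: np.convolve(x, np.ones(n) / n, mode="same")

def invert_stage():
    """Trivial second stage, just to show chaining."""
    return lambda x: -x

def process(raw, stages):
    """Re-run the whole chain over the stored raw record; changing any
    setting just means calling this again -- no re-acquisition needed."""
    out = np.asarray(raw, dtype=float)
    for stage in stages:
        out = stage(out)
    return out

raw = np.random.default_rng(1).normal(size=4096)        # stands in for one record
trace = process(raw, [eres_stage(1)])                   # user picks 1 extra bit
trace2 = process(raw, [eres_stage(2), invert_stage()])  # new settings, same data
```

As the post notes, this works for any stage that depends only on the stored record; averaging across acquisitions would still need multiple records to be kept.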

Before doing anything on signal processing, there should be a clear architecture on paper which answers all the questions on how to deal with the signals, the operations on them (measurement, math, decoding, etc.) and how they are displayed (rendering). It is extremely easy to take a wrong turn and get stuck (Siglent has done exactly that). Oscilloscope firmware is one of the most complex pieces of software to create; starting from a flexible architecture, even one which may not offer the highest performance, is mandatory IMHO.
tom66:
Yes, there's definitely something to be said for that.

I have been thinking about the current acquisition engine.  It isn't fit for the task because it naively stores samples in memory in interleaved fashion, directly as received from the ADC.  This makes reading memory out a real pain, because you have to discard unused samples or find some way to 'deinterlace' them while reading.  A better route would be to record each channel in its own buffer.  In 1ch mode there would be only one buffer, and all data would be stored in that.  In 2ch mode the buffers would be strided by the waveform pitch (so ch1 stored first, then ch2); 3/4-channel modes would behave similarly.
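The two layouts can be modelled in a few lines (illustrative Python, not the actual HDL; `deinterleave` and `channel_addr` are made-up names):

```python
import numpy as np

def deinterleave(stream, n_ch):
    """The current scheme: samples arrive ch0,ch1,ch0,ch1,... and must
    be unpicked on every read-out."""
    return [stream[c::n_ch] for c in range(n_ch)]

def channel_addr(ch, sample, pitch):
    """The proposed scheme: each channel owns a contiguous region strided
    by the waveform pitch, so reading one channel is a linear scan."""
    return ch * pitch + sample

stream = np.arange(16)                 # pretend 2ch interleaved ADC data
ch0, ch1 = deinterleave(stream, 2)

pitch = 8                              # 2ch mode: half the memory per channel
mem = np.empty(16, dtype=stream.dtype)
mem[channel_addr(0, 0, pitch):channel_addr(0, 8, pitch)] = ch0
mem[channel_addr(1, 0, pitch):channel_addr(1, 8, pitch)] = ch1
```

In 1ch mode the pitch is simply the whole buffer; 3/4-channel modes shrink the pitch accordingly.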

I think supporting variable waveform lengths would be a pain; however, there may be a case for supporting dual-timebase operation - though the 'dual timebase' section might need to be stored in block RAM FIFOs and so be limited in memory depth.

Data can then be read out in order and processed by a filter pipeline configured by the CPU, and the result written back into RAM.  The filter pipeline would be capable of performing a few basic operations on each channel (sum or multiply any pair of channels), which is possible because it can have two read pointers.  Supporting 3-4 channel operations would also be possible *in principle*, but considerably more complex.
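The two-read-pointer pass might behave something like this software model (a sketch under the layout above; only the two-operand sum/multiply cases described are modelled):

```python
import numpy as np

def pair_op(mem, pitch, ch_a, ch_b, length, op):
    """Model of the proposed filter pass: two read pointers walk two
    per-channel buffers while one write pointer would store the result.
    'op' is "sum" or "mul", the pair operations mentioned in the post."""
    a = mem[ch_a * pitch : ch_a * pitch + length]
    b = mem[ch_b * pitch : ch_b * pitch + length]
    return a + b if op == "sum" else a * b

pitch = 4                                                   # samples per channel
mem = np.array([1, 2, 3, 4, 10, 20, 30, 40], dtype=float)   # ch0 | ch1
ch_sum = pair_op(mem, pitch, 0, 1, 4, "sum")
```

A 3-4 operand version would need extra read pointers contending for the same memory port, which is where the "considerably more complex" comes from.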

The data is still going to be ordered incorrectly for the pre-trigger, so an address translator needs to start reading at the correct address, and the number of words to be read might not be a nice even multiple of 4 or 8, which poses some difficulties.
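Both problems have a compact software model (hypothetical helpers, not the actual address translator): the circular-buffer wrap, and widening an unaligned read window out to whole bursts.

```python
def readout_order(depth, oldest, n):
    """Acquisition memory is a circular buffer: after the trigger stops
    the writer, the oldest pre-trigger sample sits mid-buffer at 'oldest'.
    The address translator reads n samples in time order, wrapping
    modulo the buffer depth."""
    return [(oldest + i) % depth for i in range(n)]

def aligned_window(start, n, burst=8):
    """When the word count isn't a multiple of the burst size, widen the
    window to whole bursts; the extra head/tail words are fetched anyway
    and discarded downstream. Returns (first_addr, word_count)."""
    first = (start // burst) * burst
    last = ((start + n + burst - 1) // burst) * burst
    return first, last - first

# a 4-sample read that wraps around the end of an 8-deep buffer
addrs = readout_order(depth=8, oldest=6, n=4)
```

Over-fetching and discarding is one way to handle the non-multiple-of-4/8 case; the alternative is byte-enable masking on the memory interface.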

While the data written to RAM would be buffered by a number of FIFOs, a counter would keep track of the trigger position and state; this would be recorded into a separate area of memory and used to correct trigger positions.   Actually, the most difficult aspect of this is the FIFO headache: the FIFO needs a dynamically configurable input width from 64 bits down to 8 bits while the output width stays fixed at 64 bits, and a 4-bit control channel needs to be passed through at the same time with identical timing.  Changing this on the fly using traditional Xilinx IP is not possible (AFAIK), so I may have to roll my own FIFO.
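The width-adapting part of that roll-your-own FIFO can be modelled as a gearbox that packs variable-width input words into fixed 64-bit output words (a toy model; all names are illustrative, and the 4-bit control sideband is omitted since in hardware it would simply travel alongside each word with the same timing):

```python
class WidthAdaptingFifo:
    """Toy model: input words of 8, 16, 32 or 64 bits are packed
    LSB-first into fixed 64-bit output words."""

    def __init__(self, in_width):
        assert in_width in (8, 16, 32, 64)
        self.in_width = in_width
        self.acc = 0          # partial output word being assembled
        self.bits = 0         # number of valid bits in acc
        self.out = []         # completed 64-bit words

    def push(self, word):
        mask = (1 << self.in_width) - 1
        self.acc |= (word & mask) << self.bits
        self.bits += self.in_width
        if self.bits == 64:   # the allowed widths divide 64, so we land exactly
            self.out.append(self.acc)
            self.acc, self.bits = 0, 0

# 8-bit input mode: eight pushes produce one 64-bit word
f = WidthAdaptingFifo(8)
for b in range(8):
    f.push(b)
```

Because the supported widths all divide 64 the accumulator always fills exactly, which is what makes a hardware version tractable; arbitrary widths would need a true barrel-shifter gearbox.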

And the acquisition channels need to be able to discard a variable number of samples for decimation modes, or enable a CIC/ERES filter ...
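For reference, a CIC decimator is attractive here precisely because it needs only adders and delays; a behavioural sketch (not the actual FPGA design) in NumPy:

```python
import numpy as np

def cic_decimate(x, r, stages=1):
    """Behavioural CIC decimator: 'stages' integrators run at the input
    rate, the stream is decimated by r, then 'stages' combs run at the
    output rate. With stages=1 the result is exactly a boxcar sum of r
    input samples -- the cheap ERES/decimation filter. DC gain is
    r**stages."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(stages):
        y = np.cumsum(y)              # integrator (input rate)
    y = y[r - 1 :: r]                 # decimate by r
    for _ in range(stages):
        y = np.diff(y, prepend=0)     # comb (output rate)
    return y

# DC input of amplitude 1: steady-state output is r**stages
out = cic_decimate(np.ones(16), r=4)
```

In hardware the integrators would use wrap-around two's-complement arithmetic with enough guard bits for the r**stages growth; plain discard-decimation is just the degenerate case of taking every r-th sample with no filter at all.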
