
Siglent SDS6000A DSO's 500MHz-2GHz


Performa01:

--- Quote from: bdunham7 on November 01, 2021, 02:29:26 pm ---
--- Quote from: Performa01 on November 01, 2021, 09:25:50 am ---The best feature of ESR is that it can be turned off.

--- End quote ---

Hardly a ringing endorsement...

--- End quote ---
It has to be the best feature, if only because I have requested it.
And the reason was … see the cons in my previous posting.


--- Quote from: bdunham7 on November 01, 2021, 02:29:26 pm ---I certainly don't want to dismiss the efforts of some very talented people at LeCroy, as I'm sure this 'feature' is of some use and I certainly don't know enough technically to pass judgement on it.  It's the marketing angle where I'm sure the sales department was giddy with excitement being able to write "10Gsa/s blah blah".  That's an area I know enough to pass judgement on, b/t/w.

--- End quote ---
I don’t know exactly how LeCroy advertise their gear, but I do know what Siglent’s strategy outside of China is.

Have a look at the international SDS6000A instruments – do you see 10 GSa/s printed on them anywhere?
Have a look at the company webpage. There it says “5 GSa/s (10 GSa/s ESR) per channel” over and over again, clearly hinting at a genuine sample rate of 5 GSa/s. It is also clearly stated what ESR is – some form of 2x interpolation. So nobody gets fooled.
Have a look at the datasheet. It’s the same as stated above.

Sorry, I cannot see what’s wrong with the marketing. Should the ESR feature be concealed?



--- Quote from: bdunham7 on November 01, 2021, 02:29:26 pm ---As for what it does, according to your explanation, it appears that without ESR the scope only performs the sinx/x interpolation on the screen display (and to adjust the trigger point on the fly) and not the whole capture?  So then ESR is simply using sinx/x to generate points in between the actual captured points so that the measurements can be done in the way they normally are--not using sinx/x interpolation--on a greater number of points.  OK.

--- End quote ---
ESR has nothing to do with the general post-processing strategies, except that it doubles the number of data points regardless of anything else.

It is only logical that there will be neither linear (“x”) interpolation nor a sin(x)/x reconstruction if there are already enough samples to fill the screen.

So for the display, there will be a 1:1 sample mapping at a certain timebase, e.g. 20 ns/div.
Interpolation or reconstruction is used at faster timebases – where the number of samples becomes smaller than the number of horizontal screen pixels.
Decimation or agglomeration is used at slower timebases.

You seem to suggest that this is somehow awkward, suboptimal or at least unusual? Just think about it:

In an extreme situation, you use the fastest timebase, which is 100 ps/div. At a sample rate of 5 GSa/s, you only get one sample per two divisions, that is five samples total. That means the interpolation or reconstruction has to provide some 1200 additional values, i.e. multiplying the initial number of samples by 240. So there are situations where you need to create that much additional data. Now what does that mean for the memory consumption?
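
Putting that arithmetic into code (a minimal sketch that just reproduces the numbers above; the 10-division grid and the roughly 1200 display columns are taken from this example, not from any official spec):

```python
# Back-of-the-envelope check of the example above: real samples on screen
# at the fastest timebase vs. the values the reconstruction has to add.
sample_rate = 5e9        # 5 GSa/s real ADC samples
timebase = 100e-12       # 100 ps/div, fastest timebase in the example
divisions = 10           # horizontal divisions (assumed for this example)
display_points = 1200    # display columns (assumed from the numbers above)

window = timebase * divisions               # 1 ns of signal on screen
real_samples = sample_rate * window         # 5 samples in the whole window
expansion = display_points / real_samples   # factor of 240
print(real_samples, display_points - real_samples, expansion)  # 5.0 1195.0 240.0
```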

Even ESR, which only doubles the number of samples, already requires twice the memory for that. You might say that for enhancing measurements on long records we do not need to multiply the data by 240 – but what factor do we use then? A resolution of 1 ps requires a 200-fold expansion of the sample data – our scope then has to have either 200 times the memory (= 200 Gpts), or we leave the memory as it is and can only advertise it as “2.5 Mpts memory length.”

Even if we make do with just 10 ps, it is still a factor of 20.
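
The expansion factors above follow directly from the 200 ps spacing of a 5 GSa/s acquisition; a quick sketch of the same calculation:

```python
# Expansion factor needed to reach a given time resolution by interpolation,
# starting from real 5 GSa/s samples spaced 200 ps apart.
native_interval = 1 / 5e9                  # 200 ps between real samples

for target in (1e-12, 10e-12, 100e-12):    # 1 ps, 10 ps, 100 ps targets
    factor = native_interval / target
    print(f"{target * 1e12:4.0f} ps resolution -> {factor:4.0f}x the data (and memory)")
# 1 ps -> 200x, 10 ps -> 20x, 100 ps -> 2x (the latter is what ESR does)
```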

So it’s easy to criticize, but you should also consider the consequences.



--- Quote from: bdunham7 on November 01, 2021, 02:29:26 pm ---However, I can't quite reconcile that with the photo excerpt that I posted from LeCroy's paper.  More samples means more Gibbs ears?  As I commented there, I fail to see the improvement in that case, and since the screen display is already being processed by sinx/x, what are they actually doing?

--- End quote ---
Yes, I’m with you here.

As my screenshots demonstrate, there is no difference in the Y-t view – at least not for a rather benign signal with ~200 ps rise time. I do not have a 30-40 ps signal source, maybe I should get one and see what happens then (most likely nothing unexpected).

Anyway, you can be sure that you’ll hardly ever see a whitepaper from Siglent with questionable screenshots that create more questions than answers.

bdunham7:

--- Quote from: Performa01 on November 01, 2021, 03:56:27 pm ---Sorry, I cannot see what’s wrong with the marketing. Should the ESR feature be concealed?
So it’s easy to criticize, but you should also consider the consequences.

--- End quote ---

My only 'criticism' re Siglent is of the marketing, and IMO the way it is stated makes it too easy to confuse with ETS.  I would not have listed it as a headline feature like that. 

As for the technical issues, I understand the tradeoffs in making measurements.  You could generate a huge number of interpolated points, which would use too much memory, or (in some cases--like pulse width or rise time) just do the interpolation on-the-fly locally where it is needed for each measurement, just like is done with the trigger interpolation.  That would probably use too much processor power and I have no good ideas on how to actually implement that, so there are no technical criticisms from me regarding this feature.

And I'm just curious--why stop at 10GSa/s?  If you have a short capture, why not interpolate away until the memory is fully used, then do measurements?  You'd be able to measure pulsewidth as nicely as the trigger interpolation is able to lay successive traces right on top of one another.

Performa01:

--- Quote from: bdunham7 on November 01, 2021, 05:04:58 pm ---My only 'criticism' re Siglent is of the marketing, and IMO the way it is stated makes it too easy to confuse with ETS.  I would not have listed it as a headline feature like that. 

--- End quote ---
Well, it is the proper description of the acquisition system, listed together with other key features. ESR in parentheses, nothing that sticks out in any way, nothing like a bigger or bolder font.

As for the possible confusion of ESR with ETS … if we cannot rely on EEs being able to properly understand what they read, then we indeed have some more problems. ETS has largely gone out of fashion anyway.

Overall, we could criticize pretty much all manufacturers, because there will always be some room for confusion and/or unjustified expectations from the marketing material. At least I firmly believe that anyone can get honest and accurate information from the Siglent material, as long as they read carefully and are capable of understanding what they’ve just read. I’m aware that we cannot generally presume that, though…



--- Quote from: bdunham7 on November 01, 2021, 05:04:58 pm ---As for the technical issues, I understand the tradeoffs in making measurements.  You could generate a huge number of interpolated points, which would use too much memory, or (in some cases--like pulse width or rise time) just do the interpolation on-the-fly locally where it is needed for each measurement, just like is done with the trigger interpolation.  That would probably use too much processor power and I have no good ideas on how to actually implement that, so there are no technical criticisms from me regarding this feature.

--- End quote ---
Yes – and I can give you some good reasons why this is not really feasible…

Have you had a look at the screenshots in my earlier posting? Look at the counter in the measurement statistics for rise and fall times. You will notice that the count is vastly different in the main and zoom window. You might conclude that every single transition is analyzed separately, and a single record can provide a very high number of measurements.

So, interpolating all the transition regions in a record is certainly a lot more effort than fine-adjusting the (single) trigger point. Just for fun, let’s have a look at my example:

800 full cycles of a 160 MHz signal fit into a 5 µs long record. Each cycle has two transitions, so there is a total of 1600 transitions in a single record. But that’s far from the only problem. Now we need to find the 10% and 90% points for our measurements. Where are they located? We do not know until we have a valid measurement result – that’s where the cat bites its own tail. We would have to analyze the entire region around the transition. How much of it? Well, it depends on the transition time. In theory the signal could have slow transitions, up to the point where it turns into a triangle, where we need to look at almost the entire signal period.
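
Just to verify those numbers (a trivial sketch of the arithmetic above):

```python
# Edges contained in a single 5 us record of a 160 MHz signal.
record_length = 5e-6                     # 5 us record
signal_freq = 160e6                      # 160 MHz example signal

cycles = record_length * signal_freq     # 800 full cycles
transitions = 2 * cycles                 # one rising and one falling edge each
print(cycles, transitions)               # 800.0 1600.0
```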

Long story short: we can either do it in advance with a rather stupid algorithm, i.e. for the entire record and with massive hardware support so that it doesn’t cost any additional time, or we do it as part of the software-based measurements and slow down the frame rate to an extent that everyone will complain very loudly.



--- Quote from: bdunham7 on November 01, 2021, 05:04:58 pm ---And I'm just curious--why stop at 10GSa/s?  If you have a short capture, why not interpolate away until the memory is fully used, then do measurements?  You'd be able to measure pulsewidth as nicely as the trigger interpolation is able to lay successive traces right on top of one another.

--- End quote ---
A good question.

Remember that ESR usually is just there, nothing configurable at all. As mentioned earlier, it was not even switchable originally.

The main reason will be the limited bandwidth of the data bus and the memory. You cannot push the sample rate up to the sky, because you need to be able to transfer that amount of data into the acquisition memory in real time. It really doesn’t pay off to fit the bus hardware and sample memory required for a high-end system with 100 GSa/s or more, just to have some interpolation that only produces more or less redundant data from a low-end 5 GSa/s ADC in the end.
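
To put a rough number on the bandwidth argument (a sketch; one byte per sample, i.e. a typical 8-bit ADC, is an assumption here, not a statement about the scope’s internal bus format):

```python
# Sustained throughput needed to stream samples into acquisition memory
# in real time, assuming one byte per sample per channel.
bytes_per_sample = 1

for rate in (5e9, 10e9, 100e9):   # real ADC, ESR-doubled, hypothetical high end
    print(f"{rate / 1e9:5.0f} GSa/s -> {rate * bytes_per_sample / 1e9:5.0f} GB/s per channel")
# 5 GSa/s -> 5 GB/s, 10 GSa/s -> 10 GB/s, 100 GSa/s -> 100 GB/s
```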

There is a major difference compared to ETS, where you take a number of consecutive records to assemble the final representation of the input signal. The bandwidth requirements here remain the same as for standard real-time sampling; it just takes a number of acquisitions until a complete ETS record is built. That’s why ETS is only applicable to static signals.
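
For contrast, here is a minimal simulation of the ETS idea described above – several real-time acquisitions of a repetitive signal interleaved by their trigger offsets into one finer record. The 900 MHz test tone, the four acquisitions and the deterministic offsets are assumptions for illustration; real ETS relies on measured, effectively random trigger-to-sample delays:

```python
import numpy as np

# Equivalent-time sampling sketch: interleave several real-time acquisitions
# of a repetitive signal, each shifted by a different sub-sample trigger
# offset, into one record with finer effective spacing.
f_sig = 900e6                  # repetitive test signal (assumption)
fs = 5e9                       # real-time sample rate, 200 ps spacing
n_acq = 4                      # acquisitions to interleave
n_pts = 50                     # samples per acquisition

signal = lambda t: np.sin(2 * np.pi * f_sig * t)

t_all, y_all = [], []
for k in range(n_acq):
    offset = k / (n_acq * fs)              # sub-sample offset (stepped here for clarity)
    t = np.arange(n_pts) / fs + offset
    t_all.append(t)
    y_all.append(signal(t))

t_all = np.concatenate(t_all)
y_all = np.concatenate(y_all)
order = np.argsort(t_all)                  # sort samples by their true time stamps
ets_t, ets_y = t_all[order], y_all[order]
print(f"effective sample spacing: {np.diff(ets_t).mean() * 1e12:.0f} ps")  # ~50 ps
```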

edigi:
There is too much marketing blah-blah surrounding the ESR/enhanced sample rate term (and most engineers have little time to read it), so I'm trying to guess what it's really about (please correct or confirm).
Unfortunately, the core of bdunham7's very good description, which touches on the same essence, was not really discussed.

The sinc interpolation is nothing new; in fact every modern DSO I've seen has it as an option, and it is very useful when the timebase is so low that there are significantly more display points than samples available (for example 200 ps with 5 GSa/s).
For a small number of samples the micro (ARM or whatever) has sufficient power to do this (but it can also be done in an FPGA). This is about the visualisation of the signal, and it's not about doubling the sample points but about getting a "sample point" for each display pixel.
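
For illustration, this is what plain sin(x)/x (Whittaker-Shannon) reconstruction of a handful of samples looks like in code – a generic textbook sketch of the technique, not the scope's actual implementation; the 300 MHz test tone, the 10-sample record and the 1200-pixel display width are assumptions:

```python
import numpy as np

def sinc_interpolate(samples, fs, t_out):
    """Whittaker-Shannon reconstruction: evaluate the band-limited signal
    implied by `samples` (taken at rate fs) at arbitrary times t_out."""
    n = np.arange(len(samples))
    # Each output point is a sinc-weighted sum over all input samples.
    return np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_out])

# Example matching the situation above: far fewer samples than display pixels.
fs = 5e9                                    # 5 GSa/s
t_in = np.arange(10) / fs                   # 10 real samples, 2 ns of signal
samples = np.sin(2 * np.pi * 300e6 * t_in)  # a 300 MHz test tone (assumption)

t_display = np.linspace(0, t_in[-1], 1200)  # one reconstructed value per pixel
pixels = sinc_interpolate(samples, fs, t_display)
```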

Things however get radically different if we start to talk about measurements that are supposed to operate on a large number of samples (so even when the timebase is not so low), and that needs the "sinc interpolation" done in the FPGA. Because there is massively more data, this does not mean getting hugely more virtual sample points, but rather some kind of measurement result based on interpolated (although this may not be the best term here) data.
What is achieved with this is that the measurement accuracy of a 5 GSa/s DSO with it becomes comparable to that of a 10 GSa/s DSO without it.

So for visualization it has negligible impact (in fact the 10 GSa/s DSO has an edge here due to having more physical samples), and it's mainly about measurement?

Am I way off with my thinking, or roughly OK but just missing something? (I mean in the essence of the story – I know there is probably a lot more in the dirty details.)
 

Performa01:

--- Quote from: edigi on November 02, 2021, 08:22:27 am ---Unfortunately, the core of bdunham7's very good description, which touches on the same essence, was not really discussed.

--- End quote ---
Oh? Have I missed something?


--- Quote from: edigi on November 02, 2021, 08:22:27 am ---Things however get radically different if we start to talk about measurements that are supposed to operate on a large number of samples (so even when the timebase is not so low), and that needs the "sinc interpolation" done in the FPGA. Because there is massively more data, this does not mean getting hugely more virtual sample points, but rather some kind of measurement result based on interpolated (although this may not be the best term here) data.
What is achieved with this is that the measurement accuracy of a 5 GSa/s DSO with it becomes comparable to that of a 10 GSa/s DSO without it.

--- End quote ---
I'm not quite sure if I really understand your question. ESR is a technique applied very early in the digital signal processing chain. We can probably compare it with the acquisition modes Avg and ERES which process and substitute the data on their way from the ADC to the sample memory.
ESR is special in that it doubles the amount of data, so all the subsequent processing cannot tell the difference from a true ADC with twice the sample rate.
Because it has to be done in real time, all these acquisition modes need to be accomplished by hardware.

ESR provides twice the amount of data and doubles the sample rate, hence also the time resolution. Measurements gain accuracy – not only time measurements but also amplitude measurements, because peaks can be located more precisely, which in turn leads to a better estimate of the peak amplitude, for instance.

Measurements use the sample memory on records with more than some 1200 samples (more than fits on the screen). So the resolution of the data there is proportional to the sample rate. For shorter records, measurements use some secondary buffer with the sin(x)/x reconstruction of the waveform in it.
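
The doubling step itself is, in textbook terms, just band-limited 2x upsampling. Here is a minimal sketch of that generic idea (the actual filter inside the scope is not public, so this only illustrates what "twice the number of data points" means):

```python
import numpy as np
from scipy.signal import resample_poly

# Generic band-limited 2x upsampling of an acquired record: the output has
# twice as many points, as if sampled at 10 GSa/s instead of 5 GSa/s.
fs = 5e9
t = np.arange(1000) / fs
record = np.sin(2 * np.pi * 160e6 * t)            # the 160 MHz example signal

record_2x = resample_poly(record, up=2, down=1)   # polyphase FIR interpolation
print(len(record), len(record_2x))                # 1000 2000
```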


--- Quote from: edigi on November 02, 2021, 08:22:27 am ---So for visualization it has negligible impact (in fact the 10 GSa/s DSO has an edge here due to having more physical samples), and it's mainly about measurement?

Am I way off with my thinking, or roughly OK but just missing something? (I mean in the essence of the story – I know there is probably a lot more in the dirty details.)

--- End quote ---
For visualization, it should have very little to no impact: for longer records we don't need any additional samples for good visualization, and for short records it should make no difference whether we have a two-stage process – applying sin(x)/x first to merely double the number of samples and then again in the display buffer to do the rest – or whether everything is accomplished in a single run in the display buffer just once.
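
That equivalence can be checked numerically – a sketch using generic sin(x)/x interpolation (nothing scope-specific): reconstruct the display points directly from the 5 GSa/s data, or double the samples first and reconstruct from the intermediate record; away from the record edges the two paths agree up to small truncation effects:

```python
import numpy as np

def sinc_resample(x, fs, t_out):
    """Evaluate the band-limited signal behind samples x (rate fs) at times t_out."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.sinc(fs * t - n)) for t in t_out])

fs = 5e9
t_in = np.arange(200) / fs
x = np.sin(2 * np.pi * 160e6 * t_in)              # benign band-limited test signal

# Display grid placed away from the record edges to limit truncation effects.
t_disp = np.linspace(t_in[50], t_in[150], 1200)

# Single stage: reconstruct the display points directly from the 5 GSa/s data.
direct = sinc_resample(x, fs, t_disp)

# Two stages: first double the sample density (the ESR-like step), then
# reconstruct the display points from the 10 GSa/s intermediate record.
t_2x = np.arange(2 * len(x)) / (2 * fs)
x_2x = sinc_resample(x, fs, t_2x)
two_stage = sinc_resample(x_2x, 2 * fs, t_disp)

print(np.max(np.abs(direct - two_stage)))   # small; the paths agree up to truncation error
```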
 
