Author Topic: How much time to wait until I am sure I have sampled the right value  (Read 2907 times)


Offline GeorgiTopic starter

  • Contributor
  • Posts: 19
  • Country: de
Hello everyone, I am new to the forum, but I have been following Dave's videos for a while. I am currently working as a student for a company in Germany, testing telemetry systems. I measure the frequency response of the digital and the analog output and then process the data into a graph with the percentage error on the y-axis and the frequency on the x-axis: the same as a Bode plot, but using percent instead of dB.

I use a function generator to sweep the frequency from 1 to 1000 Hz in 100 seconds. I measure the output (first analog, then digital) and process the data with special software developed by the company. I compute the percentage error as follows (I tried several other methods; this one just happened to work better): I take the measured data and slide a time window over the output, computing the maximum every couple of seconds. Then I compute the percentage error with the help of a couple of virtual channels and plot everything over the frequency axis.

When I use this method on the analog output of the system, I get the results I expect: at 1 kHz the value is about 0.7% below nominal, with about 0.05% tolerance. The problem comes when I measure the digital output. (The way I measure it is this: I do 5 measurements and plot them in one graph, so I can see the spread.) According to the engineers who developed the ADC, the digital output is supposed to have zero tolerance; the five measurements should be exactly the same. The problem is that they aren't: we see a 0.05% spread at about 200 Hz.

My colleagues and I think the system does not get enough samples at any given frequency because the sweep is too fast, so we cannot be sure we have sampled a value close enough to the real maximum of the signal. We are thinking of doing a stepped sweep instead (for example, stepping every 10 Hz and staying at each frequency for a fixed time).

My question is this: is there a mathematical way to calculate how long I have to stay at a given frequency (when I sweep) to be sure I have sampled as many distinct values as possible? I know I will never catch the exact maximum, but I want to be within a specific tolerance (0.1%). I know this cannot be done at frequencies that are 2, 3, or 5 times smaller than the sampling frequency, because then we always sample at the same phases, but at frequencies that are, say, 2.3 times smaller than the sampling frequency we need to sample more than one period to cover all the possible values.

Thanks in advance, and if you have any problems understanding what I mean, just ask again; I will be more than happy to explain.
Georgi
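
For reference, here is a minimal sketch of the kind of sliding-window maximum / percentage-error computation described above, in Python with NumPy. It is not the company's software; the window length, nominal amplitude, and use of non-overlapping windows are simplifying assumptions (the 38.4 kHz sample rate is stated later in the thread):

Code: [Select]
import numpy as np

fs = 38_400               # sample rate in Hz (38.4 kHz, stated later in the thread)
f0, f1, T = 1, 1000, 100  # sweep 1 Hz -> 1000 Hz in 100 s
t = np.arange(0, T, 1 / fs)

# Linear chirp: instantaneous frequency f0 + (f1 - f0) * t / T
x = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T)))

nominal = 1.0             # nominal peak amplitude (assumed)
win = int(0.5 * fs)       # 0.5 s window (assumed; non-overlapping for simplicity)

# Maximum of |x| in each window
n_win = len(x) // win
peaks = np.abs(x[: n_win * win]).reshape(n_win, win).max(axis=1)

# Percentage error of each measured peak vs. the nominal value
pct_err = 100 * (peaks - nominal) / nominal

# Instantaneous frequency at the centre of each window (x-axis of the plot)
t_mid = (np.arange(n_win) + 0.5) * win / fs
f_mid = f0 + (f1 - f0) * t_mid / T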
 

Offline ignator

  • Regular Contributor
  • *
  • Posts: 206
  • Country: us
Why are you sweeping this?

Just what are you measuring the output of? If you're sweeping too fast, then slow down, or automate the system to step the frequency, sample the output, then step again until the full bandwidth is covered (see the sketch below). Don't assume that the way someone told you to do it is the correct method, as you are finding issues.
Does this telemetry have digitization sampling errors?
Is it your intent to measure this error between frequency change and measured telemetry transmission? Are you trying to determine settling time?
I'm confused about what you are trying to do, but you're describing a measurement issue.
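
A stepped sweep is also easy to automate. A minimal sketch, where set_frequency() and read_output() are hypothetical stand-ins for whatever the generator and telemetry interfaces actually provide:

Code: [Select]
import time

def stepped_sweep(set_frequency, read_output, f_start=1.0, f_stop=1000.0,
                  f_step=10.0, dwell_s=0.5):
    """Step through the band, dwell at each frequency, record one reading.
    set_frequency/read_output are caller-supplied callables (hypothetical
    stand-ins for the real instrument interfaces)."""
    results = []
    f = f_start
    while f <= f_stop:
        set_frequency(f)
        time.sleep(dwell_s)   # let the system settle at this frequency
        results.append((f, read_output()))
        f += f_step
    return results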
 

Offline GeorgiTopic starter

  • Contributor
  • Posts: 19
  • Country: de
This measurement system measures torque. The sweep is done to simulate a torque that is not constant in time and whose frequency also changes over time. The system has to be tested for how it reacts to different torque frequencies and what error it produces.

The engineers who designed the system claim the digital output will have 0% tolerance, meaning that when I measure the system 5 times I will get the same result each time and there will be no oscillation in the error graph. What I actually measure is not 0% tolerance but 0.05% at 200 Hz. We want to verify that our measurements are right before jumping to any other conclusions. That is why I am asking whether there is a mathematical method to calculate exactly how much slower I have to sweep in order to capture as many distinct sample values as possible at each frequency. My sampling frequency is 38.4 kHz.
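
For the ideal noiseless case there is a way to bound this. At signal frequency f and sample rate fs, the samples land on the sine at phases 2*pi*f*k/fs (mod 2*pi). If the largest gap between adjacent sampled phases is D, the worst case is that the true peak falls in the middle of that gap, so the closest sample is off by D/2 in phase and the measured maximum reads low by a factor cos(D/2). For a 0.1% tolerance you need 1 - cos(D/2) <= 0.001, i.e. D <= 2*arccos(0.999), which works out to about 70 well-spread phases around the cycle. A minimal sketch of that calculation (an idealized model, not the company software):

Code: [Select]
import numpy as np

def worst_case_peak_error(f, fs, n):
    """Worst-case relative error of the sampled maximum vs. the true peak of
    a unit sine at frequency f, sampled at rate fs for n samples, over all
    possible phase offsets of the sine."""
    # Phases (mod 2*pi) at which the n samples land on the sine
    theta = np.sort((2 * np.pi * f * np.arange(n) / fs) % (2 * np.pi))
    # Largest gap between adjacent sampled phases, including the wrap-around
    gaps = np.diff(np.concatenate([theta, [theta[0] + 2 * np.pi]]))
    # Worst case: the true peak sits mid-gap, so the closest sample is off by
    # gaps.max()/2 in phase and reads low by a factor cos(gaps.max()/2)
    return 1 - np.cos(gaps.max() / 2)

def dwell_time_for(f, fs=38_400, tol=1e-3, max_seconds=10):
    """Smallest dwell time (s) at frequency f so the sampled maximum is
    within tol of the true peak (ideal noiseless sine assumed)."""
    n = int(np.ceil(fs / f))          # start with one signal period
    while n < max_seconds * fs:
        if worst_case_peak_error(f, fs, n) <= tol:
            return n / fs
        n = int(n * 1.25) + 1
    return None  # phases repeat (fs/f a small integer): dwelling never helps

print(dwell_time_for(201.3))     # ~5 ms: one period already fills the phases
print(dwell_time_for(19_200.0))  # f = fs/2: only two phases ever occur -> None

As you suspected, when fs/f is a small integer (e.g. f = fs/2) the phases repeat and no amount of dwell time helps. Note that at exactly 200 Hz your ratio is 38400/200 = 192, an integer, so only 192 distinct phases ever occur; in this ideal model that is already dense enough for 0.1% (worst case about 0.013%, reached after one 5 ms period), which hints that the 0.05% spread you see is not purely a peak-sampling problem.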
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6720
  • Country: nl
Please use more line breaks/paragraphs. Install an English dictionary in your browser and let it spell-check your posts.

What's the maximum resolution of the measurement system? Does the ADC have an anti-aliasing/low-pass filter?
 

Offline GeorgiTopic starter

  • Contributor
  • Posts: 19
  • Country: de
Sorry about that; I was typing fast and didn't pay much attention. The ADC has 16 bits of resolution. Yes, it does have an anti-aliasing/low-pass filter, but it is switched off. I just need to know if there is a way to calculate how long I need to stay at a specific frequency to sample all the information possible. Right now I do not need to look for the cause of these results.
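
As a back-of-the-envelope scale check (assuming the 0.05% spread is expressed relative to full scale): one LSB of a 16-bit ADC is only about 0.0015% of full scale, so a 0.05% spread corresponds to roughly 33 LSBs, which points away from pure quantization:

Code: [Select]
lsb_pct = 100 / 2**16     # one LSB of a 16-bit ADC, as a percent of full scale
print(lsb_pct)            # ~0.00153 %
print(0.05 / lsb_pct)     # the observed 0.05 % spread is ~33 LSBs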
 

Offline ignator

  • Regular Contributor
  • *
  • Posts: 206
  • Country: us
Is your function generator, which is doing a 'smooth' sweep, actually making small steps? That would induce an acceleration component in the torque signal.
If you are supposed to have 5 measurements that all agree, what is the sample rate? 1000 Hz in 100 seconds is 10 Hz/s, so unless all 5 runs are aligned to within a few milliseconds you're not going to get the same reading. And you can't expect zero quantization error or zero noise on the LSBs of the A/D.

I recall back in school an MSEE student doing his master's work on spectrum analyzers, and that sweeping a band-pass filter induces measurement error. Something to do with Gaussian math, but I do not recall the details at 30+ years down range.
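
The half-remembered result is probably the standard swept-analyzer rule of thumb: a near-Gaussian resolution filter of bandwidth RBW needs a sweep time of roughly T >= k * span / RBW^2, with k around 2 to 3. Treating the peak detector's effective bandwidth as an 'RBW' is only an analogy here, but it gives a feel for the numbers:

Code: [Select]
# Sweep-time rule of thumb for a swept filter/analyzer: T >= k * span / RBW**2
# (k ~ 2-3 for near-Gaussian filters; the bandwidths below are hypothetical)
span = 1000 - 1           # Hz, the 1..1000 Hz sweep
k = 2.5
for rbw in (1, 10, 100):  # hypothetical effective bandwidths in Hz
    t_min = k * span / rbw**2
    print(f"RBW = {rbw:>3} Hz -> minimum sweep time ~ {t_min:.1f} s")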
 

Offline atferrari

  • Frequent Contributor
  • **
  • Posts: 314
  • Country: ar
Quote from: GeorgiTopic starter
[opening post quoted in full; see the top of the thread]

Paragraphs. Could you?
Agustín Tomás
In theory, there is no difference between theory and practice. In practice, however, there is.
 

