| Why are V(rms) measurements frequency dependent? |
| JS:
Hard to get good tracking with discrete resistors, and TC becomes a problem. You don't need precise resistors; you can calibrate the gain on each range and take notes. The readout would then need some correction, but TC is still a problem. JS |
| sourcecharge:
--- Quote from: Kleinstein on August 06, 2018, 07:55:32 pm ---The MAX4239 is not unity-gain stable. For the output, the unity-gain-stable version MAX4238 would be better. The problem with the MAX4239 at the input is that it has quite a lot of noise, especially at the higher frequencies (e.g. the AZ frequency), so it would likely not make much sense to have that much amplification. The BW calculation is also a little off - it should be a little below 650 kHz. So at 150 kHz the loop gain would be somewhere around 5, and thus significant errors could start to appear.

The capacitive problem is that the 20 M resistor will have some parasitic capacitance in parallel. To make the divider work well, there should be a parallel capacitive divider with the same ratios, so the smaller resistors would need correspondingly larger caps in parallel. If there is some parasitic 1 pF at the 20 M, the 200 k should have some 100 pF, and so on. As an additional complication, the op amp's input and the switch will also have some capacitance, which changes with the switch setting. So to make it work reasonably in all settings, the added capacitance should be large compared to the load capacitance - one has to add larger caps, including one across the largest resistor. So it may be more like 10 pF, 100 pF, 1 nF, ... The higher-frequency division ratio would be set by the caps, not the resistors.

Also, a 2 M resistor will have quite some noise of its own, which would limit the use of the smaller ranges. There is a good reason why bench DMMs usually use a 1 M resistive divider for the AC ranges; those 20 M dividers are made more for DC. For higher-frequency isolation, relays are not per se better than manual switches. For good attenuation, one usually uses more than just a single switch and avoids sending the signal to amplifiers that are not needed/used.
--- End quote ---

Ya, I calculated that wrong:
3x MAX4239 = 3313 Hz BW
2x MAX4239 = 41833 Hz BW
Basically, if I use the MAX4239, I would only use two instead of three.
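As a quick numeric sanity check of the single-stage figures above - a sketch only; the 6.5 MHz gain-bandwidth product and the gain of 10 per stage are assumptions inferred from the quoted ~650 kHz number, not measured values:

```python
# Sketch: single-stage closed-loop bandwidth and loop gain.
# GBW and gain are assumed values (6.5 MHz GBW is consistent with the
# "a little below 650 kHz" figure quoted above for a gain of 10).
GBW = 6.5e6      # gain-bandwidth product, Hz (assumed)
gain = 10.0      # closed-loop gain of one stage (assumed)
f_sig = 150e3    # frequency of interest, Hz

f_3db = GBW / gain                  # -3 dB bandwidth of one stage: 650 kHz
loop_gain = GBW / (f_sig * gain)    # excess loop gain at 150 kHz: ~4.3
print(f"f_3dB = {f_3db / 1e3:.0f} kHz, loop gain at 150 kHz = {loop_gain:.1f}")
```

With only ~4x loop gain left at 150 kHz, gain errors of tens of percent of a count start to appear, which is the point being made above.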
So, the lowest voltage measurement would be 500 uVrms with a limited BW of about 40 kHz, but it would still output at the 50 mV to 500 mV levels, which basically extends the digits on the Vrms reading of my 4.5-digit meter, if the total error from the op-amp network and the RMS-to-DC converter stays around 0.1% up to 40 kHz. At least the datasheet shows 40 kHz in a chart of error, mV DC (out) - mV rms (in), versus mVrms (in) between 50 mVrms and 500 mVrms, so if 40 kHz is the max of the op amps then it roughly fits the manufacturer's data.

I am going to test the uCurrent with a 0.2 Vpp sine-wave input through a 100 ohm resistor, from 1 kHz to 40 kHz, and use my scope to check the output for noise. This should be a 20 uVpp input to the 1st op amp, and I should be able to see a 2 mVpp output. If the noise is too high, I will test it with higher Vpp inputs. What do you think? Is there a better op amp for the job to increase the BW and decrease the noise?

So the output op amp could be the unity-gain-stable MAX4238 - ya, that makes sense; I remember reading that in the datasheet. Do you think a different op amp is better suited for the job?

Regarding the capacitance of the resistors, doesn't the capacitance in series decrease? If the 20 M resistor has 1 pF, it is still going to have 20 Mohm in parallel. This calculates to a dissipation factor of 1 or greater up to 7958 Hz; if it's 10 pF, then a DF of 1 or greater holds up to 795.8 Hz. DF = 1/(2*pi*F*Cp*Rp). I do have an LCR meter, a Mastech 5308, so I could just buy one and check the capacitance. But doesn't adding capacitance in parallel decrease the frequency at which the leg changes from being completely resistive to being partly capacitive?

The 0.01% tolerance is to limit the total error, because the RMS-to-DC converter is at 0.1%, so these seem pretty good for that purpose. I've included the datasheets, and although they are pricey, they seem to have 5 ppm/C or lower.
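Here is a small numeric sketch of that compensation and of my DF corner numbers, assuming the 1 pF parasitic across the 20 M mentioned above (an assumed value, not a measurement):

```python
import math

# Assumed: 1 pF of parasitic capacitance across the 20 M top resistor.
C_top, R_top = 1e-12, 20e6

# Equal R*C in every leg keeps the division ratio flat vs. frequency,
# so each smaller resistor needs a proportionally larger parallel cap.
comp = {}
for R in (2e6, 200e3, 20e3, 2e3):
    comp[R] = C_top * R_top / R
    print(f"{R:10.0f} ohm -> {comp[R] * 1e12:6.0f} pF")

# Frequency where DF = 1/(2*pi*f*Cp*Rp) falls to 1 for the 20 M || 1 pF
# leg, i.e. where it stops looking purely resistive:
f_df1 = 1 / (2 * math.pi * C_top * R_top)
print(f"DF = 1 at about {f_df1:.0f} Hz")
```

The 200 k leg comes out at 100 pF, matching the quoted suggestion, and the DF = 1 corner lands near 7958 Hz as calculated above.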
The last two resistors could be switched to 0.1% to save 50 bucks, but the total error would increase, as those have 15 ppm/C, so they would drift with temperature differently than the others. Although, the temperature change would only come from environmental conditions, because there would be about 22.2 M limiting the current into them.

20M USF340-20.0M-0.01%-5PPM
2M USF340-2.00M-0.01%-5PPM
200k USF340-200K-0.01%-5PPM
20k USF340-20.0K-0.01%-5PPM
2k USR2G-2KX1
200 Y1453200R000T9L

I'm thinking that the op-amp network tolerances should be 0.05% or lower, just like the uCurrent. Switch XSW8 is only on when the op amps are used; when they are not used, the op-amp input must be isolated, because a high input Vrms would damage it. I don't know - I think the high-resistance networks are not used in meters because of the cost, not because of the electrical characteristics. Maybe they know a better and cheaper way of doing it with op amps? |
| Kalvin:
--- Quote from: Hero999 on August 05, 2018, 08:16:34 pm ---
--- Quote from: JS on August 05, 2018, 06:45:27 pm ---
--- Quote from: Kalvin on August 05, 2018, 06:21:57 pm ---The RMS can be calculated digitally if you have a fast enough ADC, which means more energy required compared to an analog solution, due to the ADC and the DSP implementation. For a signal with 150 kHz bandwidth, one has to sample at at least 300 kHz - in practice somewhat faster, say 500 kilosamples/second. For a 1 MHz signal one should probably sample at 3 Ms/s.

In order to get high signal dynamic range with sufficient room for crest factor, the ADC has to have as many bits as possible, say 16 bits with 3-4 bits reserved for crest factor (i.e. for the peak values of the signal compared to its RMS, https://en.wikipedia.org/wiki/Crest_factor), leaving 12-13 bits of RMS for computation. At low signal levels the resolution will suffer due to quantization. To compensate for the quantization effects, one may need to either increase the sample rate with oversampling or increase the number of bits of the ADC from 16 to 20-24 bits, for example, which will increase the cost of the ADC. Alternatively, one may arrange the input signal level so that it is kept as high as possible without clipping (autoranging or manual ranging) in order to get as many significant bits as possible for the best accuracy and resolution.

After sampling, the computation is quite straightforward, requiring some DSP work. There are nice algorithms available for computing the RMS: https://www.embedded.com/design/configurable-systems/4006520/Improve-your-root-mean-calculations

My guesstimate is that 3.75 digits of resolution is about the practical limit one can achieve with a signal bandwidth of > 100 kHz, a typical 16-bit ADC, an optimal signal level, and a crest factor of 10. Probably one could achieve one extra digit with a state-of-the-art, fast 24-bit ADC.
One can obtain a better estimate of resolution/accuracy, and of the effects of different signal levels, by performing some simulation and running mathematical/numerical analysis on the quantized signals.
--- End quote ---

No need for that much ADC and DSP; you can do oversampling and decimation after rectification, so the resolution and frequency response can be much better than what you said... You could sample at 1 kHz and still get a response up to MHz if the sampling aperture is short enough (the ADC frequency response is the limit, not the sampling frequency), and after averaging you get resolution under one LSB, which is useful if the ADC linearity is good enough, but you don't need the data and computation burden of a bigger ADC. Using a pseudorandom sampling frequency makes for a better frequency response, minimizing the comb filter at multiples of the sampling frequency.

JS
--- End quote ---

Yes, you should be able to use a sample frequency lower than the bandwidth of the signal, because the waveform will more than likely be repeating, and you want an average over a long time period to do RMS calculations anyway.
--- End quote ---

Yes, you can get an estimate - probably a sufficiently good one, too - by sampling at a lower frequency. And you will get a better estimate by increasing the sampling frequency, until you reach the Nyquist limit, after which the sampling has captured all the information there is in the signal. If one samples the signal at a lower rate, the signal should be repetitive; and if the sampling is synchronized to the signal, one should add some randomness or vary the trigger position in order to get as much information from the signal as possible over multiple signal periods. Sampling with a constant sample period will not increase the information content, as one will always sample at identical places in the waveform. For a non-periodic signal one can only get an estimate of the RMS by sampling at a lower rate.
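Both points - the crest-factor headroom eating ADC bits, and low-rate sampling of a repetitive signal with random timing - can be sketched numerically. All values here are assumed for illustration:

```python
import math
import random

# Part 1: crest-factor headroom vs. ADC bits (assumed numbers).
adc_bits = 16
crest = 10.0                              # assumed peak/RMS headroom
rms_bits = adc_bits - math.log2(crest)    # ~12.7 bits left for the RMS value
digits = rms_bits * math.log10(2)         # ~3.8 decimal digits of resolution
print(f"~{rms_bits:.1f} effective bits -> ~{digits:.2f} digits")

# Part 2: RMS of a repetitive 150 kHz sine from random, sub-Nyquist samples.
A, f = 1.0, 150e3

def sig(t):
    return A * math.sin(2 * math.pi * f * t)

random.seed(0)                 # deterministic for the example
N = 100_000
# Random instants spread over 1 s: ~100 kS/s average rate, well below the
# 300 kHz Nyquist rate, but the random timing covers the waveform's phase
# uniformly instead of always landing on the same points.
samples = [sig(random.uniform(0.0, 1.0)) for _ in range(N)]
rms_est = math.sqrt(sum(x * x for x in samples) / N)
print(f"RMS estimate = {rms_est:.4f}, true RMS = {A / math.sqrt(2):.4f}")
```

The ~3.8-digit figure matches the 3.75-digit guesstimate above, and the random-sampling estimate converges on A/sqrt(2) even though the average sample rate is below Nyquist.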
Summary: Sampling at a lower rate will give you only an estimate of the signal to be measured, whereas sampling at a sufficiently high rate will give you complete information about the band-limited signal. If the band-limited signal is sampled at a sufficient rate, the samples will comply with Parseval's theorem and will contain the complete signal representation, with all the information needed for accurate RMS computation. |
| Kalvin:
--- Quote from: David Hess on August 06, 2018, 03:22:02 pm ---
--- Quote from: Kalvin on August 05, 2018, 06:21:57 pm ---The RMS can be calculated digitally if you have a fast enough ADC = more energy required compared to analog solution due to ADC and DSP implementation. For a signal with 150 kHz bandwidth, one has to sample at least with 300 kHz - in practice somewhat faster say 500 kilosamples/second. For 1 MHz signal one should probably sample at 3 Ms/s.
--- End quote ---

The only thing which matters is the sampling bandwidth; the sample rate is irrelevant except for uncertainty. The RMS calculation is just the standard deviation. Reducing the number of samples does not change the standard deviation, so operating the analog-to-digital converter below the Nyquist frequency is completely acceptable. Another way to look at it is that aliasing folds the signal over inside the Nyquist bandwidth, but the standard deviation of the entire signal is still there to be measured.

This can also be done in the analog domain. Use a sampler to capture the input and feed the sampler's output to a standard analog translinear RMS converter. Now the input bandwidth is limited by the sampler and not by the analog RMS converter. The Racal-Dana 9301 and HP3406 sampling RF voltmeters worked this way to make RMS measurements into the GHz range. Some old analog sampling oscilloscopes had a sampling output which could be attached to a low-frequency RMS voltmeter to do the same thing up to 10+ GHz and beyond.

As far as the original question, most DSOs can do what is needed if their accuracy is acceptable. Just be careful, because not all DSOs compute the RMS function correctly. This is likely to be a problem if they make measurements on the processed display record (which destroys the standard deviation), like the often-recommended Rigol DS1000Z series.
If you want to build something simple without using the sampling and standard-deviation method, then I suggest using the AD637 or LTC1967 RMS-to-DC converter IC.
--- End quote ---

Sampling a band-limited bandpass signal at a lower rate is called undersampling, which is quite acceptable and a common technique in communication systems. However, the remarks I made in my previous post still hold. |
| sourcecharge:
Well, I just checked the noise on those uCurrent boards with the AC signal, but there was too much noise to get any reading with the scope alone. I never bothered to hook it up to the meter, as I was just checking for noise. After removing everything and simply probing the outputs in the mA, uA, and nA modes, I found that both uCurrent boards had about 8 mVpp of noise on the mA and uA outputs using the 1x probe, and the nA output was offset, with about the same noise. I didn't bother to check the nA offset or the exact noise Vpp, because I really only use the mA setting, for the ability to measure current with only 10 milliohms of added impedance. I measured the capacitance of the probes, and it was only about 3 pF.

Using my bench meters, I have measured DC current with it, and it was very accurate. I think I got 0.1% relative to the current measurement from the meters, but they also have a +-0.05% + 6 spec at 0.01 mV resolution with a measurement of about 240 mA, so 0.1% between them seems reasonable. I was aware of the LMV321 problem, and I even got the ST version of it, the LMV321ILT. The DC readout from the meter shows only about 0.03 mV when nothing is connected, so that must be the offset. Now I'm thinking that I should measure the Vrms at 60 Hz with my meter, because it only has an error of +-0.5% + 30 at 60 Hz.

Is it only my uCurrent that is doing this, or does anyone else's have 8 mVpp of noise? Did I get bad parts? Or is this the noise that you were saying was there? I guess if this is not a part problem, then in order to measure small signals cleanly, some type of high-frequency op amp with low noise would be a better solution, even if it had a larger offset voltage. Could that offset voltage be cancelled out somehow? If it could, what op amps would be best for both the input and the output? |