Electronics > Metrology

The "double difference method" - any pointers?


Cerebus:
I was reading a voltage-reference-related paper the other day that included a casual, throwaway reference to the "double difference method", in the specific context of using said method to unambiguously identify which of a collection of voltage references was producing noisy or otherwise anomalous data. All in the further context of not having an absolute reference to work from, i.e. they were developing voltage references that were, in some fashion, beyond the references they already had available.

I've tried searching for a "double difference method" that could be the one in question but have come to a dead end. I've encountered mentions of a similarly named method for finding the epicentres of earthquakes and in the social sciences - both sources of highly noisy data - but I haven't found an unambiguous enough description either to know if I've found the right thing or to apply the method from that discipline to the original question.

Has anyone got useful pointers to a clear description of this method - either specifically in the context of "which voltage reference is lying" or more generally as a statistical method of unambiguously identifying outliers in noisy data?

The volt-nuts home-brewing voltage references can hopefully see immediately how interesting and useful this could be.

Ian

AndyC_772:
I guess that if you have three references, called A, B and C, and measure each of them in turn against one of the others, then you might get results along the lines of:

A vs B = noisy
A vs C = quiet
B vs C = noisy

In this case, A and C are both quiet, so the comparison between the two yields a quiet result too. But, since B is noisy, measuring it with respect to either of the two others yields a noisy result.

I'm not sure whether that type of comparison would actually warrant a name as such, but it seems like the obvious way to identify which is the outlier. Moreover, it seems fairly apparent that if the three references were 'quiet', 'noisy' and 'noisier', then the set of comparisons between them would show different degrees of noisiness, from which the relative merit of each reference could be readily determined.
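
For anyone who wants to play with the bookkeeping, here's a minimal sketch in Python of that pairwise idea. The readings, noise levels and the scoring rule are all made up - it just shows the three pairwise comparisons and how the noisy unit shows up in every channel it takes part in:

--- Code: ---
import itertools
import numpy as np

# Hypothetical logged readings from three 10 V references (made-up noise levels).
rng = np.random.default_rng(0)
n = 1000
refs = {
    "A": 10.0 + rng.normal(0.0, 0.2e-6, n),   # quiet
    "B": 10.0 + rng.normal(0.0, 2.0e-6, n),   # noisy
    "C": 10.0 + rng.normal(0.0, 0.2e-6, n),   # quiet
}

# Compare every reference against every other: A-B, A-C, B-C.
pair_noise = {}
for a, b in itertools.combinations(refs, 2):
    pair_noise[(a, b)] = np.std(refs[a] - refs[b])

# A reference is suspect if every channel it appears in is noisy.
score = {name: np.mean([s for pair, s in pair_noise.items() if name in pair])
         for name in refs}

for pair, s in sorted(pair_noise.items()):
    print(pair, f"{s * 1e6:.2f} uV rms")
print("Suspect:", max(score, key=score.get))

--- End code ---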

Cerebus:

--- Quote from: AndyC_772 on February 15, 2016, 08:07:27 pm ---I guess that if you have three references, called A, B and C, and measure each of them in turn against one of the others, then you might get results along the lines of:

A vs B = noisy
A vs C = quiet
B vs C = noisy

In this case, A and C are both quiet, so the comparison between the two yields a quiet result too. But, since B is noisy, measuring it with respect to either of the two others yields a noisy result.

I'm not sure whether that type of comparison would actually warrant a name as such, but it seems like the obvious way to identify which is the outlier. Moreover, it seems fairly apparent that if the three references were 'quiet', 'noisy' and 'noisier', then the set of comparisons between them would show different degrees of noisiness, from which the relative merit of each reference could be readily determined.

--- End quote ---

The problem is that all voltage references are noisy and drift, the good ones and the bad ones. So, at any moment a good voltage reference can have wandered off positive from the 'right' value for a bit, another good reference can have wandered off negative, and the crappy one just happens at that moment to have the 'right' value - so how do you tell which is which? The answer is, obviously, long-term statistics. But how do you tell - and here comes the magic word - unambiguously in the short term?

I found the paper in question again. Here's the throwaway reference:


--- Quote ---(i) The group of eight 10V standards contain three portable units which are calibrated against standards traceable to the National standard of voltage about every nine months. Four of the group of 10 V standards, selected as being those of lowest noise, are intercompared continuously by the double-difference method. This allows any noise to be unambiguously traced to the unit causing it.

--- End quote ---

The hint I have, from the other uses I found, is that 'difference' here is used in the same sense as 'differential'.

Cerebus:

--- Quote from: DiligentMinds.com on February 15, 2016, 10:28:52 pm ---I don't know if that's what they mean by "double differential" measurement, but that *IS* the proper way to measure two voltage standards against each other.  You simply repeat this measurement so that each voltage standard you have is measured against each other [N*(N-1)/2 measurements for 'N' voltage standards].

--- End quote ---

No, it's definitely "double difference" in the quote; and the apparently related statistical material I could hunt down (seismology, social science) is what provided the hint of "differential", but that's in the "derivative" sense (e.g. dv/dt or d²v/dt²), not the "differential pair" or "differential measurement" sense.

Hopefully someone will come along soon who recognises the term. I'm fairly confident it won't be a social scientist but there might be a seismologist kicking around.
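
To illustrate what I mean by the 'derivative' sense - and I stress this is only my guess at the flavour of the thing, not the method from the paper - here's a toy sketch: take the difference between two units, then difference that again along the time axis (a discrete d/dt), which largely cancels the slow drift and leaves the fast noise behind:

--- Code: ---
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(10_000)

# Two hypothetical 10 V references, each with its own slow drift plus
# its own noise; B is the noisier unit. All numbers are made up.
v_a = 10.0 + 5e-6 * np.sin(2 * np.pi * t / 5000) + rng.normal(0.0, 0.1e-6, t.size)
v_b = 10.0 + 3e-6 * np.cos(2 * np.pi * t / 5000) + rng.normal(0.0, 1.0e-6, t.size)

d1 = v_a - v_b       # first difference: between the two units
d2 = np.diff(d1)     # second, time-wise difference (a discrete d/dt):
                     # the slow drift largely cancels, the fast noise remains

print(f"std of A-B:       {np.std(d1) * 1e6:.2f} uV")   # dominated by drift
print(f"std of diff(A-B): {np.std(d2) * 1e6:.2f} uV")   # dominated by B's noise

--- End code ---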

quantumvolt:

--- Quote from: Cerebus on February 15, 2016, 08:50:40 pm ---
--- Quote from: AndyC_772 on February 15, 2016, 08:07:27 pm ---I guess that if you have three references, called A, B and C, and measure each of them in turn against one of the others, then you might get results along the lines of:

A vs B = noisy
A vs C = quiet
B vs C = noisy

In this case, A and C are both quiet, so the comparison between the two yields a quiet result too. But, since B is noisy, measuring it with respect to either of the two others yields a noisy result.

I'm not sure whether that type of comparison would actually warrant a name as such, but it seems like the obvious way to identify which is the outlier. Moreover, it seems fairly apparent that if the three references were 'quiet', 'noisy' and 'noisier', then the set of comparisons between them would show different degrees of noisiness, from which the relative merit of each reference could be readily determined.

--- End quote ---

The problem is that all voltage references are noisy and drift, the good ones and the bad ones. So, at any moment a good voltage reference can have wandered off positive from the 'right' value for a bit, another good reference can have wandered off negative, and the crappy one just happens at that moment to have the 'right' value - so how do you tell which is which? The answer is, obviously, long-term statistics. But how do you tell - and here comes the magic word - unambiguously in the short term?

I found the paper in question again. Here's the throwaway reference:


--- Quote ---(i) The group of eight 10V standards contain three portable units which are calibrated against standards traceable to the National standard of voltage about every nine months. Four of the group of 10 V standards, selected as being those of lowest noise, are intercompared continuously by the double-difference method. This allows any noise to be unambiguously traced to the unit causing it.

--- End quote ---

The hint I have, from the other uses I found, is that 'difference' here is used in the same sense as 'differential'.

--- End quote ---

As you quote some 'scientific' paper here - wouldn't it be scientific of you to name the source? It could help you and be useful for others ...

Anyway, here's your stuff: https://en.wikipedia.org/wiki/Difference_in_differences


This quote from http://web.worldbank.org/WBSITE/EXTERNAL/TOPICS/EXTPOVERTY/EXTISPMA/0,,contentMDK:20188244~menuPK:384339~pagePK:148956~piPK:216618~theSitePK:384329,00.html#difference gives a hint on how to use the method:

'Double difference or difference-in-differences methods
An estimation method to be used with both experimental and non-experimental design

Double difference or difference-in-differences methods compare a treatment and a comparison group (first difference) before and after the intervention (second difference). This method can be applied in both experimental and quasi-experimental designs and requires baseline and follow-up data from the same treatment and control group.

A baseline survey is conducted for the outcome indicators for an untreated comparison group as well as the treatment group before the intervention followed by a follow-up survey of the same sampled observations as the baseline survey after the intervention. If the sampled observations tend to differ in the follow-up survey from the baseline survey, then they should be from the same geographic clusters or strata in terms of some other variable.

The mean difference between the “after” and “before” values of the outcome indicators for each of the treatment and comparison groups is calculated followed by the difference between these two mean differences. The second difference (that is, the difference in difference) is the estimate of the impact of the program (A special case of double differences is “reflexive comparison” that only compares the treatment group before and after the intervention).'
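
In numbers, that whole 'double difference' is just (treatment after - treatment before) - (control after - control before). A toy sketch with made-up figures:

--- Code: ---
# Difference-in-differences arithmetic with made-up outcome figures.
treatment_before, treatment_after = 52.0, 61.0
control_before, control_after = 50.0, 55.0

first_diff_treatment = treatment_after - treatment_before   # 9.0
first_diff_control = control_after - control_before         # 5.0

# Second difference = estimated impact of the intervention.
impact = first_diff_treatment - first_diff_control          # 4.0
print(impact)

--- End code ---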
