...
Record L->L, L->R, R->L & R->R
Solve for the input and output errors of each channel and correct. At each frequency and amplitude there are 4 equations and 4 unknowns.
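One plausible reading of that bookkeeping (my notation, not necessarily the original poster's): at each frequency and amplitude, let D_L, D_R be the complex output (DAC) errors and A_L, A_R the complex input (ADC) errors. The four recorded paths then give

  M_LL = D_L * A_L    M_LR = D_L * A_R
  M_RL = D_R * A_L    M_RR = D_R * A_R

Ratios fall out immediately, e.g. D_L/D_R = M_LL/M_RL and A_L/A_R = M_LL/M_LR. Note that the split between D and A is only determined up to a common scale factor, so I assume one known reference level is implied.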
You can get almost arbitrary accuracy via signal processing with large datasets.
No, you can't. The issue is that each harmonic's magnitude & phase (THD+N) are not constants. It doesn't matter how big the data sets are, how accurately the equations are written, or what the precision of the floating-point math is. The data becomes junk after a few milliseconds because the ADC drifts; it is always changing, and there is no single static magnitude & phase below -100 dBc with an audio ADC, or -110 dBc with a precision SAR. Same with a DAC. While debugging my project I came to the conclusion that the MCU has to continuously monitor each harmonic and make adjustments on the fly.
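For what it's worth, here is a minimal sketch of that kind of on-the-fly monitoring: re-measuring one harmonic's magnitude & phase on every block of samples. It is a single-bin DFT, which is the quantity the Goertzel algorithm computes cheaply on an MCU; the signal, frequencies, and block size below are my own illustration, not anyone's actual firmware.

import numpy as np

def harmonic_bin(block, f, fs):
    # Single-bin DFT at frequency f: what the Goertzel recurrence
    # would compute per block on an MCU.
    n = np.arange(len(block))
    X = np.sum(block * np.exp(-2j * np.pi * f * n / fs))
    # Scaled so |result| equals the sinusoid amplitude (exact-bin tones).
    return 2.0 * X / len(block)

fs = 48000
t = np.arange(4800) / fs                 # one 100 ms measurement block
x = np.sin(2*np.pi*1000*t) + 1e-4*np.sin(2*np.pi*3000*t + 0.5)

fund = harmonic_bin(x, 1000, fs)
h3   = harmonic_bin(x, 3000, fs)
print(20*np.log10(abs(h3)/abs(fund)))    # ~ -80 dBc third harmonic
print(np.angle(h3))                      # its phase, tracked block by block

A control loop would repeat this every block and nudge a correction term whenever the measured magnitude or phase moves, which is how I read "make adjustments on the fly".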
Calibration with a look-up table doesn't work. Well, actually it does for the STM32F767's internal 12-bit DAC: the initial start-up THD3 is about -70 dBc, and a LUT can fix that to -95/-100 dBc, but then the magic show begins.
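For readers who haven't met the technique, a static LUT correction is just a pre-distorted code table, along these lines (the transfer-curve numbers are invented for illustration; I am not claiming they match the STM32F767):

import numpy as np

codes = np.arange(4096)                        # 12-bit DAC codes
ideal = codes / 4095.0                         # ideal transfer, 0..1
# Toy INL-style bow of a few LSB, standing in for a measured curve.
measured = ideal + 3e-3 * np.sin(2*np.pi*ideal)

# For each requested level, pick the code whose *measured* output is
# nearest the level actually wanted (measured is monotonic here).
lut = np.searchsorted(measured, ideal).clip(0, 4095)

def dac_write(code):
    return int(lut[code])                      # corrected code to the DAC

The catch, and I take it this is the poster's point, is that the table only corrects the error that existed when the curve was measured; once the converter drifts, the "magic show begins".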
I'm shocked! Truly shocked! You mean there is noise in the data? We'll have to give up!
In seismic exploration it is routine to get an improvement of 40-60 dB or more in the signal-to-noise ratio. I have seen many examples where the initial SNR was so bad that it did not appear that there was any data at all, just noise, but after some DSP the signal was 30-40 dB above the noise.
The data of interest is the reflected energy from a rock layer over 40,000 ft below the surface arriving 10-12 seconds after the source impulse was initiated. That is *only* possible because we exactly match the phase from sample to sample while summing hundreds of thousands of samples into each output sample. Every sample has a different propagation delay which has to be determined in order to sum the correct samples.
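A minimal sketch of that phase-matched summation (delay-and-sum stacking), with made-up traces and delays; real seismic moveout correction is far more involved:

import numpy as np

def stack(traces, delays_s, fs):
    # Undo each trace's known propagation delay with an exact
    # fractional-sample shift in the frequency domain, then sum.
    n = traces.shape[1]
    f = np.fft.rfftfreq(n, d=1.0/fs)
    out = np.zeros(n)
    for trace, d in zip(traces, delays_s):
        out += np.fft.irfft(np.fft.rfft(trace) * np.exp(2j*np.pi*f*d), n)
    return out / len(traces)

# Toy example: the same wavelet buried in noise at different delays.
rng = np.random.default_rng(1)
fs, n, m = 1000, 2048, 500
wavelet = np.zeros(n); wavelet[100:105] = [1, 2, 3, 2, 1]
delays = rng.uniform(0, 0.2, m)                # seconds, known per trace
traces = np.empty((m, n))
for i, d in enumerate(delays):
    f = np.fft.rfftfreq(n, d=1.0/fs)
    traces[i] = np.fft.irfft(np.fft.rfft(wavelet) * np.exp(-2j*np.pi*f*d), n)
    traces[i] += rng.normal(0, 5.0, n)         # noise well above the signal
print(np.abs(stack(traces, delays, fs))[95:110].round(2))

Summing m traces in phase leaves the wavelet at full amplitude while the random noise drops by sqrt(m); get one delay wrong and that trace's signal smears instead of adding.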
I've suggested this every time someone has come along wanting to do audio THD measurements. With a good 24-bit, 192 kHz card one should get quite spectacular results. I've got a page of mathematics floating around on which I analyzed the problem to verify that my assertions were correct.
If someone wants to do it, I'll help them over the bumps, but I'm interested in RF, not audio and am not interested in doing a grad school semester project to simply prove I can. I stopped doing that sort of thing 30 years ago.
Once you can send the integer 6e6 followed by 191,999 zeros to the sound card and record it, standard seismic software (Seismic Unix et al.) can take it from there. Record a loop of [6e6, zeros(1,191999)] sent as integers to the card for 24 hours and you can suppress random noise by 49 dB; each doubling of the total recording time adds another 3 dB. Systematic and correlated noise will take more processing, but summing 86,400 FFTs of 192,000 samples each will get you a *very* long way towards the goal. Systematic error is handled by the L-L, L-R, R-R & R-L solution for the input and output ADC & DAC errors.

Once the random and systematic error terms have been suppressed, one can address the correlated noise. How hard that is depends upon the nature of the noise; until someone shows me the result of the first two steps I can't say how to do it, and it could easily turn into many weeks or months of work.

This is *NOT* something you can do on an MCU. It requires a fast workstation with lots of memory, and it is only practical on recorded data; even a many-core desktop is unlikely to do it in real time.
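The arithmetic behind those numbers, as I read them: averaging N repeats of the same record improves the amplitude SNR by 10*log10(N) dB.

import numpy as np
print(10*np.log10(86_400))      # one 1 s record/second for 24 h: ~49.4 dB
print(10*np.log10(2*86_400))    # a second day doubles N: ~3 dB more

rng = np.random.default_rng(0)
n, N = 192, 10_000              # short toy record, 10k repeats
sig = np.zeros(n); sig[0] = 1.0
avg = np.mean([sig + rng.normal(0, 1.0, n) for _ in range(N)], axis=0)
print(avg[0], np.std(avg[1:]))  # signal ~1.0, noise ~1/sqrt(N) = 0.01

That 1/sqrt(N) behaviour is the whole game: doubling the recording time buys 3 dB, every time.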
The preceding will require three 16.5 GB data files recorded over 3 days (192,000 samples/s x 86,400 s is about 16.6 billion samples per day): straight through, crossed over, and DUT through a splitter into both channels, to test a single output amplitude. Each amplitude needs its own run, and testing every amplitude is not practical over the range of 24 bits.
For an intro to signal processing intended for scientists and engineers:
An Introduction to Digital Signal Processing
John H. Karl
Academic Press, 1989
The canonical mathematical reference on Wiener-Shannon-Nyquist analysis (the predominant techniques used):
Random Data: Analysis and Measurement Procedures
Bendat and Piersol
Wiley, 4th ed., 2010
Those cover all the work up to 2010. Work by Donoho and Candes in 2004-2008 on compressed sensing has started another advance of equal importance. However, the computational burden is much higher, so it will not replace Wiener-Shannon-Nyquist. And at current oil prices the seismic industry has completely collapsed, so no money is being spent developing it further in the oil industry, which pioneered DSP in the first place.
Have Fun!
Reg