... 32-bit data and 192kHz sample rates have notable benefits in the studio, but the same rules don’t apply for playback. ...
I thought that after 21 or 22 bits, anything smaller was just noise. 32 bits sounds like a fairy tale. Or has analog technology advanced to that point while I wasn't looking?
For end consumers, there is little reason to transport audio at more than 16 bits or to play it back at more than 24 bits, even with digital volume control taken into account.
Say you're playing a 16-bit audio signal on a 24-bit DAC. From one 16-bit sample to the next, say there is a 1 LSB (relative to 16-bit) step change. Could you use your 24-bit DAC to output samples at 256x the 16-bit clock rate and linearly (or otherwise) interpolate between 16-bit samples to ease the post-DAC filtering requirements? Is that what oversampling is?
Oversampling does the interpolation, but there is no need for additional resolution: the same, or in extreme cases even slightly reduced, resolution is enough.
AFAIK oversampling was used from the very beginning of the CD, in the first Philips player and likely also in the Sony version.
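To make the interpolation idea concrete, here is a minimal sketch of textbook 4x oversampling: zero-stuff the input, then run an interpolation low-pass filter. The signal, filter length, and window choice are all illustrative assumptions, not any particular player's design.

```python
import numpy as np

fs = 44100                      # original CD sample rate
ratio = 4                       # 4x oversampling, as in the early Philips players
n = 256

# Hypothetical input: a 1 kHz sine quantized to 16 bits
t = np.arange(n) / fs
x = np.round(np.sin(2 * np.pi * 1000 * t) * 32767)

# Step 1: zero-stuff -- insert (ratio - 1) zeros between samples.
up = np.zeros(n * ratio)
up[::ratio] = x

# Step 2: interpolation filter -- a short windowed-sinc FIR low-pass with
# cutoff at the original Nyquist frequency. Its passband gain of `ratio`
# restores the amplitude lost in zero-stuffing.
taps = 63
k = np.arange(taps) - (taps - 1) / 2
h = np.sinc(k / ratio) * np.hamming(taps)

y = np.convolve(up, h, mode="same")   # the interpolated 4x-rate stream
# y passes through the original samples and fills in smooth values between
# them, so the analogue filter after the DAC can roll off far more gently.
```

Note that the filter arithmetic needs no extra sample resolution; the interpolated values simply land between the original ones, which is the point made above.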
Actually, that was the first divergence. Philips felt that they could only achieve 14 bits with decent linearity at launch, so they introduced 4x oversampling as a form of compensation. Sony launched with 16 bits and no oversampling. The Philips players were generally regarded as sounding much better, as the oversampling pushed the required post-DAC analogue filtering well above the audio frequency range. Sony had to use a brick-wall analogue filter, which caused lots of audible phase and amplitude errors in the audio passband.
By the next generation, Philips had perfected decent 16 bit DACs but retained the 4x oversampling for the more relaxed filtering requirement.
A bit later, before the 1-bit DACs, a few cheap designs came out with a single DAC running at 88.2 kHz, some switching logic, and a 4066 on the output side. Because two DACs were too expensive. I've got an old schematic here somewhere. I'll have to upload it sometime. You look at the output section and go
Modern audio DACs are essentially all some kind of sigma-delta DAC, which does faster output and filtering anyway. So the modulation and the oversampling for the DAC may effectively be combined. Even if the DAC externally looks like 24 bits, it may internally do 8 bits with massive oversampling and some filtering.
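The trick behind that can be shown with a toy first-order sigma-delta modulator (a sketch of the general principle, not any specific chip's architecture): the quantizer here is only 1 bit, but the error-feedback loop pushes the quantization noise up in frequency, where the output filter removes it.

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta modulator with a 1-bit quantizer."""
    integ = 0.0
    fb = 0.0
    y = np.empty(len(x))
    for i, s in enumerate(x):
        integ += s - fb                     # integrate the error vs. the feedback
        y[i] = 1.0 if integ >= 0 else -1.0  # 1-bit quantizer
        fb = y[i]                           # feed the 1-bit output back
    return y

# A DC input of 0.5 (full scale = +-1) comes out as a +-1 bitstream
# whose local average is 0.5 -- the resolution lives in the density
# of the pulses, not in the width of each sample.
bits = sigma_delta_1bit(np.full(4096, 0.5))
```

Averaging (low-pass filtering) the bitstream recovers the input, which is exactly what the analogue reconstruction filter does after the modulator.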
* If you're dealing with 32-bit samples, a true 32-bit DAC is likely going to yield a better SNR than a 24-bit one fed with samples down-quantized from 32 to 24 bits;
Do you have any examples where that is the case and it is not just marketing? TI's (Burr-Brown) highest SNR audio ADCs and DACs are 24 bits and have a higher claimed SNR than the "32-bit" converters from AKM.
1. I am assuming a 32-bit DAC and a 24-bit DAC with the same SNR.
2. I am explicitly talking about dealing with 32-bit samples, which, if it was not clear enough, means samples that are natively generated as 32-bit numbers.
3. If you have 32-bit samples and feed them to a 24-bit DAC, you have to down-quantize. Merely truncating creates truncation distortion, which is rather nasty. The usual way of mitigating this is to *dither*, which basically means adding some random noise to the signal. It's a trade-off: usually less annoying than pure distortion, but added noise nonetheless. The end result will be noisier than the 32-bit samples fed directly to a 32-bit DAC with the same SNR.
4. Of course, if you're dealing with 24-bit samples, using a 32-bit DAC wouldn't make sense. Actually, you'd get worse results too since you'd have to up-quantize here, which would inevitably make the signal noisier, even if just a little.
5. All in all, it's best to use a DAC with the same resolution as the samples you're feeding it with. Any change of quantization WILL add noise.
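The dither trade-off in point 3 is easy to demonstrate. Below is a minimal sketch (signal level and dither shape are my assumptions): a sine whose amplitude is below 1 LSB of the target resolution quantizes to silence without dither, while TPDF dither trades that distortion for a little noise and keeps the signal.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 8192
lsb = 1.0                          # work in units of the target LSB
# Hypothetical low-level signal: a 440 Hz sine at 0.4 LSB peak amplitude
x = 0.4 * lsb * np.sin(2 * np.pi * np.arange(n) * 440 / 48000)

# Requantize with no dither: every sample rounds to zero -> pure distortion
# (here, total silence).
plain = np.round(x / lsb) * lsb

# TPDF dither: sum of two uniform sources, triangular over +-1 LSB.
tpdf = (rng.random(n) + rng.random(n) - 1.0) * lsb
dithered = np.round((x + tpdf) / lsb) * lsb

# The dithered stream is noisy but still carries the sine, recoverable
# by low-pass filtering; it remains correlated with the original signal.
corr = np.corrcoef(x, dithered)[0, 1]
```

This is the sense in which dither is "less annoying than pure distortion, but added noise nonetheless".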
We are talking about the actual hardware that implements the DAC. I don't think you can reasonably build those. The exponent factor needs a hardware analog multiplier to implement, and making a sufficiently linear multiplier is harder than it seems.
The reason people push 24 bits is volume control. You see, analog things are expensive; they don't scale with Moore's law. You want a system that is mainly digital, so that it can scale and you save money. If your intended SNR is 98 dB (CD quality), then you can use the remaining 48 dB for volume control, and that is actually a large range.
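The arithmetic behind that 48 dB figure can be checked with the ideal quantization-SNR formula, SNR ≈ 6.02·N + 1.76 dB (a back-of-the-envelope sketch; real DACs fall somewhat short of the ideal):

```python
def ideal_snr_db(bits):
    """Ideal SNR of an N-bit quantizer for a full-scale sine, in dB."""
    return 6.02 * bits + 1.76

cd = ideal_snr_db(16)        # ~98 dB, the "CD quality" figure above
dac24 = ideal_snr_db(24)     # ~146 dB for an ideal 24-bit path
headroom = dac24 - cd        # ~48 dB left over for digital attenuation
```

So a 24-bit playback path lets you attenuate a 16-bit signal digitally by about 48 dB before its dithered noise floor reaches the converter's own floor, which is why the volume knob can stay in the digital domain.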