Author Topic: Audio sample rate and latency in live sound.  (Read 31562 times)


Offline Dan MoosTopic starter

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Audio sample rate and latency in live sound.
« on: January 15, 2022, 11:16:43 pm »
I'm in a discussion on an audio engineers' group. The majority are stating that audio sampled at 96k will result in lower latency through the system compared to 48k.

The biggest reason they state is that the DSP is waiting for a specific number of samples before it can work, and thus 96k audio "fills the buffer faster".

To me, this sounds like a fundamental misunderstanding of what "sample rate" means. It also implies that DSPs are LESS efficient with LESS data, which also doesn't sound right to me.

But they are claiming that the specs on their equipment specifically state less latency at 96k vs 48k. I have no reason not to believe them, so now I'm confused.

Anyone with inside knowledge wanna help me understand?
 

Offline John B

  • Frequent Contributor
  • **
  • Posts: 818
  • Country: au
Re: Audio sample rate and latency in live sound.
« Reply #1 on: January 15, 2022, 11:48:57 pm »
Sounds about right. For computer + interface setups you have an input sample buffer + processing time + output sample buffer. At a faster sample rate, a buffer of a given number of samples represents a shorter period of time.
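That scaling is easy to sanity-check: a buffer sized in samples takes half the time to fill at double the rate, while a buffer sized in milliseconds wouldn't care. A minimal sketch (the 256-sample figure is just an illustrative default, not from any specific interface):

```python
# Latency of one buffer, for a buffer sized in SAMPLES: it scales
# inversely with sample rate. A buffer sized in TIME would not.
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Time to fill one buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

print(round(buffer_latency_ms(256, 48_000), 2))  # 5.33 ms at 48 kHz
print(round(buffer_latency_ms(256, 96_000), 2))  # 2.67 ms at 96 kHz
```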
 

Offline Dan MoosTopic starter

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Re: Audio sample rate and latency in live sound.
« Reply #2 on: January 15, 2022, 11:58:56 pm »
We are talking about live, real time stuff.

So DSPs really wait for X number of samples before going to work? Regardless of how fast they are coming in? So a native 48k system wouldn't have this latency issue?

 

Offline Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
Re: Audio sample rate and latency in live sound.
« Reply #3 on: January 16, 2022, 12:10:40 am »


The biggest reason they state is that the dsp is waiting for a specific number of samples before it can work,  and thus, 96k audio "fills the buffer faster".


The problem with the story is that you normally set a buffer size for a duration, not necessarily an arbitrary number of samples. The reason memory buffers are used at all is that (usually) the CPU and IO speeds don't always align, and the buffer helps alleviate the mismatch in timing.

With DSPs, however, they do need a packet of samples to examine the incoming waveform. Though the newer tech needs much less and can do much more.

My question is: if you can fit enough samples in a 96k buffer to complete the task without problems, why then can't a 48k buffer do the same work in the same amount of time?

Perhaps the software is configured wrong or your ppl aren't fully understanding the point of buffering.
iratus parum formica
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22362
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Audio sample rate and latency in live sound.
« Reply #4 on: January 16, 2022, 12:12:53 am »
More or less an undefined problem, as it could simply be whatever.  And likely a concern not rooted in reality, as delays of a few ms are very difficult to perceive in the first place.

On a multitasking OS, the buffers will typically be sized for some number of ms.  And however many samples fit in there is whatever fits in there: it could be 1024 samples, it could be 1 or 10 ms...  It could even be buffered in multiple places (input buffers, multiple stages of the sound system, output buffers...), impossible to tell without insight into all the drivers involved.  Not to mention format and rate conversion; the internal representation might be a different sample rate, floating or fixed point, etc.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15250
  • Country: fr
Re: Audio sample rate and latency in live sound.
« Reply #5 on: January 16, 2022, 12:16:00 am »
We are talking about live, real time stuff.

So DSPs really wait for X number of samples before going to work?

Yes, of course, it's always buffered. Both transferring AND processing samples one at a time instead of in packets is highly inefficient.

You basically have a double-buffering scheme. While one buffer is being filled with data, the other is being processed - same for the output, but in reverse.
The smaller the buffer size, the higher the overhead, to the point of spending more time on overhead than on actual processing.

A purely hardware implementation - such as on FPGA - could potentially be designed to be able to process one sample at a time, but for any software-based solution, that's not practical.
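The double-buffering scheme can be sketched in a few lines. This is a toy model, not any real driver API: while one buffer fills, the other is processed, so the block size directly sets the minimum latency.

```python
# Toy ping-pong (double-buffering) model: the "hardware" fills one
# buffer sample by sample; when it's full, the buffers swap roles and
# the full one is processed as a block. BLOCK is tiny for demonstration.
BLOCK = 4  # samples per buffer

def process(block):
    return [2 * s for s in block]  # e.g. a +6 dB gain stage

def run(samples):
    out = []
    buffers = [[], []]
    active = 0
    for s in samples:
        buffers[active].append(s)          # "DMA" fills the active buffer
        if len(buffers[active]) == BLOCK:  # buffer full: process and swap
            out.extend(process(buffers[active]))
            buffers[active] = []
            active ^= 1
    return out

print(run(list(range(8))))  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Nothing comes out until a whole block has gone in, which is exactly the buffering latency being discussed.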
 

Offline John B

  • Frequent Contributor
  • **
  • Posts: 818
  • Country: au
Re: Audio sample rate and latency in live sound.
« Reply #6 on: January 16, 2022, 12:23:11 am »
We are talking about live, real time stuff.

So DSPs really wait for X number of samples before going to work? Regardless of how fast they are coming? So a native 48k system wouldnt have this latency issue?

There is no true "real time" in this case; there is always a processing time. Even digital mixers, audio FX etc. will list a latency time, somewhere around 2 ms.

My computer and interface setup achieves around 6 ms latency, so it is totally suitable for real-time usage.
« Last Edit: January 16, 2022, 12:24:58 am by John B »
 

Online Someone

  • Super Contributor
  • ***
  • Posts: 4904
  • Country: au
    • send complaints here
Re: Audio sample rate and latency in live sound.
« Reply #7 on: January 16, 2022, 12:29:49 am »
So DSPs really wait for X number of samples before going to work? Regardless of how fast they are coming? So a native 48k system wouldnt have this latency issue?
It's not inherent to DSPs, but rather the algorithm implemented. A lazy programmer might make it work for 96k (with some fixed buffer/block depth) and then, for 48k, just rescale the parameters, at which point yes, a specific device might have more latency at 48k than at 96k.
 
The following users thanked this post: Siwastaja

Online Someone

  • Super Contributor
  • ***
  • Posts: 4904
  • Country: au
    • send complaints here
Re: Audio sample rate and latency in live sound.
« Reply #8 on: January 16, 2022, 12:35:23 am »
A purely hardware implementation - such as on FPGA - could potentially be designed to be able to process one sample at a time, but for any software-based solution, that's not practical.
Sounds like an answer from someone who hasn't used DSPs; they are entirely suitable for low-latency sample-by-sample processing. Short pipelines with deterministic IO and memory access differentiate them from general-purpose processors.
 

Offline jonpaul

  • Super Contributor
  • ***
  • Posts: 3553
  • Country: fr
Re: Audio sample rate and latency in live sound.
« Reply #9 on: January 16, 2022, 05:31:42 pm »
Bonjour, I was designing and consulting on RT SA and digital audio interfaces from the 1970s to 1990.

A few clarifications on the discussion, as there seems to be some confusion:

Standard FS rates are 44.1, 48, 96, 192 and 384 kHz. The rates have no direct connection to latency.

Real time means the rate of information flow is the same at the system input and output, regardless of latency.

The latency depends on the processing time and DAC/ADC latency, plus any buffers. The time will be proportional to the clock period, so in the same system, as you change the clock frequency of the entire system, the latency will change proportionally. Since the DSP must operate on a fixed clock and the ADC/DAC on a multiple of FS, the calculation of latency is a bit more involved.

The main benefit of an FS above 48 kHz is the use of a gentle LPF (instead of a "brick wall") to avoid aliasing, e.g. the oversampling FIR digital filters used in oversampling ADCs.

Hope this is interesting,

Bon Chance,

Jon

Jean-Paul  the Internet Dinosaur
 
The following users thanked this post: Bassman59

Offline mag_therm

  • Frequent Contributor
  • **
  • Posts: 783
  • Country: us
Re: Audio sample rate and latency in live sound.
« Reply #10 on: January 16, 2022, 06:49:10 pm »
We have some issues here.
1)
I have a vintage receiver set up with a demodulator I just built for FT8, feeding via the hardware pch card to the wsjt-x app. Latency is no problem for FT8!

But I use the same set-up for short-wave listening to phone (SSB voice) on the 20 metre etc. bands.
The Calf audio filter sent to the playback outputs is quite effective against the QRN (noise) on 80 m and 20 m, but the audio is so slow that it impedes the ability to manually tune in an SSB station.
The receiver's hardware audio is off.
Here is the Linux desktop set up with jackctl:
https://app.box.com/s/j7et0nz1dtqj22jlpsmj1kn1ninyk3xv

2)
My son, 16, is a budding musician.
We tried to do a multi-track recording in Audacity using an internet rhythm track, but we had a latency problem; maybe we need 2 computers, one feeding the off-internet track into the mixer?
I read (searched) that musicians can't tolerate latency. One source mentions about 8 ms; others mentioned a lower threshold.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8750
  • Country: fi
Re: Audio sample rate and latency in live sound.
« Reply #11 on: January 16, 2022, 07:08:25 pm »
+1 to "it depends". There is no such fundamental rule at all, but do remember that the guys claiming it might have practical experience with certain products that behave the way they suggest. "Programming laziness" would be a good description. After all, you can pretty much achieve the same latency regardless of the sample rate, but that might require a tad of extra work compared to just minimizing latency at one (recommended) sample rate and then making the rest more or less "just work somehow".

Buffers are needed to do processing in chunks without audible glitches, and this is all measured in time. And as you say in the opening post, doing the same processing at a lower fs is actually faster, so one could theoretically reduce the chunk length in time; i.e., quadratically reduce the chunk length in number of samples. But OTOH, some lazy design could use a fixed buffer size in number of samples, not time.
« Last Edit: January 16, 2022, 07:11:18 pm by Siwastaja »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9370
  • Country: gb
Re: Audio sample rate and latency in live sound.
« Reply #12 on: January 16, 2022, 07:11:18 pm »
How long does it take a sample to come out of an ADC? In the case of something like a SAR ADC, it is usually one sample time. However, all the really good audio ADCs are sigma-delta types. Those have considerable latency. The basic conversion process gives you several samples of delay, but that's the tip of the iceberg. If you want a fairly quick result from a sigma-delta converter, you have to accept a heavily rolled-off frequency response. If you want a flat frequency response, as in audio, you need a further stage of filtering, which causes many samples of additional delay in the result. Try looking at a range of sigma-delta converters, and you will usually see 2 types: low-latency, rolled-off response ones, suitable for applications where low latency is essential, and flat-response, high-latency ones. Whichever type you get, the latency is way more than half a 48ksps sample, so using 96ksps is irrelevant to the overall delay.

I've talked with chip designers who have looked at making a single sigma-delta converter chip with both a low-latency, rolled-off response output and a high-latency, flat-response output. Lots of applications require that. There are many industrial (typically power) applications where a device needs to provide a low-latency protection signal while also making precise wideband measurements. I don't know if any have come to market.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1321
  • Country: de
Re: Audio sample rate and latency in live sound.
« Reply #13 on: January 16, 2022, 07:12:47 pm »
My son , 16 , is a budding musician.
We tried to do a multi-track recording to Audacity using an internet rhythm track.
But latency problem, maybe need 2 computers, one feeding the off-internet into the mixer?
I read (search) that musicians can't tolerate latency. One mentions about 8 ms, others mentioned lower threshold.

AFAIK, you can adjust latency compensation in Audacity (and in other multi-track recording programs), so that the tracks being played back and the newly recorded track end up in sync.
 
The following users thanked this post: mag_therm

Offline gf

  • Super Contributor
  • ***
  • Posts: 1321
  • Country: de
Re: Audio sample rate and latency in live sound.
« Reply #14 on: January 16, 2022, 10:50:53 pm »
...the latency is way more than half a 48ksps sample, so using 96ksps is irrelevant to the overall delay.

I looked at the datasheet of an arbitrarily chosen audio ADC.
It certainly does make a difference whether we get a latency of 17.1 samples @ 48 kSa/s = 356 µs, or 7.2 samples @ 384 kSa/s = 19 µs (both with the regular linear-phase decimation filter).
These numbers do not confirm that a higher sample rate is irrelevant. At 384 kSa/s, the cut-off frequency of the decimation filter is simply 8x higher, which enables a filter with significantly lower group delay. But the whole real-time processing chain needs to run at the higher sample rate then. Any decimation to a lower sample rate in the processing chain introduces additional delay due to the required decimation filter, which defeats the latency advantage of the higher ADC sample rate again.
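That arithmetic is just latency in samples divided by the sample rate (the sample counts come from the datasheet I looked at, not from any general rule):

```python
# Absolute ADC filter latency = latency in samples / sample rate.
def latency_us(latency_samples, rate_hz):
    """Convert a filter latency given in samples to microseconds."""
    return 1e6 * latency_samples / rate_hz

print(round(latency_us(17.1, 48_000)))   # 356 µs at 48 kSa/s
print(round(latency_us(7.2, 384_000)))   # 19 µs at 384 kSa/s
```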

How long does it take a sample to come out of an ADC? In the case of something like a SAR ADC is usually one sample time.

Don't forget the group delay of the analog anti-aliasing filter which is required in front of the SAR ADC. Again, this group delay depends on the filter cut-off frequency dictated by the sample rate. In the end you have to spend latency either on the analog anti-aliasing filter, or on the digital decimation filter in the sigma-delta ADC. There is no free lunch.
« Last Edit: January 16, 2022, 11:04:50 pm by gf »
 
The following users thanked this post: Someone

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9370
  • Country: gb
Re: Audio sample rate and latency in live sound.
« Reply #15 on: January 16, 2022, 11:01:34 pm »
...the latency is way more than half a 48ksps sample, so using 96ksps is irrelevant to the overall delay.

I looked at the datasheet of an arbitrarily chosen audio ADC.
It certainly does make a difference whether we get a latency of 17.1 samples @48kSa/s = 356µs, or 7.2 samples @384 kSa/s = 19µs (both with the regular linear phase decimation filter).
These numbers do not confirm that a higher sample rate were irrelevant. At 384 kSa/s, the cut-off frequency of the decimation filter is simply 8x higher, which enables a filter with significantly lower group delay. But the whole real-time processing chain needs to run at the higher sample rate then. Any decimation to a lower sample rate in the procesing chain introduces additional delay due to the required decimation filter, which defeats the latency advantage of the higher ADC sample rate again.
Even if you don't decimate, and run the processing at 384ksps, most filtering introduces a substantial latency. People love using massive impulse response filters, and those can introduce considerable latency. Causality is a harsh mistress.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1321
  • Country: de
Re: Audio sample rate and latency in live sound.
« Reply #16 on: January 16, 2022, 11:11:27 pm »
Even if you don't decimate, and run the processing at 384ksps, most filtering introduces a substantial latency. People love using massive impulse response filters, and those can introduce considerable latency. Causality is a harsh mistress.

Indeed. But then you can blame the particular "processing" steps >:D. A simple mixer (w/o any tone controls) would do no harm 1). But OTOH, people who complain about the latency of digital filters forget that analog filters don't come without group delay either.

Edit: 1) ...granted that the clocks of all sources are in sync (which is hopefully the case in a studio environment). ASRC again adds some overhead.
« Last Edit: January 16, 2022, 11:24:53 pm by gf »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6921
  • Country: nl
Re: Audio sample rate and latency in live sound.
« Reply #17 on: January 17, 2022, 12:07:15 am »
It's not like DAWs run on real-time operating systems with 1 cycle or less of latency... how is it relevant except for stuff like stomp boxes? Even then, the musician isn't going to notice the difference.

A massive FIR reverb with <1 cycle of algorithmic latency for output from the first sample of the impulse can be implemented by anyone now, with a small FIR filter and FFT-based filters increasing geometrically in size; the important Lake DSP patents have all expired.
« Last Edit: January 17, 2022, 12:12:18 am by Marco »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9370
  • Country: gb
Re: Audio sample rate and latency in live sound.
« Reply #18 on: January 17, 2022, 05:51:53 pm »
Even if you don't decimate, and run the processing at 384ksps, most filtering introduces a substantial latency. People love using massive impulse response filters, and those can introduce considerable latency. Causality is a harsh mistress.

Indeed. But then you can blame the particular "processing" steps >:D. A simple mixer (w/o any tone controls) would not harm 1). But OTOH, people which complain about latency of digital filters do forget that analog filters don't come without group delay either.
Of course. Buffering delays are unique to a digital implementation, but filter delays are baked into the maths. It doesn't matter if the implementation is analogue or digital.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15250
  • Country: fr
Re: Audio sample rate and latency in live sound.
« Reply #19 on: January 17, 2022, 06:23:14 pm »
Of course, delays are inevitable. That's basic physics.

As to digital audio, which is discrete, latency will always be a multiple of the sampling period. So that's the link between latency and sampling rate, the rest depending entirely on the implementation, of course. Obviously, a given latency in a given number of samples will be shorter if the sample rate is higher, but that ends there.

On computer audio, it was very common to have buffering either fixed in number of samples, or user-selectable but only among a limited number of options. For instance, old DAW software frequently had minimum buffer sizes of 256 samples or so. In this case, it's obvious the minimum latency would be reduced if the sampling rate was higher. Modern audio software on modern OSs usually has much better latency, due to the audio subsystems in general-purpose OSs being much better than they used to be, scheduling being better, data throughput being higher, and so on. So the point holds a lot less these days - the limiting factor will be the inherent latency of the OS scheduling and the data throughput, not the buffer sizes per se.

And of course that's just about the latency of audio without any processing. Further processing can add additional delays, thus "latency".

But in the extreme case for which you can achieve a one-sample latency only, then the sampling rate will definitely dictate the latency. In all other cases... it just depends.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9370
  • Country: gb
Re: Audio sample rate and latency in live sound.
« Reply #20 on: January 18, 2022, 11:51:31 am »
As to digital audio, which is discrete, latency will always be a multiple of the sampling period. So that's the link between latency and sampling rate, the rest depending entirely on the implementation, of course. Obviously, a given latency in a given number of samples will be shorter if the sample rate is higher, but that ends there.
Where did you get that idea? The latency of many ADCs works out to be something and a half samples, due to the way the comb filtering works. Depending on the nature of the response flattening filter, it might be any fraction, but something and a half is quite common.
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: Audio sample rate and latency in live sound.
« Reply #21 on: January 18, 2022, 05:34:02 pm »
As to digital audio, which is discrete, latency will always be a multiple of the sampling period. So that's the link between latency and sampling rate, the rest depending entirely on the implementation, of course. Obviously, a given latency in a given number of samples will be shorter if the sample rate is higher, but that ends there.
Where did you get that idea? The latency of many ADCs works out to be something and a half samples, due to the way the comb filtering works. Depending on the nature of the response flattening filter, it might be any fraction, but something and a half is quite common.
This is correct! Look at the data sheet for, say, the PCM4202 ADC. It specifies group delay as 9.5/fs seconds. This tells us that the filter delay is fixed and the absolute time through it scales with sample rate. At 48 kHz the delay is 198 microseconds. At 192 kHz the delay is 49.5 microseconds.

This latency is, of course, utterly swamped by acoustic times-of-flight and is certainly minimal when compared with processing latency. Of course it does matter if you have multiple coherent sources, perhaps like two microphones in X/Y capturing "stereo."
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17108
  • Country: us
  • DavidH
Re: Audio sample rate and latency in live sound.
« Reply #22 on: January 20, 2022, 01:39:40 pm »
This is correct! Look at the data sheet for, say, the PCM4202 ADC. It specifies group delay as 9.5/fs seconds. This tells us that the filter delay is fixed and the absolute time through it scales with sample rate. At 48 kHz the delay is 198 microseconds. At 192 kHz the delay is 49.5 microseconds.

That is because the fixed length filter tracks the Nyquist frequency for noise shaping and anti-aliasing.  If the filter was tied to a fixed frequency, then doubling the sample rate would require twice as many samples to be processed and the absolute latency would not change.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15250
  • Country: fr
Re: Audio sample rate and latency in live sound.
« Reply #23 on: January 20, 2022, 06:47:12 pm »
Absolutely. But in common sigma-delta ADCs, that's proportional to the sampling period, as shown above. So one can argue until the cows come home, but the point was to explain to the OP in simple concepts in what ways "latency" and sampling rate are related in typical systems. (And in what ways they wouldn't be.)

(Note that in sigma-delta converters, the sampling rate for the modulators and decimation filters is a multiple of the output sampling rate - oversampling.)

Then there's the point of defining "latency" in a *discrete* system; whether to equate it to the group delay is an interesting debate. Group delay can be fractional (but still proportional to the sampling period for a *given* filter structure), but you can't get a sample before it's ready to be read. So that opens a can of worms depending on your definition of latency and, from there, what you are going to use this figure for. That point is just to open the question of defining "latency" in a discrete system.
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1321
  • Country: de
Re: Audio sample rate and latency in live sound.
« Reply #24 on: January 20, 2022, 07:02:05 pm »
Group delay can be fractional (but still proportional to the sampling period for a *given* filter structure)

And it can be constant (-> linear phase FIR filter), or frequency-dependent (-> IIR filter or analog filter).
 

Offline Jr460

  • Regular Contributor
  • *
  • Posts: 142
Re: Audio sample rate and latency in live sound.
« Reply #25 on: January 20, 2022, 08:12:38 pm »
Some real world stuff.

A 96k sample rate for live sound seems silly to me. Yes, you can do it, but why?

Go into a studio where you may be using all kinds of plug-ins to process, and some of them work a bit better, and sound better, at 96k. And there you are making something the best you can, that has to stand up as a good mix for the next 30+ years - a different level of care compared to live, with no do-overs, additional takes, or tweaking the mix.

I've never really heard anything better at 96k. 44.1 (CD sample rate) to 48k, yeah, I can hear that. Most stuff I've seen defaults to 48k. Also bit depth: not sure of the best point, but 16 bits normally doesn't cut it for me; 24 is better. I can't hear a difference at 32 bits. So for me, I run my systems at 48k/24-bit.

Let's talk about what a delay of, say, 1 ms means. In air that is about 1 foot. 10 feet for 10 ms - that should not mess up the groove of good players. I've seen/heard what happens when the bass player has a wireless and runs way out into the crowd: the groove/feel goes all to hell. (But it looked great when he and the lead singer found the stairs to the platform for the big video wall overlooking the block.)

That same 10 ms is deadly if you have people with in-ear monitors. Why? If you sing, you hear yourself two ways: one by bone conduction and the other from the ear piece. When they are close to the same level - unlike if you have a floor wedge - and the relative delay is not changing, you get comb filtering. If you have a floor wedge, just a small movement of your head shifts the comb and your brain averages it all out; not so with in-ears.
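Those comb notches land at odd multiples of 1/(2·delay) when the two paths are at equal level. A rough sketch (the delays are picked for round numbers, not from any measurement):

```python
# First few null frequencies of a comb filter formed by summing a
# signal with an equal-level copy delayed by delay_ms.
def comb_notches_hz(delay_ms, count=4):
    d = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * d) for k in range(count)]

print(comb_notches_hz(10))  # notches at 50, 150, 250, 350 Hz
print(comb_notches_hz(1))   # notches at 500, 1500, 2500, 3500 Hz
```

Shorter delay pushes the notches up in frequency, which is why halving the latency moves the comb "up much higher".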

My main board is rated at 1.5 ms from an analog input, through converters, processing, and output converters, to an analog output, and it runs at 48k. I have a smaller, older board I can put into 96k mode, but it changes none of the system delays. BUT at 96k, all my built-in effects are cut in half. That's a deal-killer for me.

Even if I have bad ears and your blind A/B testing tells you that 96k sounds better, OK. But in a live situation you are going to have more trouble getting the source captured well. You are not using the same mics as in a studio (yeah, take the nice-sounding ribbon outside into gusts of wind), and you are not going to have a good-sounding room and good isolation. Big stadium, amp cabinets under the stage in iso boxes - yeah, you are getting closer. Now how about the mic preamps and converters, are they up to snuff? 96k is not going to make anything that was captured with issues any better.

If your overall latency is a couple of ms from the singer's mic to their in-ears, then leave it alone. Even if your board cuts it in half at 96k, they are not going to hear it. Sure, you still get a comb filter, but now it is up much higher, where they have lost most of that range from standing too close to a drummer who smacks the brass so hard he breaks sticks. No in-ears? Then forget it; you can go with long delays from your Smackie or Bear-ringer board.

(Note: if you are using Dante as the transport to/from the mixer, you can set/see the guaranteed latency, and control it by using fewer hops/switches.)

My $.02
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9370
  • Country: gb
Re: Audio sample rate and latency in live sound.
« Reply #26 on: January 20, 2022, 08:18:10 pm »
Group delay can be fractional (but still proportional to the sampling period for a *given* filter structure)

And it can be constant (-> linear phase FIR filter), or frequency-dependent (-> IIR filter or analog filter).
The group delay is a single figure. The delay at individual frequencies may vary, according to the type of filter being used.
 

Offline jmibk

  • Regular Contributor
  • *
  • Posts: 68
  • Country: at
Re: Audio sample rate and latency in live sound.
« Reply #27 on: January 22, 2022, 05:30:08 pm »
DSPs in general are normal processors like an Arduino or ESP32, but they have basic audio functionality in hardware - like FIR filters, delay blocks and so on.

Most DSPs use ring buffers for the input and the output samples. Ring buffers are arrays whose elements have the bit width of your sample bit depth (16-bit, 24-bit) and an array size of, say, 256 or 512 - this is also known as the buffer size.
The index into the array increases with every incoming sample; when it reaches the end of the array, it wraps around to zero. 256 and 512 are 8- and 9-bit values, so you can use simple index++ operations and not have to care about overrunning. That means the incoming samples are stored in a ring of values.

The processor takes these values and processes them. A gain of 6 dB is simply a multiplication by 2, for example.
After processing, the processor places the new sample into the output ring buffer, from which the hardware outputs it, via the I2S interface for example, to a DAC or sample rate converter.

The audio delay depends on the buffer size and the fill time of the ring buffer. An adequate buffer size is needed for the precision of filters, compressors and so on; they will do a bad job with a buffer size of 4, for example. So the buffer size is mostly fixed at 256 or 512.
The sample frequency determines the time it takes to fill the buffer completely with new data.
At 48 kHz it takes about 5.3 ms to fill a buffer of 256 samples.
At 96 kHz the same takes about 2.7 ms.

So - you are right that the sample rate affects the audio delay - but wait, there is more. Programming processing blocks is more difficult. For some tasks a few samples are enough to get a new output value. For a simple gain stage you don't need past samples to calculate the new value of a sample, so the delay is nearly zero.

In most designs the output ring buffer's index trails the input buffer's index closely, which shortens the input-to-output delay. If you do it that way, the buffer size doesn't matter regarding audio delay; in that case the fill time of the complete buffer also has barely any effect on the audio delay.

So what's the source of the audio delay, if buffer size and sample rate don't affect it?
It's all about algorithms. Filters need past samples to realize your filter's impulse response. The impulse response corresponds directly to the frequency response, but in the time domain (time on the horizontal axis instead of frequency).
The more samples they get, the higher the filter precision, the frequency stability of your pass-band and stop-band, and so on. There's a difference between IIR (infinite impulse response) and FIR (finite impulse response) filters. IIR filters take far fewer past samples, but they use the previous calculation in the current calculation - like the feedback of an amplifier.
FIR filters have fixed filter coefficients, and they need as many samples out of the input buffer as they have filter coefficients. The more coefficients, the higher the filter quality - and the longer it takes for a result to come through the filter.

That's scratching the surface of DSP theory, but it should get you closer to digital audio and, maybe, audio delay.
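A minimal ring buffer along those lines (a sketch, not any particular DSP's API): with a power-of-two size, the index wraps with a cheap bitmask instead of a compare.

```python
# Minimal ring buffer: fixed power-of-two size, index wraps via bitmask.
class RingBuffer:
    def __init__(self, size=256):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.buf = [0] * size
        self.mask = size - 1
        self.idx = 0

    def push(self, sample):
        self.buf[self.idx] = sample
        self.idx = (self.idx + 1) & self.mask  # wraps around automatically

    def recent(self, n):
        """Return the last n samples pushed, oldest first."""
        return [self.buf[(self.idx - n + i) & self.mask] for i in range(n)]

rb = RingBuffer(8)
for s in range(10):          # push 10 samples into an 8-slot ring:
    rb.push(s)               # the oldest two are overwritten
print(rb.recent(3))          # [7, 8, 9]
```

A filter with N coefficients would call recent(N) each sample - which is exactly why more coefficients mean more history and more delay.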
 

