Author Topic: Audio sample rate and latency in live sound.  (Read 30131 times)


Offline Jr460

  • Regular Contributor
  • *
  • Posts: 142
Re: Audio sample rate and latency in live sound.
« Reply #25 on: January 20, 2022, 08:12:38 pm »
Some real world stuff.

96k sample rate for live sound to me seems silly.   Yes, you can do it, but why?

Go into a studio where you may be using all kinds of plug-ins to process, and some of them work a bit better, sound a bit better, at 96k.   There you are making something that has to stand up as a good mix for the next 30+ years, a different level of care compared to live, with no do-overs, additional takes, or chances to tweak the mix.

I've never really heard anything better at 96k.  44.1k (CD sample rate) to 48k, yeah, I can hear that.   Most gear I've seen defaults to 48k.   Also bit depth: not sure of the exact sweet spot, but 16 bits normally doesn't cut it for me, 24 is better, and I can't hear a difference at 32 bits.   So for me, I run my systems at 48k/24-bit.

Let's talk about what a delay of, say, 1 ms means.   In air that is about 1 foot.   10 feet for 10 ms, and that should not mess up the groove of good players.  I've seen/heard what happens when the bass player has a wireless rig and runs way out into the crowd: the groove/feel goes all to hell.  (But it looked great when he and the lead singer found the stairs to the platform by the big video wall overlooking the block.)
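The 1 ms per foot rule is just the speed of sound. A quick sanity check (assuming roughly 1125 ft/s at room temperature; the function name is mine, purely illustrative):

```python
# Acoustic delay over a distance, using c ~ 1125 ft/s (~343 m/s at ~20 C).
SPEED_OF_SOUND_FT_PER_S = 1125.0

def acoustic_delay_ms(distance_ft):
    """Time for sound to travel distance_ft feet, in milliseconds."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

for d in (1, 10, 50):
    # 1 ft is just under 1 ms; 10 ft is just under 10 ms.
    print(f"{d:3d} ft -> {acoustic_delay_ms(d):5.2f} ms")
```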

That same 10 ms is deadly if you have people on in-ear monitors.   Why?  If you sing, you hear yourself two ways: one via bone conduction, the other from the earpiece.   When the two are close to the same level, unlike with a floor wedge, and the relative delay is not changing, you get comb filtering.   With a floor wedge, a small movement of your head shifts the comb and your brain averages it all out; not so with in-ears.
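The comb-filter arithmetic behind this: summing two equal-level copies of a voice offset by a delay tau puts the first notch at 1/(2*tau), with notches repeating every 1/tau. A sketch (function names are mine, not from the post):

```python
# Comb-filter notch positions for two equal-level signals offset by a delay.
def first_notch_hz(delay_ms):
    """Lowest cancellation frequency: f = 1 / (2 * tau)."""
    return 1.0 / (2.0 * delay_ms / 1000.0)

for tau_ms in (1.0, 10.0):
    # 10 ms puts the first notch at 50 Hz with 100 Hz spacing:
    # the comb lands right across the vocal range.
    print(f"{tau_ms:4.1f} ms -> first notch ~{first_notch_hz(tau_ms):.0f} Hz, "
          f"spacing ~{1000.0 / tau_ms:.0f} Hz")
```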

My main board is rated at 1.5 ms from an analog input, through the converters, processing, and output converters, to an analog output, running at 48k.   I have a smaller, older board I can switch to a 96k rate, but it changes none of the system delays.   BUT at 96k, all my built-in effects are cut in half.   That's a deal killer for me.

Even if I have bad ears and your blind A/B testing tells you that 96k sounds better, OK.   But in a live situation you are going to have more trouble getting the source captured well in the first place.   You are not using the same mics as in a studio (yeah, take that nice-sounding ribbon outside into gusts of wind), and you are not going to have a good-sounding room or good isolation.  Big stadium, amp cabinets under the stage in iso boxes?  Yeah, you are getting closer.   Now how about the mic preamps and converters, are they up to snuff?  96k is not going to make a capture with issues any better.

If your overall latency is a couple of ms from the singer's mic to their in-ears, then leave it alone.   Even if your board cuts it in half at 96k, they are not going to hear the difference.   Sure, you still get a comb filter, but now it sits much higher, in a range they have mostly lost from standing too close to a drummer who smacks the brass so hard he breaks sticks.   No in-ears?  Then forget it, you can get away with long delays from your Smackie or Bear-ringer board.

(Note: if you are using Dante as the transport to/from the mixer, you can see and set the guaranteed latency, and control it by using fewer hops/switches.)

My $.02
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9283
  • Country: gb
Re: Audio sample rate and latency in live sound.
« Reply #26 on: January 20, 2022, 08:18:10 pm »
Group delay can be fractional (but still proportional to the sampling period for a *given* filter structure)

And it can be constant (-> linear phase FIR filter), or frequency-dependent (-> IIR filter or analog filter).
The group delay is a single figure. The delay at individual frequencies may vary, according to the type of filter being used.
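To put numbers on "proportional to the sampling period": a linear-phase FIR with N taps has a constant group delay of (N-1)/2 samples, so the same filter structure halves its delay in milliseconds when the sample rate doubles. A quick sketch (not from the posts above):

```python
def fir_group_delay_ms(num_taps, fs_hz):
    """Group delay of a symmetric (linear-phase) FIR: (N-1)/2 samples,
    converted to milliseconds at the given sample rate."""
    return (num_taps - 1) / 2.0 / fs_hz * 1000.0

# Same 97-tap filter structure at two rates: the delay halves.
print(fir_group_delay_ms(97, 48000))   # ~1.0 ms at 48k
print(fir_group_delay_ms(97, 96000))   # ~0.5 ms at 96k
```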
 

Offline jmibk

  • Regular Contributor
  • *
  • Posts: 68
  • Country: at
Re: Audio sample rate and latency in live sound.
« Reply #27 on: January 22, 2022, 05:30:08 pm »
DSPs in general are normal processors like an Arduino or an ESP32, but they have basic audio functionality in hardware: FIR filters, delay blocks, and so on.

Most DSPs use ring buffers for the input and output samples. A ring buffer is an array whose elements are as wide as your sample bit depth (16-bit, 24-bit), with an array size of 256 or 512; this is also known as the buffer size.
The index into the array increases with every incoming sample, and when it reaches the end of the array it continues at zero. 256 and 512 fit an 8- or 9-bit index, so you can use a simple index++ (wrapping naturally or with a cheap mask) and not worry about overrunning. The incoming samples are thus stored in a ring of values.
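The power-of-two trick described above can be sketched like this (a minimal single-channel buffer for illustration, not any particular DSP's implementation):

```python
class RingBuffer:
    """Minimal power-of-two ring buffer: the index wraps with a bitmask,
    so writes never need a bounds check."""
    def __init__(self, size=256):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.mask = size - 1
        self.data = [0] * size
        self.write_idx = 0

    def push(self, sample):
        self.data[self.write_idx & self.mask] = sample
        self.write_idx += 1          # free-running; the mask does the wrapping

    def recent(self, n):
        """Last n samples, oldest first (n <= size)."""
        return [self.data[(self.write_idx - n + i) & self.mask] for i in range(n)]

rb = RingBuffer(8)
for s in range(10):                  # push more samples than the size:
    rb.push(s)                       # the oldest ones are overwritten
print(rb.recent(3))                  # the three newest samples survive
```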

The processor takes these values and processes them. A gain of 6 dB, for example, is simply a multiplication by 2.
After processing, the processor places the new sample into the output ring buffer, from which the hardware sends it out, for example via an I2S interface, to a DAC or sample rate converter.
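The 6 dB = x2 example generalizes: linear gain is 10^(dB/20). A sketch of that per-sample gain stage (function names are mine):

```python
def db_to_linear(db):
    """Convert a gain in decibels to a linear multiplier."""
    return 10.0 ** (db / 20.0)

def process_block(samples, gain_db):
    """Apply a fixed gain to a block of samples: one multiply each."""
    g = db_to_linear(gain_db)
    return [s * g for s in samples]

print(round(db_to_linear(6.0), 3))   # ~1.995, i.e. 6 dB is roughly x2
print(process_block([0.1, -0.2, 0.3], 20.0))   # +20 dB = exactly x10
```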

The audio delay depends on the buffer size and the fill time of the ring buffer. An adequate buffer size is needed for the precision of filters, compressors, and so on; they will do a bad job with a buffer size of 4, for example. So the buffer size is mostly fixed at 256 or 512.
The fill time is how long it takes to fill the buffer completely with new data at the given sample rate:
At 48 kHz it takes about 5.3 ms to fill a buffer of 256 samples.
At 96 kHz the same takes about 2.7 ms.
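Those fill times are just buffer size divided by sample rate; a one-liner to check (the name is illustrative):

```python
def buffer_fill_ms(buffer_size, sample_rate_hz):
    """Time to fill a buffer completely with fresh samples, in milliseconds."""
    return buffer_size / sample_rate_hz * 1000.0

# Doubling the sample rate halves the fill time for the same buffer.
print(buffer_fill_ms(256, 48000))   # ~5.33 ms
print(buffer_fill_ms(256, 96000))   # ~2.67 ms
```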

So, you are right that the sample rate affects the audio delay, but wait, there is more. Programming the processing blocks is the harder part. For some tasks a few samples are enough to get a new output value; for a simple gain stage you don't need past samples at all to calculate a new output sample, so the delay is nearly zero.

In most designs the output ring buffer's index trails the input buffer's index by only a few samples, which shortens the input-to-output delay. If you do that, the buffer size barely matters for audio delay, and neither does the fill time of the complete buffer.

So what is the source of the audio delay, if buffer size and sample rate barely affect it?
It's all about algorithms. Filters need past samples to realize your filter's impulse response. The impulse response corresponds directly to the frequency response, but in the time domain (time on the horizontal axis instead of frequency).
The more samples they get, the higher the filter precision, the frequency stability of your pass-band and stop-band, and so on. There's a difference between IIR (infinite impulse response) and FIR (finite impulse response) filters: an IIR filter takes far fewer past samples, but it feeds its previous results back into the current calculation, like feedback around an amplifier.
FIR filters have fixed filter coefficients and need as many samples from the input buffer as they have coefficients. The more coefficients, the better the filter can be, and the longer it takes for a result to come through the filter.
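A tiny direct-form FIR makes the "as many samples as coefficients" point concrete; here a 4-tap moving average needs 4 input samples before a step input fully comes through (a sketch, not any particular DSP's code):

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR: each output is a dot product of the coefficients with
    the len(coeffs) most recent inputs (zeros assumed before the start)."""
    out = []
    hist = [0.0] * len(coeffs)          # newest sample first
    for s in samples:
        hist = [s] + hist[:-1]
        out.append(sum(c * h for c, h in zip(coeffs, hist)))
    return out

# A step input ramps up over 4 samples before reaching full level:
# the filter's history has to fill before the result "comes through".
print(fir_filter([1, 1, 1, 1, 1, 1], [0.25] * 4))   # [0.25, 0.5, 0.75, 1.0, 1.0, 1.0]
```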

That's scratching the surface of DSP theory, but it should get you closer to digital audio, and maybe to audio delay.
 

