Just a general question on how the analog delay topology is actually implemented. I have an idea of how the signal can travel in the digital domain, something like ADC to processor to memory to DAC. But how is it really done in an analog delay, for a musical instrument for example?
The Bucket-Brigade Device (BBD) was the most common way to get specific delay times for audio use. It works in much the same way as the Charge-Coupled Device (CCD) used for imaging, except that instead of a pixel voltage, the input to the delay line is the signal voltage.
The BBD is an apt name because that's basically how it works. You have a string of charge "buckets" all in series. The input signal is "clocked" into the first bucket, and then on each clock, the charge in each bucket is transferred to the next bucket in the string. At the end of the line, the output of the last bucket is low-pass filtered and buffered to drive the delay line output. It is sampling, since a snapshot of the input signal is captured and dumped into that first bucket at regular intervals. But it is not quantization: the signal in the buckets is never converted to a digital number. It remains just a voltage (a charge on a capacitor) through the entire delay line.
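Here is a minimal sketch of that idea in software form, purely to illustrate the shifting behavior (the bucket count and names are my own, not taken from any particular BBD chip). Each "bucket" holds an analog sample value, and on every clock tick the contents shift one stage toward the output; nothing is ever converted to a digital code, the floats just stand in for capacitor voltages.

```python
NUM_BUCKETS = 8            # real BBD chips have hundreds to thousands of stages
buckets = [0.0] * NUM_BUCKETS

def clock_tick(input_voltage):
    """Shift every bucket's charge one stage toward the output, sample the input."""
    output_voltage = buckets[-1]           # the last bucket drives the output
    for i in range(NUM_BUCKETS - 1, 0, -1):
        buckets[i] = buckets[i - 1]        # charge transfer, stage by stage
    buckets[0] = input_voltage             # new snapshot into the first bucket
    return output_voltage

# Feed in a short ramp; the same values emerge NUM_BUCKETS clocks later.
for n in range(16):
    vin = 0.1 * n
    vout = clock_tick(vin)
    print(f"clock {n:2d}: in = {vin:.2f}  out = {vout:.2f}")
```

The output is simply the input waveform shifted by as many clock periods as there are buckets, which is the whole trick.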
Delay time is set by two mechanisms, which are used together. One is simply the number of "buckets" in the string: more buckets, more delay. The second is the clock rate: faster clocking means shorter delay time. The clock rate is the sample rate, so the lower the clock rate, the lower the bandwidth of the system. The upper limit on clock frequency is set by the clock drivers. Each bucket is connected to the previous and the next by FET switches, and the clock controls the gate of each; all of those gates are connected in parallel. Thus the clock driver has to be able to drive the capacitance of hundreds of gates at once.
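As a rough illustration of how those two mechanisms trade off, here is a small sketch using the relation commonly quoted for two-phase-clocked BBDs, delay = stages / (2 × clock frequency); the specific stage counts and clock rates below are assumed values for illustration, not figures from the question.

```python
def bbd_delay_seconds(num_stages: int, clock_hz: float) -> float:
    """Delay through the line: a two-phase clock moves each sample
    two stages per clock cycle, so the sample needs num_stages / 2 cycles
    to reach the output."""
    return num_stages / (2.0 * clock_hz)

# More buckets -> more delay; faster clock -> less delay (but more bandwidth).
print(bbd_delay_seconds(1024, 100e3))   # ~0.005 s: short, bright slapback
print(bbd_delay_seconds(4096, 10e3))    # ~0.205 s: long delay, low bandwidth
```

The second case makes the bandwidth trade-off concrete: a 10 kHz clock is also a 10 kHz sample rate, so the audio has to be band-limited to only a few kilohertz before it enters the line, which is why long analog delays sound dark.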