When that patent was issued I was working at the Mobil R&D lab in Farmers Branch having been laid off from ARCO R&D in 1991. The patent is an example of ignorance recreating what was well known in another field. It is a huge problem, especially in DSP. I've read far too many papers only to realize at the end it was just another wheel that had been around for a long time with a different name. The "Modified Discrete Fourier Transform" of claim 33 in the Lake patent is just the standard real to complex FFT which had been around since the 1950's, though not well known as it was being done by hand with a desk calculator. I had the privilege of having a few drinks with someone whose name I forget who was doing FFTs in the mid 1950's. Mention of it was made in print, but it never attracted any attention until after the Cooley-Tukey paper.
An echo is a multiply-add operation: scale and sum. If the echo arrives between time samples, it must be interpolated using a sinc(t) operator, which takes about 8 multiply-add operations to get the error down to a few percent.
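Roughly what I mean, sketched in numpy (the function name and tap count are mine; an 8-tap windowed sinc is one reasonable choice, not the only one):

```python
import numpy as np

def fractional_delay_echo(x, delay, gain, taps=8):
    """Add a scaled echo at a non-integer sample delay using an
    8-tap windowed-sinc interpolator (illustrative sketch)."""
    n0 = int(np.floor(delay))              # integer part of the delay
    frac = delay - n0                      # fractional part, 0 <= frac < 1
    k = np.arange(-taps // 2 + 1, taps // 2 + 1)   # tap offsets, e.g. -3..4
    h = np.sinc(k - frac) * np.hamming(taps)       # window tames truncation error
    h /= h.sum()                                   # unity gain at DC
    y = x.copy()
    for i, c in zip(k, h):                 # one multiply-add per tap
        shift = n0 + i
        if shift >= 0:
            y[shift:] += gain * c * x[:len(x) - shift]
    return y
```

For an echo on an exact sample boundary the kernel collapses to a single tap; the 8 multiply-adds only earn their keep for fractional delays.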
I typically use FFTs to implement convolutions because it is both faster and more flexible. But I've only worked on recorded data.
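The FFT route, for recorded data, is just multiply in the frequency domain with enough zero padding that the circular wraparound of the DFT doesn't alias the result (a minimal sketch, numpy only):

```python
import numpy as np

def fft_convolve(x, h):
    """Convolve a recorded trace x with a filter h via the FFT."""
    n = len(x) + len(h) - 1                 # full linear-convolution length
    nfft = 1 << (n - 1).bit_length()        # next power of two for speed
    X = np.fft.rfft(x, nfft)                # real-to-complex FFT
    H = np.fft.rfft(h, nfft)
    return np.fft.irfft(X * H, nfft)[:n]    # multiply spectra, invert, trim padding
```

The flexibility comes from having the spectrum in hand: shaping, whitening, or notching is a pointwise operation before the inverse transform.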
When I was at UT Austin, I wrote a paper for a class which was a complete analysis of the reflection and transmission response of a plane layered half space with a free surface boundary with the source and receiver embedded at arbitrary locations in the medium. That coda is infinitely long. In the Z transform notation I used it has the form of a fraction. The denominator produces terms which continue forever. The problem is the geophysical equivalent of including echos from both sides of the wall and the other rooms in the building. I should note that in more than one dimension the problem becomes very difficult and is not amenable to a closed form expression as the 1D case is.
The classic example is reverberation in a water layer or between two walls. This was first implemented in analog form using a tape recorder with feedback from the output to the input. It is far simpler to do by feeding the output of a delay line back into the input suitably scaled. This is precisely what tape, spring and plate reverbs do.
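The digital version of that tape loop is a few lines (names are mine; this is the textbook recirculating comb, not any particular product's algorithm):

```python
import numpy as np

def feedback_comb(x, delay, gain):
    """Delay line with its output fed back into its input, scaled by
    `gain`: an impulse in produces the decaying echo train
    1, gain, gain^2, ... spaced `delay` samples apart."""
    y = np.array(x, dtype=float)
    for n in range(delay, len(y)):
        y[n] += gain * y[n - delay]     # the feedback tap
    return y
```

Keep |gain| < 1 or the loop is unstable, which is exactly what happened when the analog tape feedback was turned up too far.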
An IIR filter is one in which the Z transform has terms in the denominator. An FIR filter only has terms in the numerator. The classic IIR example is:
1/(1 - R*Z^a), which evaluates by polynomial division to 1 + R*Z^a + R^2*Z^(2a) + R^3*Z^(3a) + ...
That's the denominator for reflection between a surface with a reflection coefficient of 1 and another surface with reflection coefficient of "R" with a delay of "a" for the propagation. The proper expression is more complex, and having done this 30 years ago I would have to look it up to get it right. In this context there is no need.
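The two forms really are the same operator, which is easy to check numerically: run the one-multiply-per-sample recursion, then convolve with the divided-out series truncated after enough terms (a sketch; names are mine):

```python
import numpy as np

def iir(x, R, a):
    """Recursive (IIR) evaluation of 1/(1 - R*Z^a):
    one multiply-add per output sample."""
    y = np.array(x, dtype=float)
    for n in range(a, len(y)):
        y[n] += R * y[n - a]
    return y

def fir_series(x, R, a, terms):
    """The same operator written out by polynomial division as the FIR
    series 1 + R*Z^a + R^2*Z^(2a) + ... -- necessarily truncated,
    since the coda never ends."""
    h = np.zeros(a * terms + 1)
    h[::a] = R ** np.arange(terms + 1)      # coefficients R^k at lags k*a
    return np.convolve(x, h)[:len(x)]
```

The recursion does the whole job in one tap; the FIR version needs a new tap for every bounce you want to keep.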
The point is this. A data stream passing through a DSP system need only have terms added to the input at the appropriate delays. If the reflecting surface is moving, then the delay changes at each bounce.
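A moving reflector just means the tap position drifts, so each output sample reads the delay line at a slightly different (generally fractional) point. A minimal sketch, with 2-tap linear interpolation to keep it short (the 8-tap sinc above would be the careful choice; names and the constant-velocity assumption are mine):

```python
import numpy as np

def moving_echo(x, delay0, rate, gain):
    """Single echo off a reflector whose delay grows by `rate`
    samples per output sample (constant velocity)."""
    y = np.array(x, dtype=float)
    for n in range(len(x)):
        d = delay0 + rate * n              # delay at this output sample
        m = int(np.floor(n - d))           # integer read position
        frac = (n - d) - m                 # fractional remainder
        if m >= 0 and m + 1 < len(x):
            y[n] += gain * ((1 - frac) * x[m] + frac * x[m + 1])
    return y
```

A nonzero `rate` stretches or compresses the echo in time, which is the Doppler shift you hear off a moving surface.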
Designing a filter to reproduce the room ambiance from a recording is certainly easily done in the frequency domain via an FFT. But it does not need to be implemented that way. The design of filters to remove reverberation effects was introduced by Norbert Wiener around 1940 and is called a "prediction error filter". The first example was done by his student, Enders Robinson, by hand over the course of a summer at MIT in the early 50's. The claim to fame of my PhD supervisor at Austin was implementing a dereverberation filter using a magnetic drum with movable recording and playback heads in the late 50's. Millions of hours of computer time have been spent doing "predictive deconvolution" of seismic data.
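The core of predictive decon fits in a screenful: build the autocorrelation, solve the normal equations for a filter that predicts the trace from its own past at some gap, and output the unpredictable part. A sketch of the idea, not production code (that would use Levinson recursion; the prewhitening constant and names here are mine):

```python
import numpy as np

def prediction_error_filter(x, nlags, gap=1):
    """Wiener prediction-error filtering: predict x[n] from
    x[n-gap] .. x[n-gap-nlags+1], return the prediction error."""
    r = np.correlate(x, x, mode='full')[len(x) - 1:]   # autocorrelation, lags 0..N-1
    R = np.array([[r[abs(i - j)] for j in range(nlags)]
                  for i in range(nlags)])              # Toeplitz normal equations
    g = r[gap:gap + nlags]                             # right-hand side at the gap
    f = np.linalg.solve(R + 1e-6 * r[0] * np.eye(nlags), g)  # small prewhitening
    pef = np.zeros(gap + nlags)                        # prediction-error operator:
    pef[0] = 1.0                                       # 1 at lag 0,
    pef[gap:] = -f                                     # -f at lags gap..gap+nlags-1
    return np.convolve(x, pef)[:len(x)]
```

On a trace carrying a periodic decaying echo train, the filter learns the bounce and the output collapses back toward the original spike, which is the whole point of the exercise.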
The compute is trivial now, but when I started with Amoco in 1982 we started getting 120-trace data back from the boats. All of a sudden the decon step in the processing was taking several weeks to complete. All this was being done on a large IBM mainframe in Chicago with an attached vector processor. These were tape-to-tape jobs, so each step was a single compute job. The predictive decon step needed over a week to run at normal priority. (This was a multiprocessing batch system.) As a result, it would be aborted when they took the machine down for the Sunday maintenance period. The job would then be restarted at a higher priority, but would again fail to complete. So it would restart again. Eventually it would run at very high priority, which allowed it to complete but forced other jobs to abort and go through the same cycle. Once we found the cause we just broke the data into several pieces, ran them, and merged them after decon.
Yes, I know. tl;dr