Making a sound camera
| ogden:
--- Quote from: MasterT on May 02, 2018, 01:35:39 pm ---
--- Quote from: Marco on May 02, 2018, 01:25:34 pm ---
If you fill the imaginary input of a complex FFT with 0s, the output is symmetric ... that the index for the frequency bins counts up higher is irrelevant, half the information is completely redundant.
--- End quote ---
Show your code, then talk.
--- End quote ---

Code can be 3rd party as well :)
https://www.keil.com/pack/doc/CMSIS/DSP/html/group__RealFFT.html
Most likely you are interested in just the DSP package:
https://github.com/ARM-software/CMSIS_5/releases/tag/5.3.0
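For reference, the real-FFT entry point in the linked CMSIS-DSP RealFFT group is used roughly like this; a minimal sketch, where FFT_LEN, the buffer names and the surrounding function are illustrative assumptions rather than anyone's posted code:

--- Code: ---
#include "arm_math.h"

#define FFT_LEN 1024                    /* power of two, 32..4096 for rfft_fast */

static float32_t mic_block[FFT_LEN];    /* time-domain samples from one microphone */
static float32_t spectrum[FFT_LEN];     /* packed output: [DC, Nyquist, re, im, re, im, ...] */

void compute_spectrum(void)
{
    arm_rfft_fast_instance_f32 rfft;
    arm_rfft_fast_init_f32(&rfft, FFT_LEN);

    /* forward real FFT (ifftFlag = 0); note the call modifies the input buffer */
    arm_rfft_fast_f32(&rfft, mic_block, spectrum, 0);
}
--- End code ---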
| Sparker:
--- Quote from: MasterT on May 02, 2018, 01:27:35 am ---
Right, cross-correlation outputs the time difference. FFT outputs the phase difference, but since we know the frequency and the speed of sound in air, it's a piece of cake to translate phase to time and back, whichever is more practical for the final result. In direction finding, the angle to the sound source is what has to be determined, so the time or phase difference must be translated to an angle based on the distance between the two mics. The second argument against cross-correlation is that it produces a single summary result over the whole bandwidth. If there are two or more sound sources (there always are in the real world), CC gives meaningless data. At the same time, FFT can easily distinguish hundreds of sound sources, as long as they don't overlap in bandwidth, or don't overlap completely. Cutting a 78 Hz slice out of the 0-10 kHz band and sorting out the sound patterns of each noise source, you can identify a narrow band specific to each of them and find the right direction. The same applies to standing waves, reverberation and echoes, especially in indoor environments: only FFT is capable of sorting out and throwing away the part of the data pool that is most distorted/corrupted and still resolving the trigonometric equations.
--- End quote ---

I've just tested cross-correlation in MATLAB and it can actually work nicely with multiple noise-like signals that have narrow autocorrelation and low cross-correlation with each other. But the method works badly with human speech, because it's not as random as I initially assumed. :( DFT is indeed the choice here. What I don't understand is: if both sound sources occupy exactly the same frequency, like the car control panel in the video in this thread, how can the system differentiate them?
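A minimal sketch of the lag-to-angle step described in the quote above, using CMSIS-DSP's time-domain correlation. The sample rate, mic spacing and buffer names are assumptions for illustration, the sign convention depends on which mic is taken as reference, and the exact zero-lag index should be checked against the CMSIS documentation:

--- Code: ---
#include <math.h>
#include "arm_math.h"

#define N        512          /* samples per microphone block    */
#define FS       40000.0f     /* sample rate, Hz (assumed)       */
#define MIC_DIST 0.10f        /* microphone spacing, m (assumed) */
#define C_SOUND  343.0f       /* speed of sound, m/s             */

static float32_t micA[N], micB[N];
static float32_t xcorr[2 * N - 1];      /* arm_correlate_f32 output length */

/* Estimate the angle of arrival from the lag of the correlation peak. */
float estimate_angle(void)
{
    arm_correlate_f32(micA, N, micB, N, xcorr);

    float32_t peak;
    uint32_t  peak_idx;
    arm_max_f32(xcorr, 2 * N - 1, &peak, &peak_idx);

    /* zero lag sits in the middle of the output vector */
    int32_t lag = (int32_t)peak_idx - (N - 1);

    float tdoa = lag / FS;                   /* time difference, seconds */
    float s = tdoa * C_SOUND / MIC_DIST;     /* sin(angle)               */
    if (s > 1.0f)  s = 1.0f;
    if (s < -1.0f) s = -1.0f;
    return asinf(s);                         /* radians, 0 = broadside   */
}
--- End code ---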
| Marco:
It's all well and good to say "use the DFT", but on its own that doesn't really mean anything. A bunch of phase differences between microphones at given frequencies isn't that easy to convert to a delay; the phases are modulo 2π, after all.

If you want to pick out part of the spectrum, just multiply the FFT-transformed microphone signals by a Fourier-domain representation of a minimum-phase bandpass filter before multiplying them with each other and doing the inverse FFT. It's still cross-correlation, just of band-limited versions of the microphone signals.

PS: don't forget to zero-pad the microphone signals before doing the FFT.
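A minimal sketch of that recipe built from CMSIS-DSP blocks: zero-pad, forward FFT both mic blocks, multiply one spectrum by the conjugate of the other, inverse FFT. The lengths and buffer names are assumptions, and the minimum-phase bandpass weighting is only marked by a comment rather than implemented:

--- Code: ---
#include "arm_math.h"
#include "arm_const_structs.h"          /* arm_cfft_sR_f32_len1024 */

#define N        512                    /* real samples per microphone block          */
#define FFT_LEN  1024                   /* zero-padded to 2*N to avoid circular wrap  */

/* interleaved complex buffers: re, im, re, im, ... */
static float32_t a[2 * FFT_LEN], b[2 * FFT_LEN], cc[2 * FFT_LEN];

void xcorr_fft(const float32_t *mic_a, const float32_t *mic_b)
{
    for (uint32_t i = 0; i < FFT_LEN; i++) {
        a[2 * i]     = (i < N) ? mic_a[i] : 0.0f;   /* zero padding */
        a[2 * i + 1] = 0.0f;
        b[2 * i]     = (i < N) ? mic_b[i] : 0.0f;
        b[2 * i + 1] = 0.0f;
    }

    arm_cfft_f32(&arm_cfft_sR_f32_len1024, a, 0, 1);    /* forward FFTs, in place */
    arm_cfft_f32(&arm_cfft_sR_f32_len1024, b, 0, 1);

    arm_cmplx_conj_f32(b, b, FFT_LEN);                  /* conj(B)                */
    arm_cmplx_mult_cmplx_f32(a, b, cc, FFT_LEN);        /* A * conj(B)            */

    /* a minimum-phase bandpass weighting would be applied to cc here */

    arm_cfft_f32(&arm_cfft_sR_f32_len1024, cc, 1, 1);   /* inverse FFT            */

    /* the real parts cc[0], cc[2], cc[4], ... now hold the cross-correlation;
       positive lags sit at the start, negative lags wrap around to the end */
}
--- End code ---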
| ogden:
--- Quote from: MasterT on May 02, 2018, 05:29:01 pm ---
It's crappy Radix-2, that is good for undeveloped third world tribes numba-umba.
--- End quote ---

Here we go 8)

--- Quote ---
And for whoever is picking up any BS posted on wiki pages and then posting it in this thread, which was started by an OP who acknowledged his lack of software skills in the first message: I don't see a point in continuing this dispute here, should we start another FFT-related thread?
--- End quote ---

Yes, please. Share your wisdom. I am especially interested in whether the discussion can result in faster and/or smaller Cortex-M-optimized complex FFT code than that in the CMSIS DSP lib.
| MasterT:
https://www.eevblog.com/forum/microcontrollers/fft-processing-using-ucpu/ |