Author Topic: Making a sound camera  (Read 9969 times)


Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
Making a sound camera
« on: April 30, 2018, 11:27:42 pm »
Hello,
Wouldn't it be nice to see sound, like this?
The common way to do that is with an FPGA computing an FFT for each microphone and calculating phase deltas.
Is it possible to do that without resorting to programming, maybe even using analog components?

In other words:
M-M-M
|   |   |
M-M-M
|   |   |
M-M-M
for each M, determine the phase shift, relative to its neighbor Ms, of the frequency with the highest amplitude
light up the LED behind each M on the other side of the PCB with (phase shift, if it's positive, meaning the sound reaches this mic ahead of its neighbors) * amplitude

EDIT: changed the title, since we're still figuring out how to make this thing.
« Last Edit: May 01, 2018, 11:04:31 pm by daslolo »
nine nine nein
 

Offline Sparker

  • Regular Contributor
  • *
  • Posts: 56
  • Country: ru
The dumbest solution to this problem that comes to my mind is:
if you have a sinewave of frequency w and a delayed copy of it, shifted by phase phi, you multiply them:
cos(w*t)*cos(w*t + phi) = 0.5*cos(2*w*t + phi) + 0.5*cos(phi)
Then you can low-pass filter to get rid of the high-frequency part, and the remaining DC term depends only on the phase difference of the sinewaves  :-+.
Now here comes a problem: for different frequencies the same phase shift means a different distance traveled, delta-phi = 2*pi*distance/wavelength, so you must have some way to tell the frequencies apart. Really, this thing is just asking for DSP and an FFT.  :)
« Last Edit: April 30, 2018, 11:55:13 pm by Sparker »
 

Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
I like your solution, it sings.
I'd really like to avoid chips that I have to program, so maybe each microphone could go through N barely-overlapping filters, each feeding its own phase detector.
M-F(n)-PHD(n)            \
   -F(n+1)-PHD(n+1)   - combine to form RGB - LED
   -F(n+2)-PHD(n+2)   /
    ....
nine nine nein
 

Offline MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
M-M-M
for each M, determine the phase shift, relative to its neighbor Ms, of the frequency with the highest amplitude
The problem is that each M has its own highest-magnitude frequency. FFT is the only solution, though not necessarily on an FPGA: a small MCU board like an STM32, SAM3X or ATmega328 would be sufficient.
 

Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
M-M-M
for each M, determine the phase shift, relative to its neighbor Ms, of the frequency with the highest amplitude
The problem is that each M has its own highest-magnitude frequency. FFT is the only solution, though not necessarily on an FPGA: a small MCU board like an STM32, SAM3X or ATmega328 would be sufficient.
In my experience with the esp32, ADC conversion takes time: 1 µs per the datasheet, way more if accessed via the Arduino lib (1k samples take 10 ms!!!). And I think (I haven't figured out how to test this) that if I read one ADC pin, then another, the second one is sampled 1 µs after the first, so the sampling itself already introduces a phase shift.
Now I may be wrong, and maybe there is a way to freeze all the ADC buckets at the same time.
nine nine nein
 

Offline Sparker

  • Regular Contributor
  • *
  • Posts: 56
  • Country: ru
You probably need N ADCs, or ADCs that can sample many channels simultaneously and then return the data over a serial link. They must be clocked simultaneously, but once the conversion is over you can use whatever method to read the data before the next conversion starts. Maybe their serial interfaces can be daisy-chained? Maybe they have parallel interfaces and you can connect them all to one read port of your MCU, using each ADC's chip select to read its data.  :-// So many possibilities and so many kinds of ADCs.
If you care only about 0...10 kHz frequency band, you can sample at about 20 kHz, which is 50 us between samples. Probably an AVR can handle this data rate easily.
Just store this data into a buffer, as many samples as you can, then you can do whatever you like with it. I'd offload it to matlab/octave and try to process it there first.
 

Offline MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
if I read one ADC pin, then another, the second one is sampled 1 µs after the first, so the sampling itself already introduces a phase shift.
Now I may be wrong, and maybe there is a way to freeze all the ADC buckets at the same time.
You don't need sampling to be synchronous. Sure, with one ADC there will be a phase offset, but it's a CONSTANT defined by the sampling rate. Just subtract it out of the phase difference.
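In numbers (the sample rate and frequency below are just placeholders): the inter-channel offset from sequential sampling is a fixed, known phase per frequency bin, so it can simply be subtracted.

```python
import numpy as np

fs = 20_000                    # sample rate, Hz (placeholder)
dt = 1 / fs                    # channel B is read one slot after channel A

f = 1_000                      # frequency of the bin being compared, Hz
measured_phase = 0.9           # hypothetical measured phase difference, rad

# The sequential-sampling offset is a constant for a given bin:
sampling_offset = 2 * np.pi * f * dt
acoustic_phase = measured_phase - sampling_offset
```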
 
The following users thanked this post: Sparker, werediver

Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
if I read one ADC pin, then another, the second one is sampled 1 µs after the first, so the sampling itself already introduces a phase shift.
Now I may be wrong, and maybe there is a way to freeze all the ADC buckets at the same time.
You don't need sampling to be synchronous. Sure, with one ADC there will be a phase offset, but it's a CONSTANT defined by the sampling rate. Just subtract it out of the phase difference.
This dawned on me as I was driving to buy chocolate! :D
How do I evaluate the phase shift?
nine nine nein
 

Offline Sparker

  • Regular Contributor
  • *
  • Posts: 56
  • Country: ru
Indeed, you don't need to sample synchronously. :palm: It just has to be periodic sampling, obviously.
For the phase shift, check the "Shift in time" DFT property: https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform
So at the DFT output, the phase of each frequency bin is increased proportionally to that bin's frequency and to the time shift.
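The shift-in-time property can be checked numerically in a few lines of numpy (signal and delay chosen arbitrarily): delaying by d samples multiplies bin k by exp(-j*2*pi*k*d/N).

```python
import numpy as np

N = 256
d = 5                                # circular delay in samples (arbitrary)
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
xd = np.roll(x, d)                   # xd[n] = x[n - d]

X = np.fft.fft(x)
Xd = np.fft.fft(xd)

# Shift theorem: Xd[k] = X[k] * exp(-1j*2*pi*k*d/N), i.e. the phase of
# each bin grows linearly with both the bin index k and the delay d.
k = np.arange(N)
predicted = X * np.exp(-1j * 2 * np.pi * k * d / N)
```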
« Last Edit: May 01, 2018, 02:29:44 am by Sparker »
 

Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
Is the phase of an FFT the imaginary component?
Could using N microphones that output PWM instead of analog help lift some load off the uC?
nine nine nein
 

Offline Sparker

  • Regular Contributor
  • *
  • Posts: 56
  • Country: ru
No, phase is the angle between the vector and the real axis, like the phase used in a complex amplitude or a complex impedance.
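To make that concrete in numpy (tone placed on an exact bin for simplicity): the phase of a bin is the atan2 of its imaginary and real parts, not the imaginary part alone.

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 4 * n / N + 0.6)    # tone on bin 4 with 0.6 rad phase

X = np.fft.fft(x)
b = X[4]
# Phase is the angle of the complex bin:
phase = np.arctan2(b.imag, b.real)         # equivalent to np.angle(b)
```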

Quote
Could using N microphones that output PWM instead of analog help lift some load off the uC?
What do you mean?

Also I don't understand how the whole signal processing from NI works. Maybe someone could explain? :)
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Are you sure you are talking phase delta? That would be different for different frequencies....

Don't you just want delay, which will be constant for all frequencies? (The ~ 3 ms per meter of difference in distance travelled)

Or am I confused? ( I often am)


Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: The Soulman

Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
Are you sure you are talking phase delta? That would be different for different frequencies....

Don't you just want delay, which will be constant for all frequencies? (The ~ 3 ms per meter of difference in distance travelled)

Or am I confused? ( I often am)

Phase sounds cool but maybe you are right.
The end goal is to show a blob at the mic in the array that the sound hits first.
nine nine nein
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Hum. First guess.

Look for a distinctive sound (e.g. sudden loudness) on one center channel.

Window it (fade either end) so you get the pattern you are looking for. Make the window smaller than the FFT size (e.g. half the FFT size).

FFT it, so you get the complex spectrum of the signature you are looking for.

Take all the channels. FFT them to get a frequency spectrum for each channel.

Multiply each by the complex conjugate of the signature's spectrum.

Inverse-FFT each of the resulting spectra. That will give you a time series of how well the signal matches the signature pattern. You should have a spike in each channel (and maybe echoes).

Identify the principal spike in each of the channels. Use the relative delays to triangulate the source.

(I used this sort of technique to find direct sequence spread spectrum signals once)
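A minimal numpy sketch of those steps, with a made-up signature and channel (the lengths, arrival index and noise level are arbitrary). Note the complex conjugate in the multiplication, which is what turns the product into a correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = rng.standard_normal(64)            # the windowed "signature" burst
ch = np.zeros(1024)
ch[300:364] += sig                       # channel hears it starting at n=300
ch += 0.1 * rng.standard_normal(1024)    # plus some noise

# Matched filter via FFT: channel spectrum times the conjugate of the
# signature spectrum, then inverse FFT; the peak marks the arrival time.
S = np.fft.fft(sig, 1024)                # zero-padded to the channel length
C = np.fft.fft(ch)
match = np.fft.ifft(C * np.conj(S)).real
arrival = int(np.argmax(match))
```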

Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline Sparker

  • Regular Contributor
  • *
  • Posts: 56
  • Country: ru
I guess that since we want to measure time delays between channels, the cross-correlation function fits this need better.
Imagine you have two mics, both hit by the same signal with an unknown delay. You record the signals and cross-correlate them. The peak of the cross-correlation output will be at the time offset between them  :-+.
Now add a second dimension orthogonal to the first and multiply the two cross-correlation outputs by each other (one horizontal, the other vertical) and you will have a 2D image with a peak rotated towards the sound source. Pass it through some 2D filter (like a gaussian smoothing filter) and, I guess, you should get the same kind of result as the guys in the video.  ::)
Probably three mics will be sufficient for that: one in the middle, another X centimeters above it, and a third X centimeters to the right of it.  :-/O
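The two-mic case can be sketched with numpy's plain cross-correlation (the signal, delay and buffer length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.standard_normal(2048)        # wideband source signal
d = 12                                 # true inter-mic delay, samples

mic1 = src
mic2 = np.concatenate([np.zeros(d), src])[:2048]   # arrives d samples later

# The peak of the cross-correlation sits at the time offset between mics.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)       # recovers d
```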
« Last Edit: May 01, 2018, 12:59:55 pm by Sparker »
 

Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
@Sparker and @hamster_nz you're both describing the same thing in your own words. I have no idea how to do that at the moment, but it looks like FFT is the key anyway. The good thing is I don't need much visual resolution in terms of displaying the frequencies, so a 128-bin FFT should be enough. And this is very fast, about 3 ms on an esp32.

And earlier I was thinking of using I2S MEMS mics to bypass the ADC conversion time... it seemed like a good idea, but how long will transferring the I2S stream take? I think it'll take longer than an instantaneous voltage read, which just moves the time delay from conversion to data transfer. Maybe someone can confirm?
nine nine nein
 

Offline MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Don't know about the esp32, but as with any digital bus, I2C transfer time is defined by clock rate * number of bits. The I2C standard AFAIK supports 100k, 400k, and 3.4M clocks. See what clock your esp32 driver uses. For myself, I usually measure Arduino speed performance with any spare digital pin and a scope, setting the pin high/low at the start and end of the time-critical function.
millis() also does the same thing, in software.
Regarding the size of the FFT, it depends on the lower end of the frequency range and the bandwidth. For example, 10 k sampling and 128 bins provides just 10000/128 = 78.125 Hz frequency resolution. It means you can't measure below this value and, more importantly, the error is 100% at 78 Hz and 10% or so at 780 Hz (not sure if it's linear, but you get the idea; do the math to verify), which may not be acceptable.
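The resolution arithmetic from the paragraph above, spelled out (values straight from the post):

```python
fs = 10_000            # sample rate, Hz
nbins = 128            # FFT length

bin_width = fs / nbins                       # 78.125 Hz per bin
# A frequency can only be pinned down to within one bin, so the relative
# error is worst at the bottom of the range:
rel_err_bin1 = bin_width / bin_width         # 100% at ~78 Hz
rel_err_bin10 = bin_width / (10 * bin_width) # 10% at ~780 Hz
```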

Computing the cross-correlation directly gives the same information as the DFT approach, but has very little practical value on its own, since the FFT-based version is thousands of times faster.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6719
  • Country: nl
Is it possible to do that without resorting to programming, maybe even using analog components?
If you limit the frequency of the signals and then clip them, you could probably do a phase detector with a 4046 ... as long as the delay is smaller than the inverse of the highest frequency, it can determine the delay. That would only work in a tiny cone though. Doing more than that discretely would quickly get very hard AFAICS.
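For a sense of when that unambiguous window holds (the speed of sound is real; the spacing and band limit below are assumptions for illustration):

```python
# A phase detector is unambiguous only while the inter-mic delay stays
# under one period of the highest frequency it sees.
c = 343.0          # speed of sound, m/s
f_max = 3_000.0    # assumed upper band limit fed to the detector, Hz
spacing = 0.05     # assumed mic spacing, m

max_delay = spacing / c     # worst case: sound arriving along the mic axis
period = 1.0 / f_max
unambiguous = max_delay < period
```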
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Don't know about the esp32, but as with any digital bus, I2C transfer time is defined by clock rate * number of bits. The I2C standard AFAIK supports 100k, 400k, and 3.4M clocks.

I am afraid you confused I2S with I2C, which is a totally different animal:

And earlier I was thinking of using i2s mems to bypass ADC conversion time ... it seemed like a good idea but how long will transfering the i2s stream take?

Any ADC will delay the signal by some amount, so it does not matter whether it is an I2S ADC or not. MEMS microphones with I2S will be fine indeed.
« Last Edit: May 01, 2018, 08:53:04 pm by ogden »
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Is it possible to do that without resorting to programming, maybe even using analog components?
If you limit the frequency of the signals and then clip them, you could probably do a phase detector with a 4046 ... as long as the delay is smaller than the inverse of the highest frequency, it can determine the delay. That would only work in a tiny cone though. Doing more than that discretely would quickly get very hard AFAICS.

Such a clipping approach would work only in a textbook example with a single source of a clean sine tone, not in a real-world application where various complex sounds come from multiple sources. This application indeed needs DSP processing, thus programming.
 

Offline MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Don't know about the esp32, but as with any digital bus, I2C transfer time is defined by clock rate * number of bits. The I2C standard AFAIK supports 100k, 400k, and 3.4M clocks.

I am afraid you confused I2S with I2C, which is a totally different animal:
Ooops, mea culpa, I misread. I didn't know the esp32 could support I2S. I interfaced an Arduino DUE to a WM8731 over I2S/DMA, but that was for 1 codec, 2 channels in/out only. Bringing I2S into the picture would make the project much harder to build/program.
Here is a piece of code, to give an overview of the complexity:
Code: [Select]
void DMAC_Handler(void)
{
  uint32_t dma_status;

//  digitalWrite( test_pin, HIGH);
  dma_status = DMAC->DMAC_EBCISR;
  // BUG : ISR = if (dma_status & (DMAC_EBCIER_CBTC0 << SSC_DMAC_RX_CH))
  if (dma_status & (DMAC_EBCISR_BTC0 << SSC_DMAC_RX_CH)) {
    flag_adcv = 1;
    }

  if (dma_status & (DMAC_EBCISR_BTC0 << SSC_DMAC_TX_CH)) {
    flag_dacv = 1;
    }
  if(flag_adcv && (!flag_dacv))  adcdac_first = 1; 
  if((!flag_adcv) && flag_dacv)  adcdac_first = 2; 
//  digitalWrite( test_pin,  LOW);
}

void ssc_dma_cfg()
{
  desc[0].ul_source_addr      = (uint32_t)(&SSC->SSC_RHR);
  desc[0].ul_destination_addr = (uint32_t) inp[0];
  desc[0].ul_ctrlA = DMAC_CTRLA_BTSIZE(INP_BUFF) |/* Set Buffer Transfer Size */
                     DMAC_CTRLA_SRC_WIDTH_WORD |  /* Source transfer size is set to 32-bit width */
                     DMAC_CTRLA_DST_WIDTH_WORD;   /* Destination transfer size is set to 32-bit width */

  desc[0].ul_ctrlB = DMAC_CTRLB_SRC_DSCR_FETCH_DISABLE |
                     DMAC_CTRLB_DST_DSCR_FETCH_FROM_MEM |           
                     DMAC_CTRLB_FC_PER2MEM_DMA_FC |                 
                     DMAC_CTRLB_SRC_INCR_FIXED |             
                     DMAC_CTRLB_DST_INCR_INCREMENTING;       
  desc[0].ul_descriptor_addr = (uint32_t) &desc[1];

  desc[1].ul_source_addr      = (uint32_t)(&SSC->SSC_RHR);
  desc[1].ul_destination_addr = (uint32_t) inp[1];
  desc[1].ul_ctrlA = DMAC_CTRLA_BTSIZE(INP_BUFF) |   
                     DMAC_CTRLA_SRC_WIDTH_WORD |                 
                     DMAC_CTRLA_DST_WIDTH_WORD;                   

  desc[1].ul_ctrlB = DMAC_CTRLB_SRC_DSCR_FETCH_DISABLE |
                     DMAC_CTRLB_DST_DSCR_FETCH_FROM_MEM |           
                     DMAC_CTRLB_FC_PER2MEM_DMA_FC |                 
                     DMAC_CTRLB_SRC_INCR_FIXED |             
                     DMAC_CTRLB_DST_INCR_INCREMENTING;           
  desc[1].ul_descriptor_addr = (uint32_t) &desc[0];       
 
  DMAC->DMAC_CH_NUM[SSC_DMAC_RX_CH].DMAC_SADDR = desc[0].ul_source_addr;
  DMAC->DMAC_CH_NUM[SSC_DMAC_RX_CH].DMAC_DADDR = desc[0].ul_destination_addr;
  DMAC->DMAC_CH_NUM[SSC_DMAC_RX_CH].DMAC_CTRLA = desc[0].ul_ctrlA;
  DMAC->DMAC_CH_NUM[SSC_DMAC_RX_CH].DMAC_CTRLB = desc[0].ul_ctrlB;
  DMAC->DMAC_CH_NUM[SSC_DMAC_RX_CH].DMAC_DSCR  = desc[0].ul_descriptor_addr;

  DMAC->DMAC_CHER = DMAC_CHER_ENA0 << SSC_DMAC_RX_CH;
  ssc_enable_rx(SSC);

// transmit
  desc[2].ul_source_addr      = (uint32_t) out[0];
  desc[2].ul_destination_addr = (uint32_t) (&SSC->SSC_THR);
  desc[2].ul_ctrlA = DMAC_CTRLA_BTSIZE(INP_BUFF) |
                     DMAC_CTRLA_SRC_WIDTH_WORD | 
                     DMAC_CTRLA_DST_WIDTH_WORD;   

  desc[2].ul_ctrlB = DMAC_CTRLB_SRC_DSCR_FETCH_FROM_MEM |
                     DMAC_CTRLB_DST_DSCR_FETCH_DISABLE |       
                     DMAC_CTRLB_FC_MEM2PER_DMA_FC |                 
                     DMAC_CTRLB_SRC_INCR_INCREMENTING |             
                     DMAC_CTRLB_DST_INCR_FIXED;                       
  desc[2].ul_descriptor_addr = (uint32_t) &desc[3];       

  desc[3].ul_source_addr      = (uint32_t) out[1];
  desc[3].ul_destination_addr = (uint32_t) (&SSC->SSC_THR);
  desc[3].ul_ctrlA = DMAC_CTRLA_BTSIZE(INP_BUFF) |
                     DMAC_CTRLA_SRC_WIDTH_WORD |               
                     DMAC_CTRLA_DST_WIDTH_WORD;               

  desc[3].ul_ctrlB = DMAC_CTRLB_SRC_DSCR_FETCH_FROM_MEM |
                     DMAC_CTRLB_DST_DSCR_FETCH_DISABLE |       
                     DMAC_CTRLB_FC_MEM2PER_DMA_FC |                 
                     DMAC_CTRLB_SRC_INCR_INCREMENTING |             
                     DMAC_CTRLB_DST_INCR_FIXED;                       
  desc[3].ul_descriptor_addr = (uint32_t) &desc[2];       

  DMAC->DMAC_CH_NUM[SSC_DMAC_TX_CH].DMAC_SADDR = desc[2].ul_source_addr;
  DMAC->DMAC_CH_NUM[SSC_DMAC_TX_CH].DMAC_DADDR = desc[2].ul_destination_addr;
  DMAC->DMAC_CH_NUM[SSC_DMAC_TX_CH].DMAC_CTRLA = desc[2].ul_ctrlA;
  DMAC->DMAC_CH_NUM[SSC_DMAC_TX_CH].DMAC_CTRLB = desc[2].ul_ctrlB;
  DMAC->DMAC_CH_NUM[SSC_DMAC_TX_CH].DMAC_DSCR  = desc[2].ul_descriptor_addr;

  DMAC->DMAC_CHER = DMAC_CHER_ENA0 << SSC_DMAC_TX_CH;
  ssc_enable_tx(SSC);
}

void init_dma() {
  uint32_t ul_cfg;

  pmc_enable_periph_clk(ID_DMAC);
  DMAC->DMAC_EN &= (~DMAC_EN_ENABLE);
  DMAC->DMAC_GCFG = (DMAC->DMAC_GCFG & (~DMAC_GCFG_ARB_CFG)) | DMAC_GCFG_ARB_CFG_ROUND_ROBIN;
  DMAC->DMAC_EN = DMAC_EN_ENABLE;

  ul_cfg = 0;
  ul_cfg = DMAC_CFG_SRC_PER(SSC_DMAC_RX_ID) |
           DMAC_CFG_SRC_H2SEL |
           DMAC_CFG_SOD_DISABLE | //SOD: Stop On Done
           DMAC_CFG_FIFOCFG_ALAP_CFG;

  DMAC->DMAC_CH_NUM[SSC_DMAC_RX_CH].DMAC_CFG = ul_cfg;
  DMAC->DMAC_CHDR = DMAC_CHDR_DIS0 << SSC_DMAC_RX_CH;

//transmit
  ul_cfg = 0;
  ul_cfg = DMAC_CFG_DST_PER(SSC_DMAC_TX_ID) |
           DMAC_CFG_DST_H2SEL |
           DMAC_CFG_SOD_DISABLE | //SOD: Stop On Done
           DMAC_CFG_FIFOCFG_ALAP_CFG;

  DMAC->DMAC_CH_NUM[SSC_DMAC_TX_CH].DMAC_CFG = ul_cfg;
  DMAC->DMAC_CHDR = DMAC_CHDR_DIS0 << SSC_DMAC_TX_CH;
//

  NVIC_EnableIRQ(DMAC_IRQn);

  DMAC->DMAC_EBCIER = DMAC_EBCIER_BTC0 << SSC_DMAC_RX_CH;

  DMAC->DMAC_EBCIER = DMAC_EBCIER_BTC0 << SSC_DMAC_TX_CH;

  DMAC->DMAC_EBCISR;
}

void init_ssc() {
  clock_opt_t        rx_clk_option;
  data_frame_opt_t   rx_data_frame_option;
  clock_opt_t        tx_clk_option;
  data_frame_opt_t   tx_data_frame_option;

  memset((uint8_t *)&rx_clk_option,        0, sizeof(clock_opt_t));
  memset((uint8_t *)&rx_data_frame_option, 0, sizeof(data_frame_opt_t));
  memset((uint8_t *)&tx_clk_option,        0, sizeof(clock_opt_t));
  memset((uint8_t *)&tx_data_frame_option, 0, sizeof(data_frame_opt_t));

  pmc_enable_periph_clk(ID_SSC);
  ssc_reset(SSC);

  rx_clk_option.ul_cks               = SSC_RCMR_CKS_RK;
  rx_clk_option.ul_cko               = SSC_RCMR_CKO_NONE;
  rx_clk_option.ul_cki               = SSC_RCMR_CKI;
  //1 = The data inputs (Data and Frame Sync signals)
  //    are sampled on Receive Clock rising edge. 
  rx_clk_option.ul_ckg               = SSC_RCMR_CKG_NONE; // was 0;
  rx_clk_option.ul_start_sel         = SSC_RCMR_START_RF_RISING;
  rx_clk_option.ul_period            = 0;
  rx_clk_option.ul_sttdly            = 1;

  rx_data_frame_option.ul_datlen     = BIT_LEN_PER_CHANNEL - 1;
  rx_data_frame_option.ul_msbf       = SSC_RFMR_MSBF;
  rx_data_frame_option.ul_datnb      = 1;//stereo
  rx_data_frame_option.ul_fslen      = 0;
  rx_data_frame_option.ul_fslen_ext  = 0;
  rx_data_frame_option.ul_fsos       = SSC_RFMR_FSOS_NONE;
  rx_data_frame_option.ul_fsedge     = SSC_RFMR_FSEDGE_POSITIVE;

  ssc_set_receiver(SSC, &rx_clk_option, &rx_data_frame_option);
  ssc_disable_rx(SSC);
//  ssc_disable_interrupt(SSC, 0xFFFFFFFF);

  tx_clk_option.ul_cks               = SSC_TCMR_CKS_RK;
  tx_clk_option.ul_cko               = SSC_TCMR_CKO_NONE;
  tx_clk_option.ul_cki               = 0; // Atmel example; was SSC_TCMR_CKI;
  //1 = The data outputs (Data and Frame Sync signals)
  //    are shifted out on Transmit Clock rising edge.
  tx_clk_option.ul_ckg               = SSC_TCMR_CKG_NONE; // was 0.
  tx_clk_option.ul_start_sel         = SSC_TCMR_START_RF_RISING;
  tx_clk_option.ul_period            = 0;
  tx_clk_option.ul_sttdly            = 1;

  tx_data_frame_option.ul_datlen     = BIT_LEN_PER_CHANNEL - 1;
  tx_data_frame_option.ul_msbf       = SSC_TFMR_MSBF;
  tx_data_frame_option.ul_datnb      = 1;
  tx_data_frame_option.ul_fslen      = 0; // :fsden=0
  tx_data_frame_option.ul_fslen_ext  = 0;
  tx_data_frame_option.ul_fsos       = SSC_TFMR_FSOS_NONE; //input-slave
  tx_data_frame_option.ul_fsedge     = SSC_TFMR_FSEDGE_POSITIVE;

  ssc_set_transmitter(SSC, &tx_clk_option, &tx_data_frame_option);
  ssc_disable_tx(SSC);
  ssc_disable_interrupt(SSC, 0xFFFFFFFF);
}

void pio_B_SSC(void)
{
// DUE: PA15(B)-D24, PA16(B)-A0, PA14(B)-D23 = DACLRC, DACDAT, BCLK 
  PIOA->PIO_PDR   = PIO_PA14B_TK;
  PIOA->PIO_IDR   = PIO_PA14B_TK;
  PIOA->PIO_ABSR |= PIO_PA14B_TK;
 
  PIOA->PIO_PDR   = PIO_PA15B_TF;
  PIOA->PIO_IDR   = PIO_PA15B_TF;
  PIOA->PIO_ABSR |= PIO_PA15B_TF;

  PIOA->PIO_PDR   = PIO_PA16B_TD;
  PIOA->PIO_IDR   = PIO_PA16B_TD;
  PIOA->PIO_ABSR |= PIO_PA16B_TD;
}
 
The following users thanked this post: daslolo

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6719
  • Country: nl
Such a clipping approach would work only in a textbook example with a single source of a clean sine tone, not in a real-world application where various complex sounds come from multiple sources. This application indeed needs DSP processing, thus programming.
He talked about using 9 LEDs ... something which only works under limited circumstances doesn't seem like a huge problem to me. This is clearly more science fair than product.
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Such a clipping approach would work only in a textbook example with a single source of a clean sine tone, not in a real-world application where various complex sounds come from multiple sources. This application indeed needs DSP processing, thus programming.
He talked about using 9 LEDs ... something which only works under limited circumstances doesn't seem like a huge problem to me. This is clearly more science fair than product.

It does not matter whether you use 3x3 LEDs or an HD display: the clipping approach will work only in theory.

The Siemens LMS Sound Camera is indeed a product:


 

Offline dasloloTopic starter

  • Regular Contributor
  • *
  • Posts: 63
  • Country: fr
  • I saw wifi signal once
Yes, that one. They all use a logarithmic spiral. Does anyone know why?
@ogden What's the problem with the clipping approach?
@MasterT good thing: that complexity reinforces my decision to go all analog.
By the way, speaking of ADCs, are there ADCs out there that convert all channels at once? Then I wouldn't have to spend time in the MCU counter-offsetting the signals.

Anyway, I got the FFT running, and having two cores is sweet, so I might as well use programming to make this thing.
https://github.com/laurentopia/M5-Signal-Multimeter
« Last Edit: May 01, 2018, 11:12:50 pm by daslolo »
nine nine nein
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Yes, that one. They all use a logarithmic spiral. Does anyone know why?

IMHO the reason is marketing. A spiral just looks better than straight beams.

Quote
@ogden What's the problem with the clipping approach?

With clipping you lose the signal's amplitude information. Also, even a quiet hiss riding on top of the sound will add so much noise and "jitter" to the phase measurements that it will not work without averaging. And if you need averaging, which is more or less DSP processing, then why not do proper DSP from the very beginning? Why reduce the resolution to 1 bit and later struggle to get it back?

Quote
@MasterT good thing: that complexity reinforces my decision to go all analog.

I would love to see working, fully analog system :)

Quote
By the way, speaking of ADCs, are there ADCs out there that convert all channels at once? Then I wouldn't have to spend time in the MCU counter-offsetting the signals.

There are flash ADCs, which are very fast. But again, processing will add delay, and sound propagation is not instant after all. So it does not actually matter whether you find the sound direction with 1 ms, 2 ms or 10 ms of delay.
 

