Author Topic: sound analyzer for automating quality checks?  (Read 5586 times)


Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
sound analyzer for automating quality checks?
« on: December 14, 2018, 12:28:53 am »
Let's say for a quality check on a gearmotor, you want to check whether everything has been built correctly. If it has, the gearmotor should make a low humming noise; if not, it will make a louder noise with a different sound profile. Damaged gear teeth, for example, might cause this.

The gearboxes come off an assembly line, and currently a person manually checks the noise of each unit by turning it on and off. But there is already a station on the line that turns the motor on/off for a function test, and I wonder whether there are sound analyzers I could put there that can differentiate between a good and a bad motor and then send a signal to the PLC to accept or reject.

By the way, there is ambient noise too.

What do I need to buy to implement something like this? I'm guessing an industrial PC and a sound analyzer at a minimum. Thanks.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #1 on: December 14, 2018, 03:06:58 am »
Put a piezo transducer on a golden reference unit and record it.  Do an FFT and set up a mask test on the FFT.
Schatten sells a guitar transducer which is quite flat up to about 8 kHz for around $20.

I'd guess that you can't do a mask test on an FFT on most DSOs, but you should be able to store a reference FFT and do a difference.
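For illustration, a minimal numpy sketch of the store-a-reference-spectrum-and-compare idea, done on a PC rather than a DSO (the function names, the Hann window and the 6 dB margin are assumptions for the example, not anything specified above; it also assumes the two recordings are the same length):

Code:
import numpy as np

def amplitude_spectrum(x, fs):
    """Frequency axis and magnitude spectrum of a real, windowed recording."""
    X = np.fft.rfft(x * np.hanning(len(x)))
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return f, np.abs(X)

def mask_test(dut, golden, fs, margin_db=6.0):
    """Pass if the DUT spectrum stays within margin_db of the golden unit at every frequency."""
    _, g = amplitude_spectrum(golden, fs)
    _, d = amplitude_spectrum(dut, fs)
    eps = 1e-12                                    # avoid log of zero
    excess_db = 20 * np.log10((d + eps) / (g + eps))
    return bool(np.all(excess_db < margin_db))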

Problem is, the people who write the FW for DSOs have no idea at all of how to use one.

So if you want to plot the hysteresis of a core material you have to buy an ancient LeCroy or build an analog integrator.  Cause we just ain't going to let you choose anything but an XY plot  of two adjacent raw input channels.  The A list OEMs want to sell you another instrument for that.  And at $20K for a DSO it's ridiculous that you can't plot CH1 vs  Integral(CH2).
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #2 on: December 15, 2018, 02:21:50 am »
any other ideas?
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #3 on: December 15, 2018, 03:05:45 am »
Record 10-20 seconds with  a good  microphone and sound card  to a PC.  Decide what you want your resolution bandwidth to be.  Break the long trace into pieces of the required length, FFT and sum them frequency by frequency using Octave or MATLAB.  Plot it with guard band curves on the PC screen.

 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12297
  • Country: au
Re: sound analyzer for automating quality checks?
« Reply #4 on: December 15, 2018, 03:22:09 am »
To get around the majority of the ambient noise, I might suggest you use a contact microphone that can be reliably placed for consistent pickup.  An isolation booth would be the preferred option, but I guess that won't be practical.  Even so, if it is possible to set up a couple of panels on two or three sides that have sound absorbent material fitted, this can reduce a lot of ambient noise.

Once you have the sound collection process sorted, there will need to be some signal analysis with a go/no go output.  This is going to be the bigger challenge and the solution will depend on just how precise the analysis will need to be - but the basic spectrum analyser part can be done with a cheap Arduino.  The go/no go assessment will require a bit more effort - unless someone has already written some code to do that.  Even so, tuning it in for defining the correct pass/fail mask could be fun.


Sounds like a fun project, actually.  Wish I had the time.
« Last Edit: December 15, 2018, 03:32:04 am by Brumby »
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #5 on: December 15, 2018, 02:52:43 pm »
With a stereo sound card you can play a WAV file with the outputs looped back to the inputs and correct all the analog and ADC errors.

There is quite a lot of processing needed to do this properly.  An Arduino is not going to do a very good job and will be a huge pain to make work: it has a lower-resolution ADC and would require building the analog front end.  After all that work, even an ARM-based Arduino is likely to lack the performance needed.  It's a lot more than doing a single FFT.

Microphones have poor low frequency response and piezo transducers have limited high frequency response.  My Schatten guitar transducer peaks at about 8 kHz and then rolls off when attached to the voice coil of a coneless tweeter and swept.  A stereo sound card will allow using both sensors.  A MEMS 3-axis accelerometer such as the ADXL335 will have a frequency response all the way to DC but would require 2 sound cards and a good deal more math.  But it would perform better than a single-axis sensor such as the Schatten.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #6 on: December 18, 2018, 03:16:54 am »
Thanks for the response so far.

I think I will go with the microphone/PC/FFT route. That's what I initially thought. I'm not really familiar with piezoelectrics.

The motors arrive in groups of 4. I can't record each one's sound individually because all 4 will be turned on at the same time for a function test. Testing the motors individually would require too many changes to the existing machine, which adds cost, and my company is pretty risk averse. It would also slow the rate of production to an unacceptable level. This function test is just one part of the assembly line and I don't want it to become a bottleneck.

So with all 4 turned on at the same time, the idea is that if any one of them makes a weird or loud noise, the FFT approach will hopefully still catch it (will it?), and all 4 will be sent to a manual rework station as a group.

While I get the rough idea of how to proceed, I hope you guys can give me more details on implementation, like what software to use. I saw MATLAB mentioned. Another question: whatever decision the software makes, I somehow need to send that signal to the PLC (Siemens). How would I do that?

I'm a newbie right now to PLCs. This project will be a great learning experience.

Thanks
 

Offline johnwa

  • Frequent Contributor
  • **
  • Posts: 255
  • Country: au
    • loopgain.net - a few of my projects
Re: sound analyzer for automating quality checks?
« Reply #7 on: December 18, 2018, 08:40:52 am »
The search term you are looking for is "condition monitoring". There should be plenty of equipment available off the shelf.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #8 on: December 18, 2018, 03:29:40 pm »
Octave is free.  MATLAB is not.  For commercial use it  was around $1500 the last time I looked.

Basic algorithm is as follows:

Collect about 10 seconds of data at 44.1 KSa/S (standard CD sampling)

Break it into 100 pieces 4096 samples long

Multiply the samples by weights from 0 to 1 to 0 with 1 in the middle

Compute the FFT of each of the pieces

Average the modulus (absolute value of a complex number) of all 100 pieces

Plot the first 2048 samples of the average.

The result will be the amplitude spectrum from 0 to 22.05 KHz at a resolution of 10.8 Hz
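For reference, the same recipe as a minimal numpy sketch (Python rather than Octave, since that is what the OP ends up using; the file name and raw int16 format are illustrative placeholders):

Code:
import numpy as np

fs = 44100                       # 44.1 kSa/s
nfft = 4096                      # segment length -> 44100/4096 ~ 10.8 Hz resolution

# ~10 s recording; the file name and raw int16 format are placeholders.
x = np.fromfile("gearmotor.raw", dtype=np.int16).astype(float)

nseg = len(x) // nfft            # about 100 segments for a 10 s record
win = np.bartlett(nfft)          # triangular weights: 0 -> 1 -> 0

acc = np.zeros(nfft // 2)
for k in range(nseg):
    seg = x[k * nfft:(k + 1) * nfft] * win
    acc += np.abs(np.fft.fft(seg))[:nfft // 2]   # modulus, first half only

avg = acc / nseg                                 # averaged amplitude spectrum
freqs = np.arange(nfft // 2) * fs / nfft         # 0 to 22.05 kHz in ~10.8 Hz steps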

Schatten pickups are cheap:

https://www.stewmac.com/Pickups_and_Electronics/Pickups/Violin_Pickups/Schatten_Soundboard_Transducer.html

If you glue them to a disk magnet of slightly larger diameter they should be durable and quick to install and remove.  They are fragile if not reinforced.  They are limited to about 8 kHz but that should be quite adequate.  If the housing is not magnetic then they are not a viable solution.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #9 on: December 19, 2018, 12:28:49 am »
Octave is free.  MATLAB is not.  For commercial use it  was around $1500 the last time I looked.

Basic algorithm is as follows:

Collect about 10 seconds of data at 44.1 KSa/S (standard CD sampling)

Break it into 100 pieces 4096 samples long

Multiply the samples by weights from 0 to 1 to 0 with 1 in the middle

Compute the FFT of each of the pieces

Average the modulus (absolute value of a complex number) of all 100 pieces

Plot the first 2048 samples of the average.

The result will be the amplitude spectrum from 0 to 22.05 KHz at a resolution of 10.8 Hz


Thanks. Why break it into 100 pieces and take the average? Why multiply the samples with 0 to 1 using the method you mentioned? How would the intensity of the sound (distance from source to microphone) affect the DFT? Only the amplitude will change right?

Do you think it's necessary to do a DFT on the ambient noise as well and maybe filter the ambient noise out first?
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: sound analyzer for automating quality checks?
« Reply #10 on: December 19, 2018, 12:44:54 am »
You can get bare piezo discs for less than a dollar (or less in quantity), and, wrapped in a small plastic case, slightly less suitable for use as a contact mic but more durable, for under $2.
« Last Edit: December 19, 2018, 12:48:38 am by cdev »
"What the large print giveth, the small print taketh away."
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #11 on: December 19, 2018, 01:48:31 am »


Thanks. Why break it into 100 pieces and take the average? Why multiply the samples with 0 to 1 using the method you mentioned? How would the intensity of the sound (distance from source to microphone) affect the DFT? Only the amplitude will change right?

Do you think it's necessary to do a DFT on the ambient noise as well and maybe filter the ambient noise out first?

You average 100 samples to reduce the variance of your estimate.  The reason that most DFTs on DSOs are so bad is that the programmers did not do that.

The triangular (Bartlett) window is there so the spectral peaks are not smeared out.  In the frequency domain it's a (sin(x)/x)**2 response rather than the sin(x)/x of a rectangular window.  I've yet to see a DSO that offered a Bartlett window.  Check Wikipedia on the subject of window functions.  Install gnuplot, set samples to 10000, plot the two functions and take a look at the sidelobes.
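The same sidelobe comparison can be done in a few lines of Python instead of gnuplot (matplotlib and the plot ranges are choices made for this sketch, not anything from the thread):

Code:
import numpy as np
import matplotlib.pyplot as plt

n, nfft = 4096, 1 << 18          # window length and zero-padded FFT length
for name, w in (("rectangular", np.ones(n)), ("Bartlett", np.bartlett(n))):
    W = np.abs(np.fft.rfft(w, nfft))
    W_db = 20 * np.log10(W / W.max() + 1e-12)
    bins = np.arange(len(W)) * n / nfft          # frequency in units of the n-point bin spacing
    plt.plot(bins, W_db, label=name)

plt.xlim(0, 20)                  # main lobe plus the near-in sidelobes
plt.ylim(-100, 0)
plt.xlabel("offset (bins)")
plt.ylabel("dB")
plt.legend()
plt.show()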

The lower the amplitude the greater the quantization noise component.

I could compensate for the ambient noise, but I suggest you use a sound isolation box instead.  You really don't want to learn that much about DSP or spend the amount of time it would take to code it properly.  It is not trivial. 

Most of a seismic processor's time is spent suppressing noise.  How you do it depends upon what type of noise it is and where it is coming from.  Data acquisition contracts have many pages of stipulations about noise levels and there is a company representative on hand to check them continuously during the entire acquisition.  You can't shoot if another boat is closer than so many miles.  You can't shoot if the waves are higher than some value, etc.

Install Octave and get to the point you can do the steps I outlined previously.  You'll have a much better appreciation of the problem you have posed.  It's very sensible and readily done, but it is not something you do in an afternoon.  In your case, starting from square one it will probably take a week or more of work just to do a proof of concept.

Personally, if I were doing this I'd write a C program.  I'm not impressed by the software engineering of Octave.  I cannot compile it on Solaris.   It claims to be able to read WAV files and I think it will interface to a sound card, but frankly I don't know.  I use Octave from time to time to prototype solutions to problems, but that is all.  Octave/MATLAB are not really designed for solving production problems.  They are research tools.  I only suggested using those because of the complexity and the amount of work a production quality C program would require.

Octave/MATLAB are good enough to produce a proof of concept.  Make WAV format recordings of a good gearbox and a bad gearbox in a quiet environment and compare them using the process I outlined.  A Zoom H1 recorder for $100 will give you a good stereo WAV file to work with.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #12 on: December 21, 2018, 04:51:11 am »


Collect about 10 seconds of data at 44.1 KSa/S (standard CD sampling)



Thanks for your responses so far. It turns out I cannot collect 10 sec of sound, more like 0.5 sec to 1 sec. This is due to the nature of the existing production line: I can't slow down the rate, nor will they let me. Can a 1 sec sound sample work?

I'm thinking of using Python (with the scikit-learn and PyAudio packages). I'm more familiar with Python, and it has a package to interface with the PLC too.  How powerful a computer do I need for this? The processing must be done in a matter of tens or hundreds of milliseconds in order to not slow down production.

 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #13 on: December 21, 2018, 12:31:15 pm »
You can use 10 samples.  The variance of the spectral estimate is 1/sqrt(N) which is why I recommended 100 samples.  Any PC will be fast enough.  An Arduino probably would not.  Also,  with a PC you can save the spectra for each unit.  That will be important for establishing the spectra for good units and bad units.  You can also then compute average spectra on an hourly or shift basis to look for problems caused by changes in the production process, for example ambient temperature.

If you collect 40960 time samples you can use 10  samples  4096 long with 11 Hz resolution or 20 samples 2048 long with 22 Hz resolution or 40 samples with 43 Hz resolution.  The variance of the estimates at 43 Hz resolution will be 1/2 the variance at 10 Hz resolution.
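The tradeoff is easy to tabulate; a few lines of Python (here 1/sqrt(N) is taken as the relative standard deviation of the averaged estimate):

Code:
fs, total = 44100, 40960                  # sample rate and total samples (~0.93 s)
for nfft in (4096, 2048, 1024):
    nseg = total // nfft                  # number of segments averaged
    print(f"{nfft:5d}-point segments: {nseg:3d} averages, "
          f"{fs / nfft:5.1f} Hz resolution, "
          f"relative std dev ~ {1 / nseg ** 0.5:.2f}")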

You should record a known good unit and a known bad unit for 1000 seconds and then compute the averaged spectra for the various window lengths to establish how many samples to average and what your resolution bandwidth needs to be.  It would be very desirable to collect recordings from more than one sample unit so that you get some data on the variance of the good and bad gearmotors themselves.

I'm not all that familiar with python, but I'm sure that it has all the features you need.  So if you are comfortable using that, use it.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #14 on: January 13, 2019, 11:36:41 pm »
You can use 10 samples.  The variance of the spectral estimate is 1/sqrt(N) which is why I recommended 100 samples.  Any PC will be fast enough.  An Arduino probably would not.  Also,  with a PC you can save the spectra for each unit.  That will be important for establishing the spectra for good units and bad units.  You can also then compute average spectra on an hourly or shift basis to look for problems caused by changes in the production process, for example ambient temperature.

If you collect 40960 time samples you can use 10  samples  4096 long with 11 Hz resolution or 20 samples 2048 long with 22 Hz resolution or 40 samples with 43 Hz resolution.  The variance of the estimates at 43 Hz resolution will be 1/2 the variance at 10 Hz resolution.

You should record a known good unit and a known bad unit for 1000 seconds and then compute the averaged spectra for the various window lengths to establish how many samples to average and what your resolution bandwidth needs to be.  It would be very desirable to collect recordings from more than one sample unit so that you get some data on the variance of the good and bad gearmotors themselves.

I'm not all that familiar with python, but I'm sure that it has all the features you need.  So if you are comfortable using that, use it.

Okay, I think I appreciate your response a lot more now. I first tried to record 1 sec of sound (sample rate = 44100) and did a DFT on all 44100 samples. The spectrum looked different each time, even though I tried to control the settings as best I could. I then downloaded an FFT spectrum analyzer to my smartphone and it turns out the spectrum fluctuates quite a bit.

So if I understand you, I will split the 1 second into ten 100 ms chunks, do a DFT on each of those and average the results. I understand this decreases the variance. I do have some background in statistics; it's just the DSP part that I'm weak on. If I have fewer samples each time, my frequency resolution will be coarser (but this doesn't seem to be an issue), is that right?

Thanks
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #15 on: January 14, 2019, 12:16:12 am »
I recorded a 1 second clip of audio using pyaudio, sample rate 44100, format=pyaudio.paInt16.

I got 44100 samples, the values ranged from -100 to 100 roughly. I took the max and it's 102. The recording was pretty quiet. Had I made some noise, the values would've been in 5 figures. But anyway...

I took the FFT and plotted it; the max magnitude of the FFT is around 191689.921.

I want the magnitude of the FFT to match the amplitude of the input. If I remember correctly I have to divide the FFT values by the number of samples, but that still would not get me anywhere close. I expect to see a max FFT magnitude of 102. What is wrong?

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #16 on: January 14, 2019, 12:34:33 am »

Okay, I think I appreciate your response a lot more now. I first tried to record 1 sec of sound (sample rate = 44100) and did a DFT on all 44100 samples. The spectrum looked different each time, even though I tried to control the settings as best I could. I then downloaded an FFT spectrum analyzer to my smartphone and it turns out the spectrum fluctuates quite a bit.

So if I understand you, I will split the 1 second into ten 100 ms chunks, do a DFT on each of those and average the results. I understand this decreases the variance. I do have some background in statistics; it's just the DSP part that I'm weak on. If I have fewer samples each time, my frequency resolution will be coarser (but this doesn't seem to be an issue), is that right?

Thanks

You've got the idea.

There are 3 normalization conventions used in FFTs: 1/N on either the forward or the inverse transform, or 1/sqrt(N) on both.  I prefer the latter.  Also, the sign of the exponent can be either +1 or -1 for the forward transform; the inverse uses the opposite sign.
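A quick numpy check of what those conventions mean in practice, and why an un-normalized forward FFT of a sine with amplitude a peaks at roughly a*N/2 rather than a (the 1 kHz test tone and segment length are arbitrary choices for the example):

Code:
import numpy as np

fs, n, a = 44100, 4410, 100.0         # n chosen so 1 kHz lands exactly on a bin
t = np.arange(n) / fs
x = a * np.sin(2 * np.pi * 1000.0 * t)

X = np.fft.fft(x)                     # numpy default: no scaling on the forward transform
print(np.abs(X).max())                # ~ a * n / 2 = 220500

# 1/N on the forward transform; the energy of a real sine is split between
# +f0 and -f0, so read the amplitude as 2 * |X| / N.
print(2 * np.abs(X).max() / n)        # ~ a = 100

# The symmetric 1/sqrt(N) convention (numpy calls it norm="ortho").
print(np.abs(np.fft.fft(x, norm="ortho")).max())   # ~ a * sqrt(n) / 2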

I strongly recommend getting a copy of

Random Data
Bendat & Piersol

I started with the 2nd ed, but also have the 3rd and 4th which is the last as Piersol passed away.  You should be able to get a 2nd ed very cheaply and it treats everything you need to deal with very thoroughly.
 

Offline L_Euler

  • Regular Contributor
  • *
  • Posts: 86
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #17 on: January 14, 2019, 01:09:43 am »
Get one of these, or similar and a piezo probe or microphone.  You can use GPIB to automate the testing, pass/fail, and data recording.
« Last Edit: January 14, 2019, 11:57:03 am by L_Euler »
There's no point to getting old if you don't have stories.
 
The following users thanked this post: engineheat

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #18 on: January 15, 2019, 03:46:24 am »
Get one of these, or similar and a piezo probe or microphone.  You can use GPIB to automate the testing, pass/fail, and data recording.

Thanks, I'll look into it.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #19 on: January 15, 2019, 03:53:34 am »

Okay, I think I appreciate your response a lot more now. I first tried to record 1 sec of sound (sample rate = 44100) and did a DFT on all 44100 samples. The spectrum looked different each time, even though I tried to control the settings as best I could. I then downloaded an FFT spectrum analyzer to my smartphone and it turns out the spectrum fluctuates quite a bit.

So if I understand you, I will split the 1 second into ten 100 ms chunks, do a DFT on each of those and average the results. I understand this decreases the variance. I do have some background in statistics; it's just the DSP part that I'm weak on. If I have fewer samples each time, my frequency resolution will be coarser (but this doesn't seem to be an issue), is that right?

Thanks

You've got the idea.

There are 3 normalization conventions used in FFTs: 1/N on either the forward or the inverse transform, or 1/sqrt(N) on both.  I prefer the latter.  Also, the sign of the exponent can be either +1 or -1 for the forward transform; the inverse uses the opposite sign.

I strongly recommend getting a copy of

Random Data
Bendat & Piersol

I started with the 2nd ed, but also have the 3rd and 4th which is the last as Piersol passed away.  You should be able to get a 2nd ed very cheaply and it treats everything you need to deal with very thoroughly.

Dumb question...you are supposed to average the magnitude of the spectra right? not the FFT (complex numbers)...

Anyway, I got a crude version working. I used Python with the PyAudio package and recorded 10 seconds of sound just as a test. Sample rate = 44k, each frame is 1024 samples. For each frame I plotted the magnitude and made a dynamic plot through the 10 seconds. It actually works: I was able to see the magnitudes change as I made various sounds.

However, I also downloaded an FFT analyzer to my smartphone and compared the results as I turned on a motor. The smartphone app is able to display a relatively constant spectrum (not much fluctuation) right from the start. In my plot, the magnitudes are very high upon turning on the motor and only "settle" after a couple of seconds.

I wonder why that is. Is it due to my sound card or laptop mic? I tested using another laptop and got a similar result. Could it be because I recorded in mono mode? The spectrum changes too much as I move the motor, whereas on the smartphone the spectrum is more stable.

Is it because I didn't use a Window function?

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #20 on: January 15, 2019, 05:14:17 am »

Dumb question...you are supposed to average the magnitude of the spectra right? not the FFT (complex numbers)...


Yes, that is correct.  You would need to synchronize the windows if you wanted to also get the phase information.

Quote

Anyway, I got a crude version working. I used Python with the PyAudio package and recorded 10 seconds of sound just as a test. Sample rate = 44k, each frame is 1024 samples. For each frame I plotted the magnitude and made a dynamic plot through the 10 seconds. It actually works: I was able to see the magnitudes change as I made various sounds.

Each frame is only 23 ms.  Try using a longer window, 4096 or 8192 samples.

Quote

However, I also downloaded an FFT analyzer to my smartphone and compared the results as I turned on a motor. The smartphone app is able to display a relatively constant spectrum (not much fluctuation) right from the start. In my plot, the magnitudes are very high upon turning on the motor and only "settle" after a couple of seconds.

I wonder why that is. Is it due to my sound card or laptop mic? I tested using another laptop and got a similar result. Could it be because I recorded in mono mode? The spectrum changes too much as I move the motor, whereas on the smartphone the spectrum is more stable.

Is it because I didn't use a Window function?

Thanks

Probably.  You're asking me to guess what someone else's program is doing without being able to probe it.  Post some plots of the time domain and frequency domain without any averaging using the longer window.

If the first and last samples in the window are very different, the discontinuity will distort the spectrum.  Put the triangle taper on the window and it should be much more uniform from spectrum sample to spectrum sample.

Find a way to create a constant frequency tone, record it and then post plots of the time and frequency domain.  If nothing else, just whistle or hum with as constant a pitch as you can manage.

Then compare the spectrum you get if you start the recording before the tone and if you start the recording after the tone.

The triangle window will give you the sharpest spectral peaks.  Try it also using a cosine taper: ramp from (cos(-pi) + 1)/2 up to (cos(0) + 1)/2 at the start and the reverse at the end.  Then vary the number of samples in the range from -pi to 0 so you change how steep the taper is.
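A sketch of that adjustable cosine taper in numpy (essentially a Tukey window; the lengths are placeholders, and scipy.signal.windows.tukey does the same job if SciPy is available):

Code:
import numpy as np

def cosine_taper(n, ramp):
    """Length-n window: cosine ramp from 0 to 1 over `ramp` samples, flat at 1
    in the middle, and the mirror-image ramp back down at the end (n >= 2*ramp)."""
    w = np.ones(n)
    up = (np.cos(np.linspace(-np.pi, 0.0, ramp)) + 1.0) / 2.0   # (cos(-pi)+1)/2 .. (cos(0)+1)/2
    w[:ramp] = up
    w[-ramp:] = up[::-1]
    return w

win = cosine_taper(4096, 256)    # vary the 256 to change how steep the taper is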
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #21 on: January 15, 2019, 01:17:03 pm »
Get one of these, or similar and a piezo probe or microphone.  You can use GPIB to automate the testing, pass/fail, and data recording.

Hi, can you program these devices to make the pass/fail decision on their own (using your own algorithm), or do you need to connect a PC to grab the signal and perform custom analysis?

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #22 on: January 15, 2019, 10:43:53 pm »
Attached is the algebra of correcting for sound card errors on a stereo sound card by playing a computed WAV file and recording it.  This is for each frequency in the spectrum.  Ideally you would do it at all amplitude levels to correct for non-linearities in  the ADC and DAC.  I've got a reputation for going a bit overboard. :)

It requires that you be able to play and record at the same time. It requires playing each channel back into itself and the opposite channel.  So 4 equations in 4 unknowns.

I'm cleaning up clutter and found this, and I thought it might be useful to someone.  This seemed a good place to post it.  It's essential to making a THD analyzer using a sound card.  Which, naturally, you will need if you read "Max Wein, Mr. Hewlett and a Rainy Sunday Afternoon" by Jim Williams and get motivated to build an ultra low distortion audio signal generator for testing audio gear.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #23 on: January 16, 2019, 01:01:02 am »
Attached is the algebra of correcting for sound card errors on a stereo sound card by playing a computed WAV file and recording it.  This is for each frequency in the spectrum.  Ideally you would do it at all amplitude levels to correct for non-linearities in  the ADC and DAC.  I've got a reputation for going a bit overboard. :)

It requires that you be able to play and record at the same time. It requires playing each channel back into itself and the opposite channel.  So 4 equations in 4 unknowns.

I'm cleaning up clutter and found this, and I thought it might be useful to someone.  This seemed a good place to post it.  It's essential to making a THD analyzer using a sound card.  Which, naturally, you will need if you read "Max Wein, Mr. Hewlett and a Rainy Sunday Afternoon" by Jim Williams and get motivated to build an ultra low distortion audio signal generator for testing audio gear.

Thanks.

Turns out the problem was due to the auto boost feature of the mic which I deactivated.

Now I've got my FFT correct. BTW, I applied a Hanning window to each frame prior to doing the FFT. The data comes in both negative and positive values; I just multiplied the window (a 1024-length array) into the data.

The next step is to find features/attributes to differentiate the good from the bad based on the averaged FFT. I noticed bad products make sounds more in the higher frequency range, whereas for good products the magnitudes are pretty similar across all frequencies (up to about 6 kHz). What are some good attributes to try? I'm thinking of the average magnitude across all frequencies (just in case bad ones are louder), or perhaps the average magnitudes across certain frequency ranges.
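For concreteness, one simple way to turn the averaged spectrum into a handful of explainable features like that (the band edges below are arbitrary placeholders, not values from the thread):

Code:
import numpy as np

def band_features(avg_spectrum, fs=44100, edges=(0, 500, 1500, 3000, 6000)):
    """Overall mean magnitude plus the mean magnitude in each band
    [edges[i], edges[i+1]) of a one-sided averaged spectrum."""
    n = len(avg_spectrum)
    freqs = np.arange(n) * fs / (2 * n)       # bins of a (2*n)-point FFT, 0 .. fs/2
    feats = [avg_spectrum.mean()]             # overall level, in case bad units are just louder
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = avg_spectrum[(freqs >= lo) & (freqs < hi)]
        feats.append(band.mean() if band.size else 0.0)
    return np.array(feats)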

The ambient noise peaks at a few hundred Hz and sharply drops off. If I want to filter out the ambient noise (probably not necessary, but for learning's sake), isn't it as simple as subtracting the magnitudes? And then if I do an inverse FFT I should get a signal with only the motor sound, right?

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #24 on: January 16, 2019, 01:33:12 am »
How much tolerance do you have for serious mathematics?  You don't need to actually learn the details any more than you need to learn the details to do an FFT, but you will need to not run away in fear.

If you record a number of motors with various defects and motors which are good, you can do what I call a "sparse L1 pursuit".  The concept is extremely simple, but the mathematical proofs are the most difficult stuff I've ever encountered.  The proof of one theorem is 15 pages long!

In simple terms, you solve Ax=y where y is your measurement and A is a large array of *all* the possible cases you want to consider.  You then solve this with linear programming or some other L1 (least summed absolute error) solver.  I use GLPK which is excellent.  I've never had it fail.

Each column of the A matrix is a spectrum for an example device, good, bad, or intermediate.  The result of the  pursuit was named a "Dantzig selector" by Emmanuel Candes in  honor of the inventor of operations research and the simplex method.  What you get back is an x vector which is mostly zeros except for the particular set of columns whose sum best matches the DUT.  Given the genomes for a bunch of patients with some exotic disorder, this finds the particular genetic alleles involved.

This is absolute state of the art stuff.  I call it "sparse L1 pursuits" because there are numerous algorithms and applications.  This was the heart of the code that won the Netflix prize.  I've posted about this a good bit, so do a search of "sparse L1 pursuit" and my ID and read some of the stuff.

I stumbled across it by accident.  I was doing it and realized I was solving problems I'd been taught could *not* be solved.  I got very interested in how this could be and spent 3 years reading and rereading over 3000 pages of pure mathematics in order to understand how this could be.  It's the coolest applied math in 80 years.  But I'd been doing it for 6-9 months before I started on the "how can this be" problem.  My degrees are in English lit and geology.  I did learn a lot more math trying to get a PhD, but this is *way* beyond anything I ever studied.

The beauty of it is all you need to know is how to create the A matrix.  For your application you only have to do that once unless you discover other failure cases you want to check for. In which case you just add them to the A matrix.  If the 4 motors run at different speeds it will tell you which gear trains are good and which are bad.
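For anyone who wants to try the idea, here is a minimal sketch of basis pursuit (min ||x||_1 subject to Ax = y) using scipy.optimize.linprog instead of GLPK; the toy dictionary sizes and the exact-equality constraint are simplifications, and a production version would allow a noise tolerance:

Code:
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A @ x == y as a linear program.
    Columns of A are reference spectra; y is the measured spectrum."""
    m, n = A.shape
    # Variables [x, u] with |x_i| <= u_i; minimize sum(u).
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[ np.eye(n), -np.eye(n)],    #  x - u <= 0
                     [-np.eye(n), -np.eye(n)]])   # -x - u <= 0
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n]

# Toy example: the "measurement" is an exact mix of dictionary entries 0 and 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))       # 60 reference spectra, each 20 points long
y = A[:, 0] + 0.5 * A[:, 3]
x = basis_pursuit(A, y)
print(np.nonzero(np.abs(x) > 1e-6)[0])  # expect indices 0 and 3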
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #25 on: January 16, 2019, 01:20:21 pm »
How much tolerance do you have for serious mathematics?  You don't need to actually learn the details any more than you need to learn the details to do an FFT, but you will need to not run away in fear.


I have a decent mathematical background; however, I hope to use a method that's more easily explained to people. A black-box approach will not help this get adopted.

Can we use a more traditional machine learning method like regression, a support vector machine, or a decision tree? I just need a handful of good attributes to use. I don't have a ton of data (sample motors) at this point anyway.

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #26 on: January 16, 2019, 03:22:29 pm »
This is work done in 2004.  It's also very easy to explain accurately in a non-mathematical manner.  I have no idea what "deep learning" is.  A friend has been doing a lot of work with support vector machines, but personally I know nothing about them.  I could go into a long essay about the problems with neural nets, etc, but it's really pointless.

The following description assumes that you can run the motors individually at different speeds during the test. If that's not the case then you'll need to omit the part about identification of the bad motor in the assembly.

"We have collected sample recordings of good motors and bad motors and put them in a  dictionary of motor recordings.  As we identify new quality control issues we add them to the dictionary.

When a motor is tested, the 4 sample recordings which best match the motor being tested are selected.  If any of those samples was taken from a defective motor, the assembly is diverted for rework, along with the identification of which motor in the assembly is defective."

That's an easy concept to grasp and is an accurate description.  The mathematical proof that the result is correct is the part that's hard.

Read the introduction to this paper.  It's an excellent introduction to the field.

https://statistics.stanford.edu/research/most-large-underdetermined-systems-linear-equations-minimal-l1-norm-solution-also-sparsest
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: sound analyzer for automating quality checks?
« Reply #27 on: January 16, 2019, 03:44:15 pm »
This reminds me of a "voice recognition" project I once did with an Arduino (ATmega32U4). I'm not in any way an expert in the field; my thinking was simple: with an FFT "waterfall" picture, all the bla-bla math boils down to pattern recognition in the image.
 
There is an excellent book: http://101science.com/dsp.htm

Quote
We will demonstrate FFT convolution with an example, an algorithm to locate
a predetermined pattern in an image.
http://www.dspguide.com/ch24/6.htm

The Arduino easily recognized single-word commands (memory limits put a constraint of 1 sec), though I had difficulty pronouncing the same word twice with an 80% match.
There was no problem if my computer said "Speaker Test" twice - the result was better than 96%.
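For what it's worth, a 1D numpy sketch of the FFT-convolution pattern search the quoted chapter describes (the image case is the same idea with 2D FFTs; the template length and offset here are arbitrary):

Code:
import numpy as np

def locate_pattern(signal, template):
    """Cross-correlate via FFT and return the lag where the template matches best."""
    n = len(signal) + len(template) - 1
    nfft = 1 << (n - 1).bit_length()             # next power of two >= n
    S = np.fft.rfft(signal, nfft)
    T = np.fft.rfft(template, nfft)
    corr = np.fft.irfft(S * np.conj(T), nfft)    # linear cross-correlation, lags >= 0
    return int(np.argmax(corr[:len(signal) - len(template) + 1]))

rng = np.random.default_rng(1)
template = rng.standard_normal(256)
signal = 0.1 * rng.standard_normal(8192)
signal[3000:3256] += template                    # bury the pattern at offset 3000
print(locate_pattern(signal, template))          # expect 3000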
« Last Edit: January 16, 2019, 03:46:30 pm by MasterT »
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #28 on: January 16, 2019, 04:25:52 pm »

Quote
We will demonstrate FFT convolution with an example, an algorithm to locate
a predetermined pattern in an image.
http://www.dspguide.com/ch24/6.htm


I had some concerns when I read that, but a quick skim of the link showed that it was doing things properly.  That is the 1940's Norbert Wiener approach.  Still perfectly valid and useful, but not as powerful as a sparse L1 pursuit.  The reason being that it's L2 (least squared error).  But until recently L2 was all one could afford computationally and even that was often a strain on a VAX 11/780.  An L2 solution smears the result which L1 does not do. 

However, you can get close to an L1 using reweighted least squares or using singular value decomposition and truncating the eigenspectrum of a Karhunen-Loeve Transform (KLT).  The latter was my tool of choice for problems like this until 2013 when I learned of the work by Donoho and Candes.

What brought their work to my attention was when I realized that basis pursuit following the description in  Mallat's 3rd ed was doing things I *knew* based on many years of using the SVD-KLT approach were impossible.  As I had almost 30 years experience with SVD-KLT, that *really* got my attention.  SVD-KLT is very powerful in good hands.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #29 on: January 17, 2019, 12:55:28 am »
Thanks, you've been very helpful. I made lots of progress on this.

I guess another quick question: I used a sample rate of 44100 Hz, but each frame is 1024 samples (or 2048 in another version). This means my frequency resolution will not be an integer. Is that a problem? Should I use a sample rate that is a multiple of the frame size?
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #30 on: January 17, 2019, 02:19:10 am »
Makes no difference at all.  The short time series smears the spectrum quite a lot.

I'm very pleased and impressed with the progress you've made.  This is pretty standard stuff, but it's not trivial and as you now know is a significant amount of work.  Your employer is lucky to have you.

As you are automating what is currently a manual procedure I'd like to suggest that you start out using a graph of the DUT spectrum with limit lines or a simple waterfall display on a monitor in portrait mode showing the spectra for each device.  Have a human make the send to rework decision.  Get some experience with that before implementing  SVD-KL or basis pursuit.

My reason for suggesting this is the graphical displays will make it easy for everyone to understand what you are doing when you automate the last step.

 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: sound analyzer for automating quality checks?
« Reply #31 on: January 17, 2019, 02:43:45 am »

Quote
We will demonstrate FFT convolution with an example, an algorithm to locate
a predetermined pattern in an image.
http://www.dspguide.com/ch24/6.htm


I had some concerns when I read that, but a quick skim of the link showed that it was doing things properly.  That is the 1940's Norbert Wiener approach.  Still perfectly valid and useful, but not as powerful as a sparse L1 pursuit.  The reason being that it's L2 (least squared error).  But until recently L2 was all one could afford computationally and even that was often a strain on a VAX 11/780.  An L2 solution smears the result which L1 does not do. 

However, you can get close to an L1 using reweighted least squares or using singular value decomposition and truncating the eigenspectrum of a Karhunen-Loeve Transform (KLT).  The latter was my tool of choice for problems like this until 2013 when I learned of the work by Donoho and Candes.

What brought their work to my attention was when I realized that basis pursuit following the description in  Mallat's 3rd ed was doing things I *knew* based on many years of using the SVD-KLT approach were impossible.  As I had almost 30 years experience with SVD-KLT, that *really* got my attention.  SVD-KLT is very powerful in good hands.

After reading this vocabulary, I understand why "Mad Cow" disease outbreaks happened.
Lucky that mathematics was invented long before patent lawyers were born.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #32 on: January 17, 2019, 04:28:06 am »
It's far worse than you can imagine.  The same mathematics appears with a dozen names in the literature.  I've lost count of how many times I've spent several days investigating some "new" algorithm only to find that it was just someone reinventing the wheel.

I go to considerable trouble to try to restrict myself to what appears to be the mainstream lexicon.  But if you span lots of disciplines as I do, it eventually makes you crazy.  What I wrote is actually *very* generic.  But I feel obliged to define things like L0, L1 and L2 norms so that non-mathematicians have some idea of what I'm saying.  I hate it when someone uses a dozen acronyms without defining them.

If you think this was bad, look at  "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut.  It was written for mathematicians and the jargon is incredibly opaque.  "A Wavelet Tour of Signal Processing" by Mallat is almost as bad.

I'm 65.  My undergraduate degree was in English literature.  My MS was in geology.  So the fact that I spent 3 years reading F&R twice and Mallat once plus about 1500 pages of original papers in mathematics rather boggles my mind.  But it was a lot of fun.  I just wish I could find someone else that I could talk to about it.
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: sound analyzer for automating quality checks?
« Reply #33 on: January 17, 2019, 04:55:39 am »
I always thought that a man should be proud of what he has done, or what he has invented,
not of what he has read and is able to "parrot" on a forum where nobody would understand what he is talking about.
No offence meant; just a non-native English speaker with post-brainwash traumatic disorder.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #34 on: January 17, 2019, 02:35:21 pm »
Learning is the most important accomplishment in life.  I read 3000+ pages over 3 years and I can explain what it is, why it works and how to do it in simple English.  F&R proved so difficult  on first reading I had to read Mallat and then reread F&R.  F&R is 600 pages; Mallat is 800. But even after that it was still unclear.  It was not until I read the original papers by Candes, Donoho and their students and made a foray into the geometry of N dimensional space that I reached the point I understood.  I worked as a research scientist/programmer in the oil industry with some of the very best people in the field.

Nothing I have done compares with the difficulty of learning what I refer to as  "sparse L1 pursuits".

Application include:

compressive sensing (single pixel camera and MRI video)
passive radar (locate other airplanes using only the ambient RF field so that you do not reveal your presence)
matrix completion (the Netflix prize solution)
blind source separation (isolate  any speaker at a cocktail party with a few microphones randomly placed)
statistics and machine learning (far too many buzz words to list)
error correction (detect signals below the noise floor)
genetics (identify the alleles in DNA that cause an inherited trait)

There are more, but they involve really exotic mathematics and are primarily of interest to mathematicians rather than engineers and scientists.

My references to Wiener, SVD-KL, etc  were for the sake of tying to the formal education in mathematics which most engineers receive, albeit without sufficient exercise to fully comprehend until graduate school.  That is all basic DSP.   The attached figure is why sparse L1 pursuits are so different and important.  The explanation will mean nothing if you do not understand Shannon-Nyquist sampling theory.

Upper left figure is the Fourier spectrum of an arbitrary waveform.  Below it is the time domain waveform with 16 samples randomly selected from the 64 samples.  The plot is drawn with the usual sin(x)/x interpolation between points as done on a DSO.

At the top right is the result of attempting to recover the Fourier spectrum from 16 samples using an L2 norm inverse.  The bottom right is the result of doing the same thing except using an L1 norm.  In both cases Ax=y is being inverted to recover the Fourier coefficients in x.  The FFT solves the same problem via L2 under Shannon-Nyquist sampling constraints.  Shannon still applies in the L1 solution.  Zeros don't convey information.

That is the biggest advance in signal processing since Wiener's work in the 1940's.  All DSP that does not have "wavelet" attached to it is based on Wiener's work.  That's most of the DSP I've seen done in 37 years of doing it, mostly in major oil company research and technical services departments.
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: sound analyzer for automating quality checks?
« Reply #35 on: January 17, 2019, 05:15:27 pm »
The KGB used books to "program" human beings into "android-like" robots.
Then NLP - Neuro-Linguistic Programming - became very popular.
Some poor guys never realize they were robots all their miserable lives.

I'm saying this to warn "not to read" any crap you may find in the library or on the internet. It may and will be used against you.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #36 on: January 17, 2019, 06:19:20 pm »
Humans have been programmed since they first appeared on the planet.  It's called child rearing.  Unfortunately most of the parents of the 20-something generation didn't actually do that; they left it to Mr. Rogers, Sesame Street and the rest of the crap on TV.

Most people are robots but don't realize it and would vehemently deny it.

None of this has anything to do with this thread or with electronics or technology in general.  So please take it elsewhere.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #37 on: January 17, 2019, 07:22:38 pm »
I had an interesting insight into the ambient noise problem when I went to bed last night.  I don't know why, but this is a common occurrence. I think it's because I take my brain out of gear and just let it idle.

There are three sensible approaches to the problem:

1)  Subtract an average ambient noise spectrum from the result.  That works OK if the noise characteristics are time invariant.

2)  Use a number of ambient noise spectra in an SVD-KL decomposition.  That will work quite well, but doesn't provide very good separation between device noise and ambient noise.

3) Use a large number of ambient noise spectra collected over several days and place them in the dictionary for a basis pursuit.  Increase the allowed number of non-zero coefficients in x from 4 to 5, the 4 gear trains and the ambient noise.

The last of these is sufficiently advanced to qualify as an MS thesis topic anywhere.  Probably not quite PhD dissertation topic grade in mathematics at a first rank school, but close.  And very likely it would qualify at Stanford levels for a PhD in Industrial Engineering and statistical process control.

The fundamental L0 problem is NP-hard.  It's the classic combinatorial problem.  The optimal answer requires evaluating the residual error for every combination of 5 vectors drawn from a set of 10,000 or more.  The really big deal about the L0-L1 equivalence paper by Donoho that I cited earlier is that Donoho proved that *if and only if* you get a sparse result using an L1 norm, it is the L0 norm optimal result.  He has also shown by arguments from convexity in N dimensional space that you are overwhelmingly likely to get an answer.  The GLPK solver can make the selection of vectors from a set of 50,000 in L1 time, though not in 1 second.

However, Donoho later presented a solution technique drawn from the theory of regular polytopes in N dimensional space which I *think* is trivially parallel.  And work on the MRI video problem has led to other work on fast solutions.  I've not read any of the literature published since 2015-2016, so I don't know the current state of the art.
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: sound analyzer for automating quality checks?
« Reply #38 on: January 17, 2019, 09:28:58 pm »
None of this has anything to do with this thread or with electronics or technology in general.  So please take it elsewhere.

I'm perfectly aware of censorship that each my post goes through,  and don't feel a joy to talk with AI or retarded. So really don't care about the membership.

But I did a quick research on the obsession someone has with a book. Using search box in the upper right corner of this page:
1. Word "Donoho" was referenced 23 times by the same poster.
2. Word "Mallat" --//--     11 times just for last year.

Is it right time  to visit a doctor?
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #39 on: January 17, 2019, 10:46:13 pm »
None of this has anything to do with this thread or with electronics or technology in general.  So please take it elsewhere.

I'm perfectly aware of censorship that each my post goes through,  and don't feel a joy to talk with AI or retarded. So really don't care about the membership.

But I did a quick research on the obsession someone has with a book. Using search box in the upper right corner of this page:
1. Word "Donoho" was referenced 23 times by the same poster.
2. Word "Mallat" --//--     11 times just for last year.

Is it right time  to visit a doctor?

Absolutely, you should make an appointment with a dementia specialist as soon as possible.
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: sound analyzer for automating quality checks?
« Reply #40 on: January 17, 2019, 11:19:13 pm »
I did, with a psychiatrist; they told me it's "schizo paranoia".
This liberates me to disclose any confidential files that are well above your clearance, Parrot.
 

Offline engineheat (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #41 on: January 23, 2019, 06:09:57 pm »

There are three sensible approaches to the problem:

1)  Subtract an average ambient noise spectrum from the result.  That works OK if the noise characteristics are time invariant.


Thanks, this is what I was thinking about as well, will give it a try.

The test station is in an enclosure where it's mostly isolated from outside noise. However, inside the same enclosure there is another mechanism down the line that creates noise. That noise is predictable and is "in sync" with my recording interval: for example, 1 second into my recording, the same mechanism noise will occur. I guess averaging the noise FFT across the recording length and subtracting it should work decently in this case, right?

As a second question: when we think of "decibels" we think in terms of the loudness of a sound without regard to frequencies. So when they say a jet engine is at a certain decibel level, they are not breaking it down by frequency. What is the correct way to interpret a decibel figure? The total power of the FFT spectrum added together? Are there any devices that can measure the "decibel" level of a sound and output a single number?

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #42 on: January 23, 2019, 10:15:24 pm »
Sound level meters have weighting functions which are designed to match the sensitivity of the human ear.  For example, "A" weighting is a common choice, so the single number is a weighted average.  That's not appropriate for this problem.

In your case taking the log of the amplitude at each frequency will prevent a few strong frequencies from using up all your dynamic range.

You'll have to experiment at this point.  I'd start by taking a long recording of the noise, breaking it up into say 1000 segments the same length as you are using to check the motors and use that.  While you're at it, in addition to the mean at each frequency, compute the standard deviation.

To start with I would not do the subtraction.  I'd plot the mean and upper and lower 1st and 2nd sigma for the noise data and the same for a long recording of a "golden" reference motor assembly with the noise present.

That way, if the noise goes outside the normal bounds, it will alert the operator.  If a sample goes outside the 2nd sigma bounds on the golden reference you've got a problem motor.  I'd test this for a while in parallel with the existing line operation to get a feel for the issues that might arise.  While this is not very sophisticated relative to the other solutions I mentioned, it's also not trivial.  You are very likely to get some surprises.  Once you have data and can make plots with gnuplot, please post them so I can look at them.
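A sketch of that bookkeeping in numpy: per-frequency mean and standard deviation of the log-amplitude spectra for a long recording, plus a simple out-of-bounds check against a reference band (the segment length and the 2-sigma limit come from this thread; the function names and the float-array input are assumptions):

Code:
import numpy as np

def segment_spectra_db(x, nfft=4096):
    """Split a long float recording into windowed nfft-long segments and return
    one log-amplitude spectrum (dB) per row."""
    nseg = len(x) // nfft
    segs = x[:nseg * nfft].reshape(nseg, nfft) * np.bartlett(nfft)
    return 20 * np.log10(np.abs(np.fft.rfft(segs, axis=1)) + 1e-12)

def spectral_stats(x, nfft=4096):
    """Per-frequency mean and standard deviation of the segment spectra."""
    s = segment_spectra_db(x, nfft)
    return s.mean(axis=0), s.std(axis=0)

def fraction_out_of_bounds(dut_rec, mean_ref, std_ref, k=2.0, nfft=4096):
    """Fraction of frequency bins where the DUT's averaged spectrum leaves the
    mean +/- k*sigma band of the reference (noise or golden unit) statistics."""
    dut = segment_spectra_db(dut_rec, nfft).mean(axis=0)
    return float(np.mean((dut > mean_ref + k * std_ref) |
                         (dut < mean_ref - k * std_ref)))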

I don't know your age or circumstances, so I can't say whether doing an MS or PhD in industrial engineering makes sense for you.  However, implementing the basis pursuit and submitting a  paper describing it in a professional society journal would become an important and highly cited paper even if someone has done something with basis pursuit already.  Even without additional educational credentials, that means more job opportunities and money.

The basis pursuit is no more difficult than what you have already done.  It's a little more work as there's new software to learn, but I can help you with that by supplying some example cases to play with.  Once you've got it set up it would only require modification if the noise environment changed or the assembly design changed.

I don't recall if I mentioned it in this thread, but I'm a retired oil industry research scientist/programmer.   I worked for three majors, a super major and two large independents. 

As a contractor I routinely attended industry consortia on behalf of the client at Stanford and other top schools in my field.  So I was grilling students and professors on the work they were presenting.  It's *very* unusual for a contractor to do that.  An all expense paid trip to Palo Alto or the Stanley Hotel in Estes Park (where "The Shining" was filmed) is rather a plum assignment.  But I was the last person left after a couple of rounds of "right sizing" who knew enough to do it.  Generally I knew about 1/2 the attendees either from working with them at other oil companies or from the annual professional society meetings. 

I think it worth noting I did not get my PhD; personality conflicts with my supervisor led to losing my financial support after 4 years.  So I'd have had to go to Stanford and spend another 6 years.  Losing another 6 years of income living on a grad student stipend was simply too costly.  It did prevent my getting jobs where they wanted a PhD to impress the customer, but otherwise had no effect on my earnings or status at work.  Most people assumed I had it and were very surprised when I said I didn't.  At the PhD level, normal introductory small talk includes inquiring where and under whom someone took their degree.  In the case of major consortia such as the Stanford Exploration Project, founded by Jon Claerbout and now run by his student Biondo Biondi, which has run for 45 years, they will also ask when.  That tells them who your classmates were and the work with which you are familiar.

I'm not an industrial engineer, so I can't say how much attention sparse L1 pursuits have attracted in that field, but there are two active research consortia in geophysics entirely devoted to the subject.  One at U of BC in Vancouver led by Felix Hermann and the other at Alberta  led by Mauricio Sacchi.

If you're not familiar with industrial research consortia, these are organizations that the big name professors use to raise money to fund their graduate students.  Typical fee is $35-55K/yr.  For this you get access to the research a couple of years before non-members.  In many cases the software is only available to members even after 5-10 years.  You also get access to the students and if you pick up the phone and call the professor, he takes your call.  He'll also come do a day long short course if requested.

I took a quick look at Mallat, but I don't think I could post a long enough scan to be useful.  But I have posted a figure from "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut.

The upper left shows the amplitude of the Fourier coefficients for the time domain trace shown below it.  From the 64 points in the inverse transform of the upper left, 16 were chosen at random.  Only the points chosen are marked, and a sin(x)/x interpolator has been applied to the 64 samples generated by doing the inverse transform of the FFT in the upper left.

The upper right is the result of attempting to recover the amplitude coefficients from the 16 samples shown in the lower left using an L2 solution of Ax=y.  The lower right is the result of solving the same problem using an L1 norm instead.  The Nyquist criterion would require all 64 samples to recover the amplitudes using L2, but in the L1 case only Shannon's information bound applies.  A single sine wave can be fully described by 3 samples, so Shannon's bound says we *must* have at least 15 samples to convey the information in this example.  In the case of a sparse L1 pursuit the bound is a little higher and 16 samples are needed.  But that is one quarter of the number of samples the Nyquist criterion requires, so it represents a substantial reduction in the time required to acquire the data.
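If you want to reproduce the flavor of that figure, here is a rough Python sketch.  It is not the Foucart & Rauhut code; it swaps their complex Fourier basis for a real cosine (DCT) basis so everything stays real and can be handed to an off-the-shelf linear-programming solver, and the length, sample count and sparsity are just illustrative numbers.

import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m, s = 64, 16, 3                            # signal length, samples kept, sparsity

Psi = idct(np.eye(N), axis=0, norm='ortho')    # columns form an orthonormal cosine basis
c_true = np.zeros(N)
c_true[rng.choice(N, s, replace=False)] = rng.uniform(1.0, 2.0, s)
x = Psi @ c_true                               # time-domain signal, sparse in Psi

keep = np.sort(rng.choice(N, m, replace=False))
A, y = Psi[keep, :], x[keep]                   # the 16 randomly chosen time samples

# L2: the minimum-norm least-squares answer smears energy over all 64 coefficients
c_l2 = np.linalg.pinv(A) @ y

# L1 (basis pursuit): minimize sum(t) subject to -t <= c <= t and A c = y
cost = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
A_eq = np.hstack([A, np.zeros((m, N))])
res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
c_l1 = res.x[:N]

print("L2 error:", np.linalg.norm(c_l2 - c_true))   # typically order 1
print("L1 error:", np.linalg.norm(c_l1 - c_true))   # typically near zero

The L2 answer spreads energy over all 64 coefficients; the L1 answer usually lands on the handful of true ones, which is the whole point of the lower-right panel.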

Mallat also treats the problem of removing additive noise, which is why it's too big to post a scan.

On the surface this is very different from your problem, but the underlying mathematics are like a magician's bag which changes color every time he turns it inside out.
 

Offline engineheatTopic starter

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #43 on: January 24, 2019, 03:08:37 am »


Quote from: rhb
I don't know your age or circumstances, so I can't say whether doing an MS or PhD in industrial engineering makes sense for you.  However, implementing the basis pursuit and describing it in a professional-society journal paper would yield an important, highly cited publication even if someone has already done something with basis pursuit.  Even without additional educational credentials, that means more job opportunities and money.


I got a Master's in Computer Engineering at UIUC, and a few years ago I decided a PhD is not right for me.  I am curious and want to be a lifelong learner.  I will explore basis pursuit later, but for now I'd like to start with something simple... they do want a solution ASAP because right now the quality checks are done manually.

I already ordered a few different microphones. Mind you, not just any mics, but cardioid mics, which hopefully can help filter out ambient noise from the sides...

I remember you mentioned piezo sensors.  That's also something I want to explore, but I have no experience with them.  I think with a piezo I could measure the vibrations directly through contact, which would make ambient noise irrelevant.  As I said before, the production line already has a station where a cylinder/actuator turns on the device by pressing a button and performs a function check (makes sure things are spinning, etc.).  The button isn't released until the test is over.  I wonder if it's possible to attach a piezo sensor to the head of the actuator to measure the vibration.  However, there will be variations in the "press force" due to variations in placement, so I wonder whether this will ruin my results.

Just want to try multiple solutions in parallel.

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #44 on: January 24, 2019, 01:20:27 pm »
Getting a PhD is expensive and, unless you have to have the union card, not worthwhile.  I only went back because I had *no* training in seismology at all.  I'd been hired into a job because I had a degree in geology and had taken Diff EQ.  But I thought DSP was super cool and I wanted better than I could do on my own with a stack of books.  When I was hired I was promised several months of training in Tulsa, where Amoco had their labs, but I worked for 18 months before I got my only two-week course.  Meanwhile I had to do the stuff we are talking about and more.  The only thing that saved me was that my boss had an MSEE and I had a ham license.  He had come from the labs, where he wrote a lot of the DSP codes, so he could translate geophysics speak into radio speak.  I also knew optics very well, so all my knowledge of the wave equation was very important.

The thing that *is* important about the PhD is acquiring the ability to master a subject for which you have no prior training.  I'm immensely proud of having been able to learn sparse L1 pursuits on my own.  I won't claim to have mastered it because I've not found anyone to test my knowledge against except at a fairly superficial level.  But I do understand it well enough to know that most of the 3000 pages I read are irrelevant to actually applying it.  Most of it is just the logical proof that it works and why.  There is a similar amount of verbiage that was developed to prove that the Fourier transform worked.  As with Heaviside's work, it took the mathematicians a lot of time to develop the logical justification.

Get a couple of these to play with (just in case you break one):

https://www.stewmac.com/Pickups_and_Electronics/Pickups/Violin_Pickups/Schatten_Soundboard_Transducer.html

They are very fragile, so epoxy them to a thin ceramic disk magnet of the same diameter or slightly larger.  Neodymium magnets would be too strong.  The construction is a thin brass sheet with a piezo sensor bonded to it and foam on top to reduce feedback.

One possibility would be to remove the foam and cast an epoxy case with an eyelet.  An actuator could lower the  unit on a string until the magnet grabbed.  Leave the string slack while running the test and then pull it away.

They are made as light as possible so they don't damp the guitar top; that is specific to the acoustic guitar application and doesn't matter in your case.  There are sensors with broader response, but the prices start going up quickly.  These are cheap enough to play with.

I glued a small spruce disk to the sensor for reinforcement per factory instructions.  They supply butyl tape to attach them and if you try to move one without the spruce reinforcement it will break.

The first photo shows the experimental setup.  I took a scrap tweeter with a busted cone, stripped off the remains and glued a spruce disk to the end of the cone.  Then I swept it with my 33622A from 20 Hz to 20 kHz.  The first scope shot shows the input and the output over the full range, and the second up to about 8 kHz.  The amplitude variation of the input is presumably just mismatch between the 33622A and the inductive load; as the reactance rises, the voltage across the terminals should go up.  At the time I didn't realize that the 33622A had a high-impedance output option, so this was driven from a 50 ohm source resistance and the apparent ramp is just the voltage divider effect of the load.
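You can sanity-check the voltage-divider explanation in a few lines of Python; the 6 ohm / 0.5 mH voice-coil values below are guesses for illustration, not measurements of that tweeter:

import numpy as np

f = np.logspace(np.log10(20), np.log10(20e3), 10)   # 20 Hz to 20 kHz
Rs, Rvc, L = 50.0, 6.0, 0.5e-3                      # source R, assumed voice-coil R and L
Z = Rvc + 1j * 2 * np.pi * f * L                    # load impedance
ratio = np.abs(Z / (Z + Rs))                        # V_load / V_source
for fi, r in zip(f, ratio):
    print(f"{fi:8.0f} Hz   {r:.3f}")

The terminal voltage climbs from roughly a tenth of the source voltage at the bottom of the band to most of it at the top, which is exactly the ramp in the scope shot.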

One of the applications of sparse L1 pursuits is "blind source separation", or in simple terms: with a few microphones scattered around a crowded cocktail party, isolate any speaker in the room.  It's all a question of setting up the proper A matrix for the problem.  I mention that because with a pair of microphones there is the potential to diagnose the exact fault location, and thus speed up the rework and collect SPC data.

A box with sides built up as drywall / soft foam / acoustic tile, with the drywall on the outside and doors at each end running vertically in rails, would reduce ambient noise a lot and be very amenable to full automation on a fast-moving assembly line.

Clearly what is needed is a "good enough" solution ASAP.  You're already very close to that.  The major hurdle of the fundamental mathematics is done.  So now it's a question of engineering an implementation which meshes well with the production process.  Edison demonstrated how you do that.  You try a lot of possible solutions to a problem.

Have fun and show me some pictures.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #45 on: January 26, 2019, 10:51:43 pm »

Quote from: rhb
There are three sensible approaches to the problem:

1)  Subtract an average ambient noise spectrum from the result.  That works OK if the noise characteristics are time invariant.

Quote from: engineheat
Thanks, this is what I was thinking about as well, will give it a try.


1) is required.  The others are optional.  But to use them you need to have examples of bad assemblies. So you have to do 1) first to use either.  The best way to discover anomalies is for a human to look at the data and mark them.  The computer can take it from there with ease after a little programming.
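For 1), once you have the two recordings the whole thing is a few lines.  Here is a sketch in Python/scipy; the 48 kHz rate, the segment length and the synthetic stand-in recordings are placeholders, not anything from your setup:

import numpy as np
from scipy.signal import welch

fs = 48000                                   # assumed sound-card sample rate
rng = np.random.default_rng(0)

# Stand-ins for the real recordings: 10 s of ambient noise, and 10 s of the
# same kind of noise plus a 120 Hz "motor" tone.
ambient_rec = rng.normal(size=fs * 10)
t = np.arange(fs * 10) / fs
motor_rec = rng.normal(size=fs * 10) + 0.2 * np.sin(2 * np.pi * 120 * t)

# Welch averaging gives the "average ambient noise spectrum"; nperseg sets
# the resolution bandwidth (8192 points at 48 kHz is about 5.9 Hz).
f, P_ambient = welch(ambient_rec, fs=fs, nperseg=8192)
f, P_motor = welch(motor_rec, fs=fs, nperseg=8192)

# Subtract the average ambient spectrum, clipping so quiet bins can't go
# negative.  What is left is the motor's own spectral signature.
P_signal = np.clip(P_motor - P_ambient, 0.0, None)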

It's really worth implementing SVD-KL and basis pursuit and comparing the results.  This is especially true with noisy data. I'm quite confident that the improvement in accuracy and resolution from step to step will be dramatic.
 

Offline engineheatTopic starter

  • Frequent Contributor
  • **
  • Posts: 267
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #46 on: January 29, 2019, 01:50:46 pm »
Thanks.

I'm sure that even after I implement that, they will ask me to prove it is doing what it should.  For example, they might wonder whether the "microphone is calibrated."  What can I do to prove to them that the system is robust?

Are there any good sound samples with known frequencies that I can use to demonstrate that the FFT is outputting the right result?  (I already tested on YouTube videos of sounds, but they might want something more rigorous than that...)

Also, the magnitudes of the FFT depend on the sound card, the format of the data, and I guess also the microphone used.  Is there even a way to calibrate all that, and does it even make sense to do so?  I can totally imagine they will want to know where those magnitude numbers come from.

Thanks
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #47 on: January 29, 2019, 08:53:25 pm »
I posted the algebra for sound card errors previously.  Correcting the sound card errors is more an exercise in gratuitous accuracy than a substantive one, mostly good for impressing management that you've been very thorough.  But you might find some real surprises.
   
Calibrating microphones is expensive and difficult, and probably not really needed.  To "normalize" the microphones, make a soundproof box and use a spark plug as the source.  The traditional reference was lead shot poured onto a sheet of steel.  Record with each mic in all positions.  Then the math is a simple variant of the sound card algebra.
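One way to do that normalization, if both mics can hear the spark at the same time, is a transfer-function (H1) estimate between them.  This is a sketch of the idea, not the algebra posted earlier, and the recordings below are synthetic stand-ins:

import numpy as np
from scipy.signal import csd, welch

fs = 48000
rng = np.random.default_rng(3)
spark = rng.normal(size=fs * 5)                              # stand-in for the spark excitation
ref_rec = spark                                              # reference mic's recording
dut_rec = 0.8 * spark + 0.01 * rng.normal(size=spark.size)   # mic under test, a couple of dB down

f, Pxx = welch(ref_rec, fs=fs, nperseg=4096)
f, Pxy = csd(ref_rec, dut_rec, fs=fs, nperseg=4096)
H = Pxy / Pxx                                                # H1 transfer-function estimate
# |H| is the gain of the mic under test relative to the reference at each
# frequency; divide later power spectra by |H|**2 to put all mics on the
# same footing.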

BTW, any acoustical enclosure should avoid flat or parallel surfaces.  Moisten 1/4" gypsum board to make it flexible and apply two layers with the seams at 90°.  This is to prevent resonance between the walls from saturating the pickup.

The canonical tests for an FFT start with a purely real sequence of ones: its transform is a single spike at t=0.  The corollary is to transform unit spikes at t=0, t=N/2 and t=N-1; these should all have unit modulus, and for the end-point spike the phase angle should run linearly from zero to two pi across the bins.  Any deviation needs to be investigated.  These tests catch an "off by one" in the array indexing of the FFT, which is by far the most common error.  The other test is to run the same cases through the transform in the opposite direction.

The summation in the transform leads to the scaling issue.  I like 1/sqrt(N), as that means the magnitude of the values stays the same in both domains.  There are six permutations of the FFT: three scalings (1, 1/N or 1/sqrt(N)) times the two signs of the exponent.  All are in use by some workers, so you have to watch out for that.
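In numpy those checks look like the following.  Note that numpy's forward transform uses the exp(-j*2*pi*k*n/N) sign convention with no scaling unless you ask for it, and norm='ortho' gives the symmetric 1/sqrt(N) convention:

import numpy as np

N = 64

# 1) all-ones input: a single spike at bin 0, zero everywhere else
X = np.fft.fft(np.ones(N))
assert np.isclose(X[0], N) and np.allclose(X[1:], 0.0)

# 2) unit impulse at n = 1: unit modulus and a linear phase ramp across the
#    bins.  An off-by-one in the indexing shows up immediately as the wrong
#    slope or a constant phase offset.
x = np.zeros(N); x[1] = 1.0
X = np.fft.fft(x)
assert np.allclose(np.abs(X), 1.0)
assert np.allclose(X, np.exp(-2j * np.pi * np.arange(N) / N))

# 3) scaling conventions: norm='ortho' applies 1/sqrt(N), so Parseval's
#    relation holds with no extra factors
Xo = np.fft.fft(x, norm='ortho')
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(Xo) ** 2))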

My professional work has always been the opposite: my job has been to find the anomaly.  That's well defined physically.  The noise, however, is another matter.  The best seismic processors can recover clear signal from data so noisy that no signal is visible at all.

The first reference is the masterpiece of classical Wiener-Shannon-Nyquist analysis.  There is a detailed explanation of everything you want to do there, so I can explain things simply by saying "read section n.p".  I started using it 30 years ago in the 2nd edition.  I have pretty much all the classic DSP texts; it's 10-12 ft of shelving going back to the original publications by Wiener and Shannon.  None of them compare to B&P in breadth of coverage of issues of real-world importance.  Allan Piersol passed away, so the 4th is the last edition.  B&P summarizes the mathematics from Fourier in 1820 to Wiener et al. in 1940, with detailed derivations for common practical cases, so if you can read the math you can write the code.  There are caveats, though, as with everything: read closely and check the citations.

There is a *lot* of mathematics to learn to implement your use case properly.  This is the first step which must be mastered.  Much of it should be familiar to varying degrees with varying terminology. 

Random Data
Bendat & Piersol
4th ed

The 2nd book is Mallat.  It's an essential bridge into the mathematics of modern DSP.  I had it for years in a couple of editions, but only used it as a reference; I never read it through, just looked at the pictures, which are very cool.  Foucart & Rauhut made me realize I needed to fill a gap in my knowledge, so I read Mallat, which resolved the matter.  For years I just used the back part of Mallat and skipped the wavelet discussion, but wavelet mathematics are important to sparse L1 pursuits in general.

A Wavelet Tour of Signal Processing
Stephane Mallat
3rd ed

The discussion in Mallat of sparse L1 pursuits is quite adequate for actual practice in a wide range of applications. As it is 10 years old, a new edition should be out soon.

The next thing to be done is to collect data on motors, good and bad.  As the current screening is manual, put up a waterfall display of the last N tests for the manual tester to classify as good or bad; put that monitor in portrait mode.  Also present mean, median, mode and standard deviation A(f) plots on a landscape-oriented monitor.  Record all the data and then start running analyses per B&P.  But involve the current testers and the rework staff.  The latter should repeat the test in a quieter setting, rework the motor (noting what was done), and record the motor again after rework.
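The display side is only a few lines of Python/matplotlib.  Here `spectra` is a made-up (tests x frequency bins) array standing in for the averaged A(f) of the last N units; the mode is left out since it isn't well defined for continuous spectra without binning:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
f = np.linspace(0, 4000, 512)                                        # assumed frequency axis, Hz
spectra = 1.0 / (1.0 + f / 500.0) + 0.05 * rng.random((100, 512))    # stand-in A(f) data

# Landscape monitor: mean, median and a +3 sigma guard curve
plt.plot(f, spectra.mean(axis=0), label='mean')
plt.plot(f, np.median(spectra, axis=0), label='median')
plt.plot(f, spectra.mean(axis=0) + 3 * spectra.std(axis=0), '--', label='mean + 3 sigma')
plt.xlabel('Frequency (Hz)'); plt.ylabel('A(f)'); plt.legend()
plt.show()

# Portrait monitor: the same array as a waterfall, one row per test
plt.imshow(spectra, aspect='auto')
plt.xlabel('Frequency bin'); plt.ylabel('Test number')
plt.show()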

That data is what validates the process.  The people who have been doing it say it works.  Move them to another bottleneck and repeat.  It's important that success brings them better jobs.

For a test to be useful it needs to last for several rotations of the slowest shaft.  Testing for shorter periods just means some things are not tested.

It's important to note that all the gear work driven by the motor has a harmonic or sub-harmonic relationship to the motor speed.  A good assembly will have a tone at each shaft speed and at that speed times the number of teeth on each gear spinning at that speed.  Any other patterns present indicate a variation in the gear contact.
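As a worked example with made-up numbers (a 3000 rpm motor and a two-stage reduction; substitute the real tooth counts from the assembly drawing), the expected tones and the minimum record length implied by the slowest shaft fall out like this:

motor_rpm = 3000.0                       # hypothetical input speed
stages = [(17, 51), (13, 52)]            # (pinion teeth, gear teeth) per stage, also hypothetical

shaft_hz = motor_rpm / 60.0
print(f"input shaft: {shaft_hz:.2f} Hz")
for i, (pinion, gear) in enumerate(stages, start=1):
    mesh_hz = shaft_hz * pinion          # gear-mesh tone for this stage
    shaft_hz = shaft_hz * pinion / gear  # speed of the next (slower) shaft
    print(f"stage {i}: mesh {mesh_hz:.1f} Hz, output shaft {shaft_hz:.2f} Hz")

# Minimum useful record length: several turns of the slowest shaft
revs = 5                                 # assumed number of revolutions
print(f"record at least {revs / shaft_hz:.2f} s")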

FWIW, so far as I can tell sparse L1 pursuits are at the heart of "deep learning", which in turn is just "neural nets" done over, whose great innovation was solving an unknown equation the program picked out for itself.  You just hope it has something to do with what you wanted.  What can I say?  I've been to a lot of dog and pony shows.
 

Online rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: sound analyzer for automating quality checks?
« Reply #48 on: February 10, 2019, 12:30:58 am »
I've been contemplating the problem. Both SVD-KL and sparse L1 pursuits require models of good and bad devices.  Those are easy to accumulate over time, but what to do to start?

Get some golden samples of good gearmotors, count the teeth on all the shafts so you know at what multiples of the shaft rotation rates to expect spectral peaks, and make a recording long enough that you can average 10,000 FFTs.  Do this in a quiet area and identify all the spectral peaks.
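Averaging thousands of short FFTs is exactly what Welch's method does, and scipy's find_peaks will tag the lines for you.  A sketch, with a synthetic recording standing in for the golden unit:

import numpy as np
from scipy.signal import welch, find_peaks

fs = 48000
rng = np.random.default_rng(2)
t = np.arange(fs * 60) / fs                    # stand-in: a 60 s quiet-room recording
golden_rec = (np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 850 * t)
              + 0.02 * rng.normal(size=t.size))

# Welch's method averages many short FFTs: nperseg=4096 at 48 kHz gives
# roughly 11.7 Hz bins and about 1400 averaged segments over 60 s.
f, P_ref = welch(golden_rec, fs=fs, nperseg=4096)

# Tag every line standing well above the noise floor, then check the list
# against the shaft and gear-mesh frequencies expected from the tooth counts.
peaks, _ = find_peaks(10 * np.log10(P_ref), prominence=20)
print("reference peaks (Hz):", f[peaks])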

In the line setting, record the ambient noise alone, then run the motor and record the motor plus the noise.  Difference the two spectra and then examine the residual against the golden-reference recording.

You need to make sure the recordings have multiple full revolutions of all shafts so that all the teeth get checked.

My general thinking is that if you make several ambient noise recordings with gearmotor recordings in between, defective units should be apparent from the residual of the (gearmotor - ambient noise - golden reference) averaged spectra, and that simply summing the residual would give you a good initial pass/fail.  In any case, rework should have a well-isolated sound chamber for recording motors before and after rework, to collect the data for more sophisticated models.  You might well find that there are only a few distinct bad-unit spectra.
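The initial pass/fail is then almost no code.  In the sketch below, P_unit, P_ambient and P_ref are assumed to be Welch spectra on the same frequency grid (as in the earlier sketches), and the threshold is a placeholder to be set from the first batch of known-good units:

import numpy as np

THRESHOLD = 1e-6        # placeholder; calibrate against known-good units

def residual_score(P_unit, P_ambient, P_ref):
    # Clip at zero so bins where the unit is quieter than reference + ambient
    # cannot cancel bins where it is louder.
    residual = np.clip(P_unit - P_ambient - P_ref, 0.0, None)
    return residual.sum()

def unit_passes(P_unit, P_ambient, P_ref):
    return residual_score(P_unit, P_ambient, P_ref) < THRESHOLD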

 

