
2D Imaging Monochromator with Webcam


Ben321:
A typical monochromator has an entry slit, a rotatable reflective diffraction grating, a concave mirror, and an exit slit. Light enters through the first slit, is dispersed into a spectrum by the grating and focused by the concave mirror, and a specific spectral line exits through the second slit. The line that exits is selected by turning a knob that rotates the grating. Because the input and output apertures are vertical slits, not pinholes, the device doesn't produce just one point of light (which would let you analyze the intensity of various wavelengths at only one point on an illuminated object), but an entire vertical line (which lets you analyze a whole line of points on an object at various wavelengths).
Here's a diagram of an Edmund Optics monochromator: https://productimages.edmundoptics.com/3467.gif

However, a typical monochromator only forms a single line of an image at the selected wavelength; it doesn't form a complete 2D image. I did some online research to see if any device existed that would produce a complete 2D image at a selected wavelength (adjustable by turning a knob), and found none. That's when I decided to make one myself, using only a cheap webcam and a cheap plastic spectroscope. The hardware (which I already had before this project) cost about $15 in total, not the $1000+ you'd expect to pay for even a typical 1D monochromator like the Edmund Optics one. https://www.edmundoptics.com/f/Manual-Mini-Chrom-Monochromators/11508/

My 2D imaging monochromator also eliminates the mechanical parts and the rotatable diffraction grating used to select the wavelength. Instead, a full-spectrum image of the slit is recorded in every video frame, and the wavelength is selected by the x coordinate within that frame. But if the x coordinate of the video frame represents wavelength, how does one get the actual horizontal position in the image? It's simple: the frame number supplies the horizontal position. As video is being recorded, the entire apparatus is rotated slowly in the horizontal plane, scanning across the object to be imaged. Horizontal position on the target is the frame number in the video, vertical position on the target is the y coordinate within a frame, and the wavelength is taken from the x coordinate within a frame. A sketch of this index bookkeeping follows below.
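
The whole reconstruction is really just index shuffling. Here's the idea as a minimal Python sketch (the names are mine for illustration; my real program is in VB6):

--- Code: ---
# Sketch of the index shuffling (my naming, not my actual VB6 code):
# frame t, pixel (x, y)  -->  output image for wavelength(x), pixel (t, y)
def reconstruct(frames, x):
    """Monochromatic image for the wavelength at frame column x.

    frames[t][y][x] is the scanned video; in the output, frame number t
    becomes the horizontal axis and frame row y stays the vertical axis.
    """
    return [[frames[t][y][x] for t in range(len(frames))]
            for y in range(len(frames[0]))]
--- End code ---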

In my case, since I don't have a motorized rotating stage to mount the device on, I just rotate it slowly by hand while holding it. Also, the webcam is not permanently attached to the plastic spectroscope in any way, so I have to hold the two together with my hands. This is the "poor man's" way of making what would otherwise be a VERY expensive piece of equipment. And surprisingly, it actually worked quite well. See attachment "experimental setup.png" for the diagram.

For my test of this setup, I scanned the light fixture in my ceiling. Each frame from the webcam is 320x240, but because the vertical slit is so short, the visible height of the target through the slit is only 17 pixels. I recorded 290 frames, so the width of the image at any given wavelength is 290 pixels (output image size is 290x17). Of course, this required me to write my own software to generate a new image, using the input frame number as the x coordinate of the output image. I wrote it in VB6 (Visual Basic 6); not the newest programming language, but it works for all my needs. Because my spectroscope has a scale in it, I was able to calibrate my software so that a specific x coordinate in an input video frame corresponds to an exact wavelength. I used the free and open-source software FFMPEG to record video from my webcam and to split the video file into individual BMP (bitmap) image files, which were the input to my custom software.
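
For anyone who wants to reproduce this in a more common language, here's roughly what the processing does, sketched in Python with Pillow. The calibration constants, slit position, and file names below are placeholders for illustration, not my real values:

--- Code: ---
# Rough Python/Pillow sketch of what my VB6 program does. Constants and
# file names are placeholders, not my real calibration. The frames were
# split out of the recording beforehand with FFMPEG, e.g.:
#   ffmpeg -i capture.avi frames/frame%04d.bmp
from PIL import Image
import glob

NM_PER_PX = 2.0     # placeholder dispersion (nm per frame column)
NM_AT_X0  = 381.0   # placeholder wavelength at frame column x = 0
SLIT_TOP, SLIT_HEIGHT = 100, 17   # placeholder vertical extent of the slit

def column_for(target_nm):
    # Nearest frame column whose calibrated wavelength matches target_nm
    return int(round((target_nm - NM_AT_X0) / NM_PER_PX))

def build_image(frame_files, target_nm):
    x = column_for(target_nm)
    out = Image.new("RGB", (len(frame_files), SLIT_HEIGHT))
    for t, path in enumerate(frame_files):       # frame number -> output x
        frame = Image.open(path)
        for y in range(SLIT_HEIGHT):             # frame y -> output y
            out.putpixel((t, y), frame.getpixel((x, SLIT_TOP + y)))
    return out

frames = sorted(glob.glob("frames/frame*.bmp"))  # 290 frames in my test
build_image(frames, 550).save("image550.bmp")
--- End code ---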

I also had to make a decision: focus my webcam's manual-focus lens on the slit in the spectroscope, or on the object being imaged. Focusing on the object gives better vertical resolution of the object itself (it avoids vertical blurring of the object), while focusing on the slit increases the resolution of the spectrum (it avoids blurring the spectral lines). I decided to focus on the object being imaged. You can see a sample video frame that I've annotated by looking at the attachment "annotated spectrum video frame.png".

One problem I noticed is that the spectrum gets washed out whenever part of the image is too bright (as when looking directly at the ceiling light fixture in my room, my test light source). This can be seen in the attachment "annotated spectrum video frame (with blooming).png". Notice that light above 700nm (NIR wavelengths) is visible there, appearing pinkish in color. This is because I had previously removed the IR-blocking filter from the webcam to let it see NIR wavelengths. And yes, my ceiling uses LED bulbs, but even those (efficient visible-light emitters though they are) still emit some light in the NIR part of the spectrum.

I generated 4 output images, spaced 100nm apart, three corresponding to the primary colors (R, G, B) and one to NIR. These are at approximate wavelengths of 450, 550, 650, and 750 nanometers. The exact wavelengths, based on calibrating my software against the spectroscope's scale, are 451, 549, 651, and 749 nanometers. Note that the wavelength-to-pixel ratio (based on the size of the spectrum in a video frame) is greater than 1, so each pixel I move to the right in a video frame increases the wavelength by more than 1nm. Therefore I could not hit the exact wavelengths I wanted (450, 550, 650, and 750 nanometers); a small example of this rounding follows the image list below. I have attached these images:
image451.png
image549.png
image651.png
image749.png
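
To illustrate the rounding (with a made-up calibration of 2nm per column and 381nm at column 0, not my real numbers), only odd wavelengths would be reachable, and each round-number target falls between two reachable values:

--- Code: ---
# Made-up calibration: 2 nm per column, 381 nm at column 0, so only odd
# wavelengths are reachable and each round-number target sits between two.
nm_per_px, nm_at_x0 = 2.0, 381.0
for target_nm in (450, 550, 650, 750):
    x = (target_nm - nm_at_x0) / nm_per_px            # e.g. 34.5 for 450 nm
    lo, hi = int(x), int(x) + 1                       # neighbouring columns
    print(target_nm, "->", nm_at_x0 + nm_per_px * lo,
          "or", nm_at_x0 + nm_per_px * hi)            # 449.0 or 451.0, etc.
--- End code ---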

Note that while these are PNG images (for the purpose of uploading them here as attachments), both the images that went into my custom software and those it output were BMP images, as programs written in VB6 can't directly process PNG images without external software libraries (DLLs), and I wrote this program quickly in pure VB6 code. I used XnView to convert the BMP images to PNG for posting online.

I then used the free software Image Analyzer to convert these to grayscale, by extracting the blue channel from the 451nm image, the green channel from the 549nm image, and the red channel from the 651nm and 749nm images. I then combined them to form a single white-light image (attached as "R-G-B (651,549,451).png"), and another that uses the 749nm image for the red channel (attached as "IR-G-B (749,549,451).png"). Interestingly enough, the blue image (451nm) looks quite violet (as if it were 405nm). I wonder if this is just a defect in the color profile the camera uses to generate RGB values from the raw values recorded by the image sensor?
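
For reference, the same channel extraction and recombination can be sketched in Python with Pillow (I actually did this in Image Analyzer; the file names below assume BMP versions of the attached images):

--- Code: ---
# Channel extraction and recombination (I used Image Analyzer; this is an
# equivalent Pillow sketch with assumed BMP file names).
from PIL import Image

r = Image.open("image651.bmp").split()[0]   # red channel of the 651 nm image
g = Image.open("image549.bmp").split()[1]   # green channel of the 549 nm image
b = Image.open("image451.bmp").split()[2]   # blue channel of the 451 nm image
Image.merge("RGB", (r, g, b)).save("R-G-B (651,549,451).bmp")

# Same again, but the 749 nm (NIR) image supplies the red channel:
nir = Image.open("image749.bmp").split()[0]
Image.merge("RGB", (nir, g, b)).save("IR-G-B (749,549,451).bmp")
--- End code ---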

Ben321:
I should mention this: I have seen hyperspectral cameras which are similar (a varying-wavelength optical bandpass filter is placed over the CCD or CMOS imager chip), and these have each column of pixels representing a different wavelength. However, such linear variable filters (filters whose central pass wavelength changes linearly across the surface of the filter) are very expensive, and thus such cameras (while they could be used in the same application as mine here) are very expensive. My setup is much cheaper, and I'd recommend it to any hobbyist who wants to create such images.

mark03:
I know almost nothing about hyperspectral cameras, so take this with a grain of salt, but, I was told that they use a tomographic imaging principle:  First a cylindrical lens "integrates" the 2D image into a line, then a grating spreads that back out so you have one dimension wavelength, the other dimension integrated brightness.  Then you rotate the whole assembly and collect frames at various angles.  Apply the projection slice theorem (like a CT machine) and voila, 3D image of X by Y by wavelength.

Ben321:

--- Quote from: mark03 on April 03, 2019, 03:04:19 pm ---I know almost nothing about hyperspectral cameras, so take this with a grain of salt, but, I was told that they use a tomographic imaging principle:  First a cylindrical lens "integrates" the 2D image into a line, then a grating spreads that back out so you have one dimension wavelength, the other dimension integrated brightness.  Then you rotate the whole assembly and collect frames at various angles.  Apply the projection slice theorem (like a CT machine) and voila, 3D image of X by Y by wavelength.

--- End quote ---

Maybe some do that. However, the ones I saw do it by having a filter that covers the CCD chip itself: an optical bandpass filter whose pass wavelength varies in one direction, so each column of pixels represents a different wavelength of light.
