Why do CCD sensors use color filters instead of diffracting the light?
ELS122:
It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation to different wavelength ranges.
The only problem I see is the gap between the photodiode elements leading to gaps in the spectral response. But I reckon you could mitigate that with some sort of focusing element under the diffraction layer to focus the light onto the photodiodes.
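For a rough sense of what the efficiency difference means for noise, here is a back-of-envelope sketch (illustrative Python; the photon count and the "1/3 passed by a Bayer filter" figure are assumed round numbers, and this assumes ideal shot-noise-limited pixels):

import math

# Illustrative photon budget for one pixel in one exposure (assumed value).
photons_incident = 30_000

# A Bayer-filtered pixel only passes roughly its own colour band (~1/3 of the light);
# an ideal dispersive/diffractive splitter would route nearly all photons to some pixel.
photons_bayer = photons_incident / 3
photons_ideal_split = photons_incident

# In the shot-noise limit, SNR scales as sqrt(number of detected photons).
snr_bayer = math.sqrt(photons_bayer)
snr_ideal = math.sqrt(photons_ideal_split)

print(f"Bayer pixel:  {photons_bayer:.0f} photons, SNR ~ {snr_bayer:.0f}")
print(f"Ideal split:  {photons_ideal_split:.0f} photons, SNR ~ {snr_ideal:.0f}")
print(f"SNR ratio: {snr_ideal / snr_bayer:.2f}  (sqrt(3) ~ 1.73)")

So even in the best case the gain from not filtering is about a sqrt(3) improvement in shot-noise SNR, not a factor of three.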
Kleinstein:
Using diffraction would need light from a defined direction. The dispersion in glass is not very strong, so it would only work over a rather narrow range of angles. There is not that much wrong with filters.
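To put a number on the angle problem, here is a small sketch (illustrative Python; the 2 µm grating pitch and the angles are assumed values for illustration, not from any real sensor):

import math

d = 2.0e-6        # assumed grating pitch, comparable to a small pixel pitch (m)
m = 1             # first diffraction order

def diffraction_angle(wavelength, incidence_deg):
    """First-order diffraction angle from the grating equation
    d*(sin(theta_m) - sin(theta_i)) = m*lambda."""
    s = m * wavelength / d + math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

for inc in (0.0, 5.0):   # normal incidence vs. a 5 degree tilt of the incoming light
    blue = diffraction_angle(450e-9, inc)
    red = diffraction_angle(650e-9, inc)
    print(f"incidence {inc:>4.1f} deg: blue at {blue:5.1f} deg, red at {red:5.1f} deg, "
          f"spread {red - blue:4.1f} deg")

With those assumed numbers the whole blue-to-red fan spans only about 6 degrees, so tilting the incoming light by just a few degrees shifts the colours onto the wrong pixels. That is the angle sensitivity being described.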
At the higher end they used to have filters that reflect rather than absorb part of the light. This way they could split the light to three CCDs, one for each color component. The optics and alignment are still expensive and large, so unless it is absolutely needed a single chip with a filter pattern is preferred.
Someone:
--- Quote from: ELS122 on May 13, 2024, 09:42:55 am ---It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation to different wavelength ranges.
--- End quote ---
In theory yes, it's an emerging technology with early demonstrations:
https://doi.org/10.1186/s40580-023-00372-8
"Recent advancements of metalenses for functional imaging"
coppice:
Things like the Foveon X3 sensor seemed like the way to go for a while, as they stack the three colour sensors over the entire sensor area, and so have some of the qualities of the 3-sensor + dichroic mirror systems used in large professional cameras in a more compact form. Somehow they never seemed to work out commercially.
Berni:
Diffraction is too angle sensitive and difficult to implement right on the image sensor die.
Sure, filters do throw away 2/3 of the light, but it is not that bad; there have been big advances in making image sensors more sensitive in low-light conditions. The big advantage of filters is that they are really easy to put on the pixels using existing chip manufacturing tech.
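To make the filter-pattern idea concrete, here is a minimal sketch (illustrative Python/NumPy, not any camera's actual pipeline) of an RGGB Bayer mosaic and the simplest possible bilinear demosaic that fills the two missing colours back in per pixel:

import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-colour image through an RGGB Bayer pattern:
    each pixel keeps only one of its three colour values."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True   # R on even rows, even columns
    mask[0::2, 1::2, 1] = True   # G
    mask[1::2, 0::2, 1] = True   # G
    mask[1::2, 1::2, 2] = True   # B
    for c in range(3):
        mosaic[mask[..., c]] = rgb[..., c][mask[..., c]]
    return mosaic, mask

def bilinear_demosaic(mosaic, mask):
    """Estimate the missing colours at each pixel by averaging the
    known samples of that colour in a 3x3 neighbourhood."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    for c in range(3):
        known = np.where(mask[..., c], mosaic, 0.0)
        weight = mask[..., c].astype(float)
        # 3x3 box sums via zero padding and slicing (normalized convolution).
        pk = np.pad(known, 1)
        pw = np.pad(weight, 1)
        sums = sum(pk[i:i + h, j:j + w] for i in range(3) for j in range(3))
        cnts = sum(pw[i:i + h, j:j + w] for i in range(3) for j in range(3))
        out[..., c] = sums / np.maximum(cnts, 1)
        out[..., c][mask[..., c]] = mosaic[mask[..., c]]  # keep known samples exact
    return out

# Tiny smoke test on random data (random pixels interpolate poorly;
# this just checks that the shapes and the pipeline run).
rgb = np.random.rand(8, 8, 3)
mosaic, mask = bayer_mosaic(rgb)
recovered = bilinear_demosaic(mosaic, mask)
print("mean abs error:", np.abs(recovered - rgb).mean())

Real cameras use much smarter demosaicing, but the basic idea is the same: one colour sample per pixel, and the rest interpolated from neighbours.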
As said above, if you want to be efficient and not throw away light, it is easier to use an RGB beam-splitting cube that directs the R, G and B components of the light onto three separate B/W image sensors. But the size of the optics and the expense of aligning it all mean this is only really used in some big professional-grade cameras. This method used to be really popular back in the day when cameras still used vidicon tubes as image sensors (they couldn't use a color filter).
Interestingly, LCD monitors did find a way around this filter problem. The fancy new quantum dot LCDs replace the classical RGB color filter with red- and green-fluorescing materials. Instead of creating a white backlight and filtering it down to RGB, they use a blue backlight and fluorescence inside the pixels to convert the blue light into red and green. This wastes much less light as a result.
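As a rough energy comparison (illustrative Python; the wavelengths are round numbers and this ignores the quantum-dot conversion yield and other panel losses):

# Energy per photon scales as 1/wavelength (E = h*c/lambda), so converting a blue
# photon to red or green only loses the Stokes-shift fraction of its energy,
# instead of the ~2/3 of white light absorbed by a classic RGB colour filter.
blue, green, red = 450e-9, 530e-9, 630e-9   # assumed emission wavelengths (m)

stokes_green = blue / green   # fraction of photon energy kept, blue -> green
stokes_red = blue / red       # fraction of photon energy kept, blue -> red

print(f"blue -> green keeps {stokes_green:.0%} of the photon energy")
print(f"blue -> red   keeps {stokes_red:.0%} of the photon energy")
print("classic RGB filter over a white backlight keeps roughly 33% per subpixel")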