Why do CCD sensors use color filters instead of diffracting the light?
ELS122:
--- Quote from: Kleinstein on May 13, 2024, 10:05:02 am ---Using diffraction would need light from a defined direction. The dispersion in glass is not very strong and it would thus only work for a rather narrow angle. There is not that much wrong with filters.
At the higher end they used to have filters that are not absorbing but reflecting part of the light. This way they could split the light onto 3 CCDs for the separate color parts. The optics and alignment are still expensive and large. So if not absolutely needed, a single chip with a filter pattern is preferred.
--- End quote ---
The light falling on the CCD IS from a defined direction.
There is a ton wrong with filters, mainly that they block out light, reducing the number of captured photons and increasing noise.
I'm just suggesting something like the 3CCD system, but built within the same CCD.
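As a rough back-of-envelope illustration of that photon cost (purely illustrative numbers, assuming shot-noise-limited pixels and a filter that passes about a third of the incident light):

--- Code: ---
import math

# Rough shot-noise comparison (illustrative numbers only): an absorptive
# colour filter passing ~1/3 of the photons vs. an ideal splitter that
# routes every photon to some pixel.
photons_unfiltered = 30000           # photons per pixel per exposure (assumed)
filter_transmission = 1 / 3          # crude Bayer-filter approximation (assumed)

for label, n in [("ideal splitter", photons_unfiltered),
                 ("absorptive filter", photons_unfiltered * filter_transmission)]:
    snr = math.sqrt(n)               # shot-noise-limited SNR = N / sqrt(N) = sqrt(N)
    print(f"{label:17s} N = {n:7.0f}  SNR = {snr:5.1f}")
# The filtered pixel loses a factor of sqrt(3) ~ 1.7 in shot-noise SNR.
--- End code ---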
ELS122:
--- Quote from: Someone on May 13, 2024, 10:22:31 am ---
--- Quote from: ELS122 on May 13, 2024, 09:42:55 am ---It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation for different wavelength ranges.
--- End quote ---
In theory yes, it's an emerging technology with early demonstrations:
https://doi.org/10.1186/s40580-023-00372-8
"Recent advancements of metalenses for functional imaging"
--- End quote ---
Thanks, that's an awesome paper!
Berni:
--- Quote from: ELS122 on May 13, 2024, 11:01:51 am ---The light falling on the CCD IS from a defined direction.
There is a ton wrong with filters, mainly that they block out light, reducing the number of captured photons and increasing noise.
--- End quote ---
In most cameras not so much. The lens does not just throw photons at the sensor perfectly parallel.
Light takes lots of different paths through the lens and eventually gets focused down into sharp points on the image sensor. So light leaves the whole aperture of the camera's last lens element but travels in directions that make it converge onto the same spot, where it sums up into a bright point. You therefore have light hitting the sensor at a range of angles (even when you consider just a single image-sensor pixel). What is worse, these angles can change depending on what lens you use, or even with the same lens in different configurations (changing aperture, zoom, focus, etc.).
The camera where you do get nice parallel light from a single direction is a narrow-angle, zoomed-in pinhole camera.
In such a pinhole camera you could likely create a color image sensor by taking a B/W image sensor and placing a diffraction grating and pixel mask a precise distance away from the sensor. This would split each pixel out into a rainbow that shines on the various pixels. However, pinhole cameras are absolutely atrocious in low-light conditions because they collect so little light through their tiny aperture, so it is not that useful. (Though if you slowly swept the mask, you could make a pretty cool megapixel spectrometer with it.)
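To put rough numbers on that grating idea (all values below are made-up assumptions, using the plane-wave grating equation d*sin(theta) = m*lambda at normal incidence):

--- Code: ---
import math

# First-order diffraction angles for a transmission grating at normal
# incidence: d * sin(theta) = m * lambda. All numbers are illustrative
# assumptions, not a real sensor design.
pitch_um = 1.0          # grating period d (assumed)
standoff_um = 50.0      # grating-to-sensor distance (assumed)
pixel_um = 4.0          # image-sensor pixel pitch (assumed)

for name, wl_um in [("blue 450 nm", 0.45), ("green 550 nm", 0.55), ("red 650 nm", 0.65)]:
    theta = math.asin(wl_um / pitch_um)        # first order, m = 1
    offset = standoff_um * math.tan(theta)     # lateral shift at the sensor
    print(f"{name}: theta = {math.degrees(theta):4.1f} deg, "
          f"offset = {offset:4.1f} um (~{offset / pixel_um:.1f} pixels)")
# Blue and red land roughly 4 pixels apart with these numbers, but only
# if the incoming light really is collimated and normal to the grating.
--- End code ---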
tooki:
--- Quote from: ELS122 on May 13, 2024, 11:01:51 am ---The light falling on the CCD IS from a defined direction.
--- End quote ---
Not to the precision likely needed for diffraction-based filtering. And even that only in a camera with a permanently attached fixed-focus, fixed-aperture, prime lens.
With any other kind of lens (focusable, zoom, with an iris), never mind a system camera with interchangeable lenses, light enters at various angles depending on the situation.
Back when DSLRs were new, the sensors were more sensitive to angle than film (or modern camera sensors), which is why some “digital” lenses were released, whose optics projected the image onto the sensor at a narrower angle than equivalent film lenses. This applies specifically to wide-angle lenses, since telephoto lenses project at a very narrow angle anyway.
switchabl:
Even if you somehow solve the angle issue (say you create a new camera system where all the lenses have the exit pupil at a fixed distance and you manage to design a micro-lens array that properly collimates the light at every pixel location), there are more aspects to consider.
- Luther condition: ideally the sensitivity curves for each pixel colour should be linear combinations of the so-called colour matching functions (more or less the colour response of a human observer). This ensures that sources with different spectra but the same colour result in the same RGB values. There is usually some room for compromise, but just cutting the spectrum into three parts is far from ideal.
- crosstalk: in theory, any linear combination will allow you to calculate the correct RGB values. In practice, if there is too much overlap in the spectral response, the inversion becomes very sensitive to small changes. The result is that colour noise is amplified significantly (see the small numeric sketch after this list). AFAIK this has been one of the problems with the Foveon design.
- efficiency: with gratings, light is diffracted into different diffraction orders (at different angles) but you can generally only use one. It is possible to concentrate 80%-90% into a single order at the design wavelength with blazed or VPH gratings. But at the edges of the visible spectrum that might well drop below 50%.
- cost, obviously
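To illustrate the crosstalk point above with a toy example (the response matrices are invented, not measured data), the condition number of the spectral mixing matrix gives a rough idea of how much the inversion amplifies noise:

--- Code: ---
import numpy as np

# Toy colour-crosstalk illustration: rows are the R/G/B channel responses,
# columns are red/green/blue light. Values are invented, just to show how
# spectral overlap inflates noise when the matrix is inverted to recover RGB.
well_separated = np.array([[0.9, 0.1, 0.0],
                           [0.1, 0.9, 0.1],
                           [0.0, 0.1, 0.9]])
heavy_overlap = np.array([[0.6, 0.5, 0.3],
                          [0.5, 0.6, 0.5],
                          [0.3, 0.5, 0.6]])

for name, m in [("well separated", well_separated), ("heavy overlap", heavy_overlap)]:
    # Condition number ~ worst-case factor by which pixel noise is
    # amplified when solving for the true colour values.
    print(f"{name:15s} condition number = {np.linalg.cond(m):6.1f}")
--- End code ---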
--- Quote from: Someone on May 13, 2024, 10:22:31 am ---
--- Quote from: ELS122 on May 13, 2024, 09:42:55 am ---It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation for different wavelength ranges.
--- End quote ---
In theory yes, it's an emerging technology with early demonstrations:
https://doi.org/10.1186/s40580-023-00372-8
"Recent advancements of metalenses for functional imaging"
--- End quote ---
Functional metalenses are a very interesting concept but as far as I can tell, we are still several scientific breakthroughs away from those being viable in practice.
Looking specifically at the colour router in section 4.1.2, all the results are with laser sources. That's no accident: the design is tailored to plane waves, normal incidence, circular polarisation and three specific wavelengths (and nothing in between). Making broadband metalenses that work across the whole visible spectrum is hard and currently comes at the cost of massively reduced efficiency.
I don't want to sound too negative, I believe there is a lot of potential there. But I think we might see the first real-world applications in optical sensing or display technology, where we have a lot more control over the light sources than in imaging.
In the medium term, the best chance to potentially improve on Bayer sensors is probably still Foveon. But unless someone invests a lot of money to really make a state-of-the-art version, we might never know.