Author Topic: Why do CCD sensors use color filters instead of diffracting the light?  (Read 453 times)


Offline ELS122 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 931
  • Country: 00
It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation to different wavelength ranges.
The only problem I see is the gap between the photodiode elements leading to gaps in the spectral response. But I reckon you could mitigate that by using some sort of focusing element under the diffraction layer to focus onto the photodiodes.
« Last Edit: May 13, 2024, 09:45:46 am by ELS122 »
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14317
  • Country: de
Using diffraction would need light from a defined direction. The dispersion in glass is not very strong, so it would only work over a rather narrow angle. There is not that much wrong with filters.
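To put rough numbers on that angle problem, here is a back-of-the-envelope sketch in Python (the grating pitch and the grating-to-photodiode spacing are assumed example values, not taken from any real sensor):

```python
import numpy as np

# Assumed example geometry: a first-order transmission grating sitting
# just above the photodiodes (values are illustrative, not from a datasheet).
pitch = 2e-6   # grating pitch d = 2 um
gap   = 5e-6   # grating-to-photodiode distance = 5 um

for wl in (450e-9, 550e-9, 650e-9):      # blue, green, red
    theta = np.arcsin(wl / pitch)        # grating equation: d*sin(theta) = m*lambda, m = 1
    print(f"{wl*1e9:.0f} nm lands {gap*np.tan(theta)*1e6:.2f} um to the side")

# Blue to red ends up separated by only ~0.6 um, less than a typical pixel.
# Worse, tilting the incoming light by 5 deg shifts sin(theta) by ~0.087,
# almost the entire 450-650 nm spread (200 nm / 2000 nm = 0.1), so the whole
# rainbow moves by about one color channel.
```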

At the higher end they used to have filters that reflect rather than absorb part of the light. This way they could split the light to three CCDs for the separate color parts. The optics and alignment are still expensive and large, so if not absolutely needed, a single chip with a filter pattern is preferred.
 
The following users thanked this post: SeanB

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4551
  • Country: au
    • send complaints here
It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation to different wavelength ranges.
In theory yes; it's an emerging technology with early demonstrations:
https://doi.org/10.1186/s40580-023-00372-8
"Recent advancements of metalenses for functional imaging"
 
The following users thanked this post: ELS122

Online coppice

  • Super Contributor
  • ***
  • Posts: 8718
  • Country: gb
Things like the Foveon X3 sensor seemed like the way to go for a while, as they have the three colour sensors stacked over the entire sensor panel, and so have some of the qualities of the 3-sensor + dichroic-mirror systems used in large professional cameras, in a more compact form. Somehow they never seemed to work out commercially.
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4974
  • Country: si
Diffraction is too angle sensitive and difficult to implement right on the image sensor die.

Sure, filters do throw away 2/3 of the light, but it is not that bad; there have been big advancements in making image sensors more sensitive in low-light conditions. The big advantage of filters is that they are really easy to put on the pixels using existing chip manufacturing tech.
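A quick shot-noise sketch shows why losing 2/3 of the light is survivable (idealised Python; the photon count is an arbitrary example, and read noise and quantum efficiency are ignored):

```python
import math

# Idealised shot-noise comparison. The photon count is an arbitrary example;
# read noise and quantum efficiency are ignored.
photons = 9000   # photons that would reach a filterless pixel
for label, n in (("no filter", photons), ("behind Bayer filter", photons / 3)):
    print(f"{label:20s} N = {n:5.0f}  shot-noise SNR = sqrt(N) = {math.sqrt(n):.0f}")

# 95 vs 55: throwing away 2/3 of the photons costs a factor of sqrt(3), about
# 1.7x in SNR, i.e. under one stop. Noticeable, but not catastrophic.
```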

As said above, if you want to be efficient and not throw away light, it is easier to use an RGB beam-splitting cube that directs the R, G, and B components of the light into three separate B/W image sensors. But the size of the optics and the expense of aligning all of this mean it is only really used in some big professional-grade cameras. This method used to be really popular back in the day when cameras still used vidicon tubes as image sensors (they couldn't use a color filter).

Interestingly, LCD monitors did find a way around this filter problem. The fancy new quantum-dot LCDs replace the classical RGB color filter with red- and green-fluorescing materials. Instead of creating a white backlight and filtering it down to RGB, they use a blue backlight and then use fluorescence inside the pixels to convert the blue light to red and green. This wastes much less light as a result.
 

Offline ELS122 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 931
  • Country: 00
Using diffraction would need light from a defined direction. The dispersion in glass is not very strong, so it would only work over a rather narrow angle. There is not that much wrong with filters.

At the higher end they used to have filters that reflect rather than absorb part of the light. This way they could split the light to three CCDs for the separate color parts. The optics and alignment are still expensive and large, so if not absolutely needed, a single chip with a filter pattern is preferred.

The light falling on the CCD IS from a defined direction.
There is a ton wrong with filters, mainly that they block out light, reducing captured photons and increasing noise.

I'm just suggesting something like the 3-CCD system, but built within the same CCD.

 

Offline ELS122 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 931
  • Country: 00
It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation to different wavelength ranges.
In theory yes; it's an emerging technology with early demonstrations:
https://doi.org/10.1186/s40580-023-00372-8
"Recent advancements of metalenses for functional imaging"

Thanks, that's an awesome paper!
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4974
  • Country: si
The light falling on the CCD IS from a defined direction.
There is a ton wrong with filters, mainly that they block out light, reducing captured photons and increasing noise.

In most cameras not so much. The lens does not just throw photons at the sensor perfectly parallel.

Light takes lots of different paths through the lens and eventually gets focused down into sharp points on the image sensor. Light arrives from the whole aperture of the camera's last lens element, travelling in directions that make it converge onto the same spot, where it sums up into a bright point. So light hits the sensor at a range of angles, even when you consider just a single image sensor pixel. What is worse, these angles can change depending on what lens you use, or even with the same lens in different configurations (changing aperture, zoom, focus, etc.).
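To get a feel for how big that range of angles is, here is a rough Python sketch using only the f-number (thin-lens approximation; the f-numbers are just examples):

```python
import math

# Thin-lens approximation: marginal rays from the edge of the aperture reach
# a pixel at about atan(1 / (2 * N)) off-axis for f-number N. Example values.
for f_number in (1.4, 2.8, 8.0):
    half_angle = math.degrees(math.atan(1 / (2 * f_number)))
    print(f"f/{f_number}: rays arrive up to ~{half_angle:.0f} deg off-axis")

# f/1.4 gives ~20 deg, f/2.8 ~10 deg, f/8 ~4 deg: the angular spread changes
# every time the aperture does, which a diffraction splitter tuned for one
# geometry cannot tolerate.
```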

The one camera where you do get nice parallel light from a single direction is a narrow-angle, zoomed-in pinhole camera.

In such a pinhole camera you could likely create a color image sensor by taking a B/W image sensor and placing a diffraction grating and pixel mask a precise distance away from the sensor. This would split each pixel out into a rainbow that shines on the various pixels. However, pinhole cameras are absolutely atrocious in low-light conditions because they collect so little light through their tiny aperture, so it is not that useful (though if you slowly swept the mask you could make a pretty cool megapixel spectrometer with it).
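For the spectrometer idea, a sketch of the geometry (all values are made-up examples, chosen only to show the scale):

```python
import numpy as np

# Made-up example geometry: a fine grating a few mm above a B/W sensor,
# mapping each wavelength of the first diffraction order to a pixel column.
pitch      = 2e-6    # assumed grating pitch: 2 um
distance   = 10e-3   # assumed grating-to-sensor distance: 10 mm
pixel_size = 5e-6    # assumed pixel pitch: 5 um

for wl in np.arange(400e-9, 701e-9, 100e-9):
    theta  = np.arcsin(wl / pitch)                  # first-order grating equation
    column = distance * np.tan(theta) / pixel_size  # landing position in pixels
    print(f"{wl*1e9:.0f} nm -> pixel column ~{column:.0f}")

# 400-700 nm spreads over ~340 pixel columns here, i.e. roughly 1 nm per
# column, which is why a slowly swept mask over a megapixel sensor could make
# a respectable spectrometer.
```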
« Last Edit: May 13, 2024, 11:20:23 am by Berni »
 
The following users thanked this post: Wolfram, ebastler

Online tooki

  • Super Contributor
  • ***
  • Posts: 11673
  • Country: ch
The light falling on the CCD IS from a defined direction.
Not to the precision likely needed for diffraction-based filtering. And even that only in a camera with a permanently attached fixed-focus, fixed-aperture, prime lens.

With any other kind of lens (focusable, zoom, with an iris), never mind a system camera with interchangeable lenses, light enters at various angles depending on the situation.

Back when DSLRs were new, the sensors were more sensitive to angle than film (or modern camera sensors), which is why some “digital” lenses were released, whose optics projected the image onto the sensor at a narrower angle than equivalent film lenses. This applies specifically to wide-angle lenses, since telephoto lenses project at a very narrow angle anyway.

 

Offline switchabl

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: de
Even if you somehow solve the angle issue (say you create a new camera system where all the lenses have the exit pupil at a fixed distance and you manage to design a micro-lens array that properly collimates the light at every pixel location), there are more aspects to consider.

- Luther condition: ideally, the sensitivity curves for each pixel colour should be linear combinations of the so-called colour-matching functions (more or less the colour response of a human observer). This ensures that sources with different spectra but the same colour produce the same RGB values. There is usually some room for compromise, but just cutting the spectrum into three parts is far from ideal.
- crosstalk: in theory, any linear combination will allow you to calculate the correct RGB values. In practice, if there is too much overlap in the spectral responses, the inversion becomes very sensitive to small changes, and colour noise is amplified significantly (see the toy sketch after this list). AFAIK this has been one of the problems with the Foveon design.
- efficiency: with gratings, light is diffracted into different diffraction orders (at different angles), but you can generally only use one. Blazed or VPH gratings can concentrate 80-90% of the light into a single order at the design wavelength, but at the edges of the visible spectrum that may well drop below 50%.
- cost, obviously
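To illustrate the crosstalk point with a toy example (the two mixing matrices below are invented for illustration, not measured sensor data):

```python
import numpy as np

# Toy model: raw channel values = M @ true RGB. Both matrices are invented
# for illustration; they are not measured from any sensor.
narrow  = np.array([[1.0, 0.1, 0.0],
                    [0.1, 1.0, 0.1],
                    [0.0, 0.1, 1.0]])   # well-separated spectral responses
overlap = np.array([[1.0, 0.7, 0.3],
                    [0.7, 1.0, 0.7],
                    [0.3, 0.7, 1.0]])   # heavily overlapping responses

for name, M in (("narrow", narrow), ("overlapping", overlap)):
    # The condition number bounds how much relative error in the raw values
    # can grow when M is inverted to recover RGB.
    print(f"{name:12s} condition number ~ {np.linalg.cond(M):.0f}")

# Roughly 1 vs 14: with strong spectral overlap, the same sensor noise comes
# out an order of magnitude larger after colour correction, i.e. colour noise.
```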

It would seem using diffraction instead of a Bayer mask would be way more efficient and lead to less noise, as well as allowing easier adaptation to different wavelength ranges.
In theory yes; it's an emerging technology with early demonstrations:
https://doi.org/10.1186/s40580-023-00372-8
"Recent advancements of metalenses for functional imaging"

Functional metalenses are a very interesting concept but as far as I can tell, we are still several scientific breakthroughs away from those being viable in practice.

Looking specifically at the colour router in section 4.1.2, all the results are with laser sources. That's no accident: the design is tailored to plane waves, normal incidence, circular polarisation, and three specific wavelengths (and nothing in between). Making broadband metalenses that work across the whole visible spectrum is hard and currently comes at the cost of massively reduced efficiency.

I don't want to sound too negative; I believe there is a lot of potential there. But I think we might see the first real-world applications in optical sensing or display technology, where we have a lot more control over the light sources than in imaging.

In the medium term, the best chance to potentially improve on Bayer sensors is probably still Foveon. But unless someone invests a lot of money to really make a state-of-the-art version, we might never know.
 
The following users thanked this post: Wolfram, tooki, newbrain

