
What is a 500W HD camera and how can it be 500 million pixels?


jmelson:

--- Quote from: Circlotron on November 08, 2021, 11:08:47 pm ---And 3 separate sensors would allow for larger pixels for the same sensor size, therefore better s/n and low light performance presumably.

--- End quote ---
The dichroic image splitters use ALL the incoming light.  The splitters send the red photons to one sensor, the blue ones to another, and the rest must be the green ones going to the 3rd sensor.  That is a lot better than blocking most of the light with filters, but more expensive.
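As a rough back-of-the-envelope illustration (the photon count per pixel site and the one-third Bayer transmission are idealised assumptions, not measurements of any real camera):

--- Code: ---
# Rough illustration of why a 3-sensor dichroic split gathers more light per
# pixel site than a Bayer-filtered single sensor. Numbers are idealised
# assumptions, not measurements of any real camera.

photons_at_pixel_site = 9000   # photons arriving at one pixel site per frame (assumed)

# Bayer: each site sits behind a single colour filter that passes roughly
# one third of the visible band; the rest is absorbed.
bayer_detected = photons_at_pixel_site / 3

# Dichroic prism: the same light is split by wavelength, so (ideally) every
# photon lands on one of the three sensors and none are thrown away.
dichroic_detected = photons_at_pixel_site

print(f"Bayer site detects   ~{bayer_detected:.0f} photons")
print(f"3-chip site detects  ~{dichroic_detected:.0f} photons (spread over R/G/B sensors)")
print(f"Throughput advantage ~{dichroic_detected / bayer_detected:.1f}x")
--- End code ---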
Jon

Someone:

--- Quote from: tooki on November 08, 2021, 10:23:38 pm ---
--- Quote from: Someone on November 08, 2021, 09:45:18 pm ---
--- Quote from: tooki on November 08, 2021, 06:38:39 pm ---
--- Quote from: Siwastaja on November 08, 2021, 05:30:36 pm ---In earlier broadcast video cameras (I don't know the situation today), it was common to optically split the image onto three separate monochrome CCDs. When the registration was exactly right, these systems really did have each pixel site registering all three colors. I bet this is quite rare today, as CCDs (or, more often now, CMOS sensors) can simply be manufactured with a ridiculously high number of pixels, like 20 million. The resolution is huge even if you need to interpolate colors. It would be nearly impossible to split the beam and register three separate CCDs to that accuracy.

--- End quote ---
Actually, true broadcast cameras are still 3-sensor devices, albeit CMOS these days and not CCD.

This article sums it up nicely: https://www.redsharknews.com/production/item/6169-whatever-happened-to-the-3-chip-camera

This is the 3-CMOS 8K Sony camera they mention: https://pro.sony/en_GB/products/4k-and-hd-camera-systems/uhc-8300

--- End quote ---
Like a "true" scotsman?
EBU disagree:
https://tech.ebu.ch/cameratests
They are happy to use measured performance as the benchmark rather than the technical means used to achieve it. While it has been difficult to match three-sensor (often CCD) performance with a single sensor, that gap is closing rapidly.
--- End quote ---
Didn’t bother to read the article I linked, didja!? Sensor performance isn’t the issue. It’s lenses. You can’t make the insane zoom lenses used in broadcast for big sensors. They’re already $50K–300K lenses; versions covering the large sensors that would provide similar quality would be prohibitively large and expensive.

So the EBU doesn’t “disagree” because sensor performance was not something I even addressed.

Note also that broadcast ≠ cinema.
--- End quote ---
You can pipe a broadcast lens (B4-mount etc.) to arbitrary sensors, with some fancy optics as I specifically mentioned. Either a 3-chip or a single-sensor design needs some optical processing to make it all work properly, and along the way through [optical processing block] the image on the sensor can be scaled to match the lens interface specification.

The EBU says broadcast is about the measurable performance/quality of the system, not how you achieve it (though they do go on to list specifically how that is achieved in current examples).
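To put a rough number on what that "fancy optics" costs, here is an illustrative back-of-the-envelope sketch (the image-circle diagonals are nominal figures I'm assuming for a 2/3" B4 lens and a Super 35-sized single sensor, not specifications of any particular adapter):

--- Code: ---
import math

# Illustrative only: nominal image-circle diagonals (mm) assumed here.
b4_image_circle  = 11.0    # roughly what a 2/3" B4 broadcast lens is designed to cover
s35_image_circle = 31.0    # roughly the diagonal of a Super 35 single sensor

# An optical expander between lens and sensor must magnify the image circle...
magnification = s35_image_circle / b4_image_circle

# ...and spreading the same light over magnification^2 more area costs brightness.
light_loss_stops = math.log2(magnification ** 2)

print(f"Required magnification: {magnification:.1f}x")
print(f"Approximate light loss: {light_loss_stops:.1f} stops")
--- End code ---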


--- Quote from: jmelson on November 09, 2021, 12:46:03 am ---
--- Quote from: Circlotron on November 08, 2021, 11:08:47 pm ---And 3 separate sensors would allow for larger pixels for the same sensor size, therefore better s/n and low light performance presumably.
--- End quote ---
The dichroic image splitters use ALL the incoming light.  The splitters send the red photons to one sensor, the blue ones to another, and the rest must be the green ones going to the 3rd sensor.  That is a lot better than blocking most of the light with filters, but more expensive.
Jon
--- End quote ---
Yes, eliminating Bayer loss is the major difference in performance. Some groups are trying to do that at the sensor level now.


--- Quote from: Circlotron on November 08, 2021, 11:08:47 pm ---And 3 separate sensors would allow for larger pixels for the same sensor size, therefore better s/n and low light performance presumably.
--- End quote ---
Changing the size of the sensor doesn't change the amount of light (as that is set by the lens), so larger pixels aren't always better.

Siwastaja:

--- Quote from: Someone on November 09, 2021, 12:55:30 am ---You can pipe a broadcast lens (B4-mount etc) to arbitrary sensors
--- End quote ---

Of course you can, but then you are just using part of the sensor area, and the end result is equivalent to using the smaller sensor (one meant for the lens!) to begin with.

A modest sensor size has an obvious set of both advantages and disadvantages. Almost ridiculously long telephoto zoom lenses are one of the advantages. The so-called 2/3" sensor size has been a sweet spot for television work including news, sports etc. And it's pretty obvious that 3CCD × 2/3" performs better than 1CCD × 2/3" in terms of resolution and SNR. If you want the same improvement by just upping the sensor size to compensate for the light and resolution loss of the Bayer filter, then you need to make the lenses draw a larger image area and increase the focal length to match the same field of view. This telephoto zoom lens is the costly, large and heavy part.

This is a classic optimization case, moving the complexity, weight and cost between the camera body and lenses.
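A minimal sketch of that scaling (the focal length, f-number and sensor diagonals are assumed round numbers, purely for illustration):

--- Code: ---
# Sketch: keeping the same field of view and f-number while moving from a
# 2/3" broadcast sensor to a full-frame sensor. All numbers are illustrative.

sensor_diag_23  = 11.0    # mm, approx. 2/3" sensor diagonal
sensor_diag_ff  = 43.3    # mm, approx. 36x24 mm sensor diagonal
focal_length_23 = 100.0   # mm, assumed telephoto setting on the 2/3" camera
f_number        = 2.8     # assumed constant aperture

scale = sensor_diag_ff / sensor_diag_23          # ~3.9x

focal_length_ff = focal_length_23 * scale        # same angle of view needs ~390 mm
pupil_23        = focal_length_23 / f_number     # ~36 mm entrance pupil
pupil_ff        = focal_length_ff / f_number     # ~140 mm entrance pupil

print(f"Focal length:   {focal_length_23:.0f} mm -> {focal_length_ff:.0f} mm")
print(f"Entrance pupil: {pupil_23:.0f} mm -> {pupil_ff:.0f} mm")
# Glass volume (and roughly weight/cost) scales around the cube of the linear size.
print(f"Rough volume/weight factor: ~{scale**3:.0f}x")
--- End code ---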

Zero999:
How small can the pixels be? I would have thought that diffraction would limit the smallest pixel size to half the wavelength of the longest wavelength of light it needs to respond to. Assuming that would be 700 nm, the pixels can be no smaller than 350 nm, so a 500M pixel CCD would need to be larger than 350×10⁻⁹ × √(500×10⁶) m = 7.83 mm, i.e. 7.83 mm by 7.83 mm.
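The same arithmetic as a minimal sketch (assuming a square sensor with square pixels at that pitch):

--- Code: ---
import math

# Reproducing the arithmetic above: square sensor, square pixels at a
# 350 nm pitch (half of a 700 nm red wavelength), 500 megapixels.
pixel_pitch = 350e-9          # metres
pixel_count = 500e6

pixels_per_side = math.sqrt(pixel_count)                # ~22,361 pixels
edge_length_mm  = pixel_pitch * pixels_per_side * 1e3   # metres -> mm

print(f"Minimum sensor edge: {edge_length_mm:.2f} mm per side")   # ~7.83 mm
--- End code ---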

Siwastaja:
Getting even remotely decent SNR requires counting many thousands of photons for each video frame, even assuming 100% photon-to-electron conversion efficiency and perfect readout and ADC. And if this is for video, you only have a couple of dozen milliseconds to collect them.
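For scale, photon shot noise alone limits the SNR to √N for N collected photons, so reaching 40 dB (100:1) needs on the order of ten thousand photons per pixel per frame. A minimal sketch, with an assumed photon arrival rate:

--- Code: ---
import math

# Shot-noise-only estimate for one pixel in one video frame.
# The photon flux is an assumed illustrative figure, not a measurement.
exposure_s        = 0.020      # ~20 ms, typical for 50 fps video
photon_rate_per_s = 500_000    # assumed photons/s reaching one pixel site

photons = photon_rate_per_s * exposure_s   # 10,000 photons collected
snr     = math.sqrt(photons)               # shot-noise-limited SNR
snr_db  = 20 * math.log10(snr)

print(f"Photons collected: {photons:.0f}")
print(f"Best-case SNR:     {snr:.0f}:1 ({snr_db:.0f} dB)")
--- End code ---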

You can bin pixels together (even at the analog "charge" level) to get more sensitivity at the expense of resolution (i.e., make the effective pixel size a runtime parameter), but if you are going to enable binning almost all the time, then it's a net-negative compromise.
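A minimal sketch of 2×2 binning done digitally (real analog charge binning happens on-chip before readout, but the resolution-for-sensitivity trade is the same):

--- Code: ---
import numpy as np

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of pixels into one output pixel.

    Quadruples the signal per output pixel at the cost of halving the
    resolution in each dimension (digital equivalent of charge binning).
    """
    h, w = frame.shape
    return frame[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).sum(axis=(1, 3))

# Example: a noisy 4x4 "sensor" readout becomes a 2x2 binned image.
rng = np.random.default_rng(0)
raw = rng.poisson(lam=50, size=(4, 4))   # simulated photon counts per pixel
print(bin_2x2(raw))
--- End code ---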

This sets the practical pixel size: small enough that you get a visually noise-free image in a medium amount of light. Low-light performance can then be boosted by pixel binning (analog or digital), longer exposures, noise-reduction algorithms, etc. But if you need to rely on those in decent lighting conditions, you made the pixels too small, and the high resolution is just wasted while still bringing all the downsides: reduced fill factor, readout delays / bandwidth requirements, more expensive ADCs and processing, and so on.

Do note that in use cases where high resolutions are desired, it's quite normal to require good image quality overall. Some 20–50 megapixels already practically "need" the classic 36x24 mm sensor size, although you could theoretically do something like 1000 megapixels.
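For a sense of scale, the pixel pitch for square pixels filling a 36x24 mm frame works out as follows (simple geometry, no claim about any particular sensor):

--- Code: ---
import math

# Pixel pitch for square pixels on a 36 x 24 mm sensor at a few resolutions.
sensor_w_mm, sensor_h_mm = 36.0, 24.0
area_um2 = sensor_w_mm * sensor_h_mm * 1e6      # sensor area in square microns

for megapixels in (20, 50, 1000):
    pitch_um = math.sqrt(area_um2 / (megapixels * 1e6))
    print(f"{megapixels:>5} MP -> ~{pitch_um:.2f} um pixel pitch")
--- End code ---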
