Author Topic: EEVblog #947 - Chronos High Speed Camera Review  (Read 11826 times)


Offline Cliff Matthews

  • Supporter
  • ****
  • Posts: 1466
  • Country: ca
    • General Repair and Support
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #25 on: November 26, 2016, 12:49:36 pm »
Where would one get that many cams and tripods so easily? The resultant production might look cool though.. or jittery like a silent film?
That is routinely done in the professional world for "matrix-like" effects.
De-warping would mostly work for what he is proposing, but this many cameras will still be quite expensive and a major hassle to set up for typical YouTube-style videos.
Well I stand corrected..  :palm: Searches on "Pluraleyes vs DreamSync" bring up vids on nifty specialist camera arrays and such. I suppose one could make a plexiglass 24" cylindrical blast container with 8 HS cams looking down on bursting TO-220's and rocketing el-caps.. after a while though, it may get old. Also, Dave might develop grey hair waiting for GB's of sync-age to happen.
 

Offline BobC

  • Supporter
  • ****
  • Posts: 113
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #26 on: November 26, 2016, 06:19:24 pm »
We also had to consider what to put on top of the silicon.  We needed microlenses to improve light gathering, and we wanted the option of a Bayer mask for color imaging (we got triple resolution and quadruple sensitivity in monochrome, but almost everyone wanted color).

One more thing about Bayer filters: We also did research to see if we could tweak the sensor silicon and/or the Bayer color filters to get excellent color with less light lost in the filters. I attended a summer session at RIT's MCSL (Rochester Institute of Technology's Munsell Color Science Laboratory) to learn more about human color perception, the physics of light filtering and capture, and how color capture/rendering information is documented and shared (e.g., color profiles).  We wanted to optimize our entire imaging process from subject to camera to display.

In the late '90s Carver Mead started Foveon to pursue an interesting property of silicon: Light of different wavelengths is absorbed at different depths.  He designed the Foveon sensor to overcome the cruelest limitation of the Bayer filter matrix: The color information was captured with about 1/4 the resolution of the luminance (monochrome) information, causing significant spatial issues in color reconstruction.  The Foveon sensor achieved this by vertically stacking the RGB pixels, with no color filtering needed above the sensor pixel.
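To make that resolution penalty concrete, here's a minimal NumPy sketch (illustrative only, with a hypothetical `bayer_mosaic` helper, not code from any real camera pipeline) of sampling an image through an RGGB Bayer mosaic: each 2x2 cell keeps one red, two green and one blue sample, so red and blue are captured at 1/4 of the sensor's spatial resolution and must be interpolated back during demosaicing.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image through an RGGB Bayer pattern.

    Each 2x2 cell keeps one R, two G and one B sample; the discarded
    chroma resolution can only be interpolated back by demosaicing.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R: 1/4 of the pixels
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G: 1/2 of the pixels
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B: 1/4 of the pixels
    return mosaic
```

Counting samples per channel makes the point: on an h x w sensor, red and blue each get h*w/4 samples, green h*w/2, while a monochrome sensor would get h*w for luminance.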

In the Foveon sensor, the top photosite would capture "blueish" light (+ luminance), the middle "greenish", and the bottom "reddish".  Unlike traditional color filters, silicon color filtering is extremely messy, but the Foveon team employed some amazing math to get good color with an imaging sensitivity only slightly below that of a monochrome sensor.  However, their sensor had other issues that took literally a decade to overcome, and it was relegated to niche applications.

We were very interested in the Foveon math.  Could we modify our Bayer filter matrix to have it let more light through while still getting great color reconstruction?  Could we do so well enough to eliminate the need for having a monochrome sensor option?

After all, the biggest problem in high-speed photography is capturing as many photons as possible as quickly as possible, with minimal losses along the entire optical path.  So you start with the lens: There were several f/0.95 C-mount fixed-focal-length lenses on the market, and even zooms around f/1.3.

Then you add AR (anti-reflection) coatings to the glass covering the sensor in its package.  Multiple coatings are generally used, including UV and IR, though keeping losses down meant these coatings were very expensive.

You needed to put AR coatings on both sides of the glass: Light bouncing off the sensor (if any) must not be allowed to reflect into other pixels. The main visual effect of such reflection is reduced contrast.  (There are specific test patterns and exposure techniques that can reveal this behavior: You brightly illuminate a single pixel, then examine the scatter in neighboring pixels.)
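A minimal sketch of that single-bright-pixel scatter test, using a hypothetical `scatter_fraction` helper on an already-captured frame (real measurements would need careful control of the illumination and of lens flare):

```python
import numpy as np

def scatter_fraction(frame, bright_yx, radius=3):
    """Estimate stray-light scatter around a single brightly lit pixel.

    `frame` is a dark-background exposure with exactly one illuminated
    pixel at `bright_yx`; any signal in the surrounding neighborhood is
    light that bounced off the sensor or cover glass into other pixels.
    Returns the scattered energy as a fraction of the bright pixel's signal.
    """
    y, x = bright_yx
    peak = float(frame[y, x])
    patch = frame[max(0, y - radius):y + radius + 1,
                  max(0, x - radius):x + radius + 1]
    scattered = float(patch.sum()) - peak
    return scattered / peak
```

A higher fraction means more veiling glare, which shows up in real scenes as the reduced contrast described above.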

Finally you get to the sensor itself.  First come the microlenses.  These are generally spherical (well, hemispherical), but we developed a very slick way to make non-spherical microlenses that captured a tiny bit more light.

You carefully do all the above work to lose as few photons as possible, then you intentionally discard over half of them in the next layer, the Bayer filter matrix.  This is a knife to the heart of high-speed camera designers.  While we all love looking at vibrant color images, we also mourn the photons that died just before reaching the silicon.

We successfully created prototype sensors with Bayer color filters having wider passbands, and obtained terrific color reconstruction results.  The only problem was sensor yield: Our new filters had trouble surviving the rigors of semiconductor processing and packaging, and didn't survive at all well at the higher ends of our required operating temperature range.  It was chemistry that finally made us revert to traditional and proven color filter materials for the HG-100K sensor.

To complete the story of the photon's path, the photons surviving the Bayer filter penetrate the silicon surface, where they are absorbed.  This absorption can occur via multiple processes, but the end goal is always to have each photon dislodge one or more electrons.  These electrons are accumulated as a charge until the exposure ends, after which the electrons are read out as a current, which one or more transistors at the photosite boost before it is read from the sensor.

The physics of photon capture can get hairy at high speed, since not all absorbed photons cause "prompt" electron release.  This means the pixel must be forcibly reset before starting the next exposure, which can increase the dead-time between exposures, which in turn can decrease the overall frame rate.  There are several tricks that can be used to limit this behavior, but it is always present to some extent.

There are multiple sources of "stray" or "non-photo emission" electrons within a sensor pixel.  Most are due to thermal effects, which are seldom a problem in high-speed photography, but cause massive headaches for long-exposure astronomical photography (and hence their use of chilled sensors).

The most insidious source of stray electrons is the so-called "dark current", which is simple leakage into the photosite from nearby areas in the silicon, generally due to reverse-bias leakage.  It accumulates in the photosite from the end of the reset until the end of the next exposure.  If you put on the lens cap and take some images, you will see that none of the pixels are truly dark: All will have varying levels of electrons present even when no light is present.

Fortunately, it is straightforward to compensate for many stray electron sources by taking dark images just before each high-speed run, then processing the resulting video to remove the dark current's effect.  I strongly recommend doing this over about 2K FPS: Your processed video will look much nicer.  At slower speeds the photoelectrons will dominate over most noise sources, so there is little need for correction.
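A minimal sketch of that dark-frame correction, with hypothetical names (`dark_correct` is not a real camera API): average a stack of lens-cap frames into a master dark reference, then subtract it from every captured frame.

```python
import numpy as np

def dark_correct(frames, dark_frames):
    """Subtract a master dark frame from a high-speed capture.

    `frames` and `dark_frames` are (N, H, W) arrays of raw sensor counts.
    Averaging many lens-cap exposures beats down the shot noise of the
    dark current itself, leaving its fixed per-pixel pattern, which is
    then subtracted from each frame and clipped at zero.
    """
    master_dark = dark_frames.mean(axis=0)
    corrected = frames.astype(np.float64) - master_dark
    return np.clip(corrected, 0, None)
```

Taking the dark frames immediately before the run matters because the dark-current pattern drifts with sensor temperature.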

The dark current problem is enormously worse for IR cameras, which typically include an internal shutter that automatically activates every 20-60 seconds to quickly capture a fresh background image. You can see this happen when Dave uses his FLIR camera.

So, when you follow a photon from the light source to the subject, which reflects it toward the camera lens, through optically coated glass, then through microlenses and Bayer arrays, into the silicon, conversion to electrons, readout to an ADC, storage in memory, downloading from the camera, then post-processing and finally being routed to a PC's video card and to a monitor, where a backlight shines through more color filters before being passed or blocked by the LCD layer and polarizing filters, it should become clear that there are many places for things to go wrong, and many places where corrections can be obtained and applied.

It's not just the camera:  Only when the entire imaging process is optimized end-to-end can the best results be achieved.  The camera may be the most important part of the process, but it doesn't stand alone.  It takes time to learn how to properly use a high-speed imaging system.

But you can't do much about those awful losses in the Bayer filters.  Sigh.
 
The following users thanked this post: tesla500, PA0PBZ, edavid, amirm, lukier, albert22, Cliff Matthews, MT, MK14

Online lukier

  • Supporter
  • ****
  • Posts: 549
  • Country: gb
    • Homepage
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #27 on: November 26, 2016, 09:24:48 pm »
To complete the story of the photon's path, the photons surviving the Bayer filter penetrate the silicon surface, where they are absorbed.  This absorption can occur via multiple processes, but the end goal is always to have each photon dislodge one or more electrons.  These electrons are accumulated as a charge until the exposure ends, after which the electrons are read out as a current, which one or more transistors at the photosite boost before it is read from the sensor.

Was your sensor back-illuminated? In recent years this trick has started to enter the consumer/machine vision market (OmniBSI, Sony Pregius, etc.); I guess the manufacturing process has become more reliable. Increasing the fill factor improves sensitivity, and whenever I can I now prefer cameras with back-illuminated sensors (e.g. PointGrey Grasshopper 3).

AFAIR, before back-illumination, rolling shutter sensors were a bit better in that respect because they don't need as many transistors per pixel, so there was more space for the photodiode. But, well, it's rolling shutter :/

Bayer filters are a pain in general. Fun stuff happens when you have a camera with a Bayer filter but no IR blocking filter: IR leaks through the color filters, mostly the red one, and messes up debayering. Not the way to make a multi-spectral camera.

On the computing & math side, yes awesome stuff can be done as post-processing with some clever math, especially nowadays. From single pixel compressive sensing camera to computational photography (HDR) to 3D reconstruction & tracking. The problem often is the power consumption. The sensor might be low power, but then you'll need 200W GPU to do the math and produce the output.

Well, this is the case in my research environment. In production one could possibly get an ASIC made to do the math, which would reduce the power consumption a lot, but it's still not a cheap solution. Therefore I still think the best way is to get as much as possible done by the whole optical path, at the speed of light: at least don't lose information at this stage (hence sensitivity, pixel size, low dark current, an Airy disk that isn't too big, etc.; ADCs usually have pretty poor ENOB, so after digitisation it might be too late).
 

Offline BobC

  • Supporter
  • ****
  • Posts: 113
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #28 on: November 27, 2016, 09:38:03 am »
Was your sensor back-illuminated? In recent years this trick has started to enter the consumer/machine vision market (OmniBSI, Sony Pregius, etc.); I guess the manufacturing process has become more reliable. Increasing the fill factor improves sensitivity, and whenever I can I now prefer cameras with back-illuminated sensors (e.g. PointGrey Grasshopper 3).

We went to extreme lengths to avoid BSI for several reasons, chief among them being reduced yields and reduced ruggedness (not good for surviving 100G impacts).

BSI is typically used as a last resort to improve sensor performance when the silicon can't be pushed any further.  It was primarily used to enhance sensitivity when the next foundry node wasn't ready in time, but it has also become important as pixels shrink and the front-side aluminization causes too much local diffraction (implicit filtering).  There are additional reasons to use BSI in long-exposure scientific sensors, none of which apply to high-speed video.

On the computing & math side, yes awesome stuff can be done as post-processing with some clever math, especially nowadays. From single pixel compressive sensing camera to computational photography (HDR) to 3D reconstruction & tracking. The problem often is the power consumption. The sensor might be low power, but then you'll need 200W GPU to do the math and produce the output.

High-speed camera systems tend not to have these things called "power budgets".  The limitation is not how much power you can shove in, but how much you can take out.  And since most high-speed runs last on the order of single-digit seconds, heat removal really is not an issue: We are just fine with letting the camera get hot. And the post-processing PC cost is a minor line item next to the camera cost.

We didn't do much video processing in real-time.  Our preferred mode was to capture and download raw video, then correct it on the user's system.  However, that can become a hassle during setup and in high throughput environments, so our camera provided a real-time M-JPEG stream that included dark-current removal and single-pass color correction.  It was limited to 15 FPS full-frame (up to 60 FPS for sub-frame), which was more than good enough to help with things like camera setup.  It could also stream during full-speed capture by sub-sampling the capture stream from DRAM.

M-JPEG really kills video quality from an engineering perspective, but it makes a terrific "viewfinder mode".

Having a hardware JPEG encoder onboard allowed the camera to do some neat tricks by observing the content of the JPEG frames (which is much faster and easier than analyzing raw frames).  The camera firmware could auto-detect two important states: 1) Dark field (lens cap on) and 2) presence of a 24-patch Munsell Color-Checker. Once detected, the first would let us calculate a dark field reference from the raw video in DRAM, and the second would let us auto-generate color correction matrices.  Our desktop software would also perform the same functions, but in a much more sophisticated way.
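For the color-correction matrix part, a common approach (a sketch of the general technique, not the camera's actual desktop software, with hypothetical function names) is a least-squares fit mapping the camera's linear RGB for each chart patch onto the known reference values:

```python
import numpy as np

def fit_ccm(measured, reference):
    """Fit a 3x3 color-correction matrix by least squares.

    `measured` is (24, 3): the camera's linear RGB for each patch of a
    24-patch chart; `reference` is (24, 3): the known target values.
    Solves measured @ ccm ~= reference, so corrected = rgb @ ccm.
    """
    ccm, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return ccm

def apply_ccm(rgb, ccm):
    """Apply the fitted matrix to linear RGB values."""
    return rgb @ ccm
```

Production pipelines typically add refinements (white-point constraints, fitting in a perceptually weighted space), which is presumably part of what made the desktop version "much more sophisticated".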
 
The following users thanked this post: tesla500, lukier

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 11288
  • Country: gb
    • Mike's Electric Stuff
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #29 on: December 04, 2016, 02:30:00 am »
Back to the Chronos -
David - some things arising from the Amp Hour episode that I was wondering about.

You state that the camera's application and dev environment will be Open Source. How will this work when you move to the $5000 development environment you mentioned? Will it be partitioned, with the latter providing an API for an open-source user application to run on top of, or will you just include some binary blobs for the stuff produced with that package?

You mentioned possible user hacks to increase memory, but presumably that would require FPGA changes, or have you already broken out pins to support larger-capacity SODIMMs? Does the SODIMM format you use have a roadmap for larger capacities in future?

Re. the DDR3 memory controller IP stuff - you said it's basically an encrypted blob - how did you fix the problems you had with it? Was it parameter tweaking, or to do with how your HDL talked to it?
Did you have any interaction with Lattice over the issues? I'd definitely think it would be worth suggesting that you could document how you got it working in exchange for a free license... seems like the whole point of paying for stuff like this is that you shouldn't then have to dick around to get it to work!

Are you going to have some sort of seal to prevent dust getting to the sensor? - I'm visualising some sort of soft foam spacer ring sandwiched between the front case and sensor PCB

I also didn't see any provision for fixing a lanyard/wrist-strap - seems an obvious thing to have for security against dropping when handheld.

Seems like a useful feature would be the equivalent of oscilloscope segmented memory - to shoot a fixed or variable-length sequence on a trigger (with realtime timestamps) - is this implemented/planned?
Something else I have a recollection of seeing but not sure if it was yours or the FPS1000 - automatically detecting changes in the image to use as a trigger - I'd imagine this wouldn't be hard to do in the FPGA as you have plenty of space.

Lastly a hint for some cool footage - flames!
A while ago I shot some liquid lighter fuel burning in a metal tray at 1000FPS ( from memory about 6" field of view). This produced some really cool footage, which I regularly use to demo/test my various LED matrix stuff as the natural flow makes things like framerate jitter and other glitches very obvious. 
 


 
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline jonovid

  • Frequent Contributor
  • **
  • Posts: 706
  • Country: au
    • JONOVID
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #30 on: December 04, 2016, 02:39:58 am »
is that a C mount lens, none of this autofocus stuff.   ;D
Hobby of evil genius      basic knowledge of electronics
 

Offline tesla500

  • Regular Contributor
  • *
  • Posts: 145
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #31 on: January 07, 2017, 12:25:03 pm »
Oops, forgot to check this thread for new replies.

Back to the Chronos -
David - some things arising from the Amp Hour episode that I was wondering about.

You state that the camera's application and dev environment will be Open Source. How will this work when you move to the $5000 development environment you mentioned? Will it be partitioned, with the latter providing an API for an open-source user application to run on top of, or will you just include some binary blobs for the stuff produced with that package?
Everything required to compile the user application is open source and freely available. The user application uses features of the dev environment that are included as binaries in the camera's OS. If users want to modify those binaries (which deal with low-level operation of the video compression and display hardware mostly), they'll need to pay for the dev environment license.

You mentioned possible user hacks to increase memory, but presumably that would require FPGA changes, or have you already broken out pins to support larger-capacity SODIMMs? Does the SODIMM format you use have a roadmap for larger capacities in future?
Everything is there to support dual 16GB SODIMMs, but there's a bug in Lattice's controller that stops more than one SODIMM from working: it doesn't initialize the mode registers properly on the second SODIMM. Otherwise it seems to operate the memory fine; the second module is just "missing" because it's never been initialized. I hope to figure this out eventually, but getting the camera out is the priority.

Re. the DDR3 memory controller IP stuff - you said it's basically an encrypted blob - how did you fix the problems you had with it? Was it parameter tweaking, or to do with how your HDL talked to it?
Did you have any interaction with Lattice over the issues? I'd definitely think it would be worth suggesting that you could document how you got it working in exchange for a free license... seems like the whole point of paying for stuff like this is that you shouldn't then have to dick around to get it to work!
AFAIK I'm the only one who has a 64-bit wide DDR3 interface working on the ECP5; I suspect Lattice themselves don't even have it working yet, as they were clueless when I contacted them about the problem. Well, at least the outsourced support engineer in India was, even after contacting the USA office... I still haven't fully solved it, but the workaround is good enough for now. I may be a real dick and hold off telling anyone about the fix for now, as it's a significant competitive advantage: the next FPGA from any manufacturer that can handle this much RAM costs ~$160 vs $40 for the ECP5.

They've so far stonewalled all my requests for the source to try and fix it, even after offering to sign all the NDAs and promising to tell them about any solutions I find. All this even though it's so tied into the chip-specific hardware that you could never use it on any other FPGA.

Are you going to have some sort of seal to prevent dust getting to the sensor? - I'm visualising some sort of soft foam spacer ring sandwiched between the front case and sensor PCB
Yes, I'm getting some foam cut for that.

I also didn't see any provision for fixing a lanyard/wrist-strap - seems an obvious thing to have for security against dropping when handheld.
There's very limited room in the case, but I've added a bunch of mounting holes and will have holders that could be used for a neck or wrist strap.

Seems like a useful feature would be the equivalent of oscilloscope segmented memory - to shoot a fixed or variable-length sequence on a trigger (with realtime timestamps) - is this implemented/planned?
Something else I have a recollection of seeing but not sure if it was yours or the FPS1000 - automatically detecting changes in the image to use as a trigger - I'd imagine this wouldn't be hard to do in the FPGA as you have plenty of space.
Yes, segmented memory and image based trigger will be added. I recall Phantom cameras have an "Image based auto-trigger". The lawyers really got after them on that one, there's a big disclaimer (with no "don't show this again" checkbox) whenever you start the Phantom Control Software about not using the image auto trigger to trigger any external event that could hurt someone.
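For reference, one simple way such an image-based trigger could work (an illustrative sketch, not the Chronos implementation): fire when enough pixels change by more than a threshold between consecutive frames, which maps cheaply onto FPGA logic (a subtract, a compare, and a counter).

```python
import numpy as np

def image_trigger(prev, curr, pixel_delta=20, pixel_count=50):
    """Simple frame-difference auto-trigger.

    Fires when more than `pixel_count` pixels change by more than
    `pixel_delta` counts between consecutive frames.  Widening to int32
    avoids uint8 wrap-around in the subtraction.
    """
    delta = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return int((delta > pixel_delta).sum()) > pixel_count
```

Tuning the two thresholds trades sensitivity against false triggers from noise and flicker, which is presumably why the Phantom software wraps its version in such a stern disclaimer.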

Lastly a hint for some cool footage - flames!
A while ago I shot some liquid lighter fuel burning in a metal tray at 1000FPS ( from memory about 6" field of view). This produced some really cool footage, which I regularly use to demo/test my various LED matrix stuff as the natural flow makes things like framerate jitter and other glitches very obvious.
I've seen that footage in some of your videos! I'll have to do some more fire stuff, the propane balloons were really cool (or would that be hot?)!

is that a C mount lens, non of this autofocus stuff.   ;D

Yep, CS actually so you can use either CS or C lenses.
 
The following users thanked this post: Cliff Matthews

Offline Cliff Matthews

  • Supporter
  • ****
  • Posts: 1466
  • Country: ca
    • General Repair and Support
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #32 on: April 30, 2018, 07:10:50 am »
Linus Tech Tips gives a fun update. They've moved to new digs (apparently still doing cool stuff with 7 employees).
From what he said, I wonder if he's that 8th employee? Maybe a board-member / investor / marketing guy?
 
The following users thanked this post: MT

Offline thm_w

  • Frequent Contributor
  • **
  • Posts: 741
  • Country: ca
Re: EEVblog #947 - Chronos High Speed Camera Review
« Reply #33 on: May 01, 2018, 07:04:44 am »
Linus Tech Tips gives a fun update. They've moved to new digs (apparently still doing cool stuff with 7 employees).
From what he said, I wonder if he's that 8th employee? Maybe a board-member / investor / marketing guy?

I don't think so - they are hiring, so he probably just hasn't counted everyone new, or whatever: http://www.eevblog.com/forum/jobs/kron-technologies-is-hiring-(tesla500)-burnaby-bc/
Good video though, should get even more exposure from that.
 

