EEVblog Electronics Community Forum

EEVblog => EEVblog Specific => Topic started by: EEVblog on November 23, 2016, 09:38:38 am

Title: EEVblog #947 - Chronos High Speed Camera Review
Post by: EEVblog on November 23, 2016, 09:38:38 am
Dave takes a look at the 21,000fps Chronos Kickstarter high speed digital camera prototype.
https://www.kickstarter.com/projects/1714585446/chronos-14-high-speed-camera/
Tesla500 Youtube Channel:
https://www.youtube.com/user/tesla500

https://www.youtube.com/watch?v=rxYxTqALycM
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: mikeselectricstuff on November 23, 2016, 09:48:33 am
There is some competition - fps1000
https://www.kickstarter.com/projects/1623255426/fps1000-the-low-cost-high-frame-rate-camera
I do have some doubts over flash endurance, and whether it's even possible to do continuous record-until-triggered due to erase time.

Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: blueskull on November 23, 2016, 09:51:07 am
Here is the sensor: http://www.luxima.com/product_briefs/LUX1310.html
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: malaire on November 23, 2016, 10:15:24 am
There is some competition - fps1000
https://www.kickstarter.com/projects/1623255426/fps1000-the-low-cost-high-frame-rate-camera
I do have some doubts over flash endurance, and whether it's even possible to do continuous record-until-triggered due to erase time.
Flash endurance and erase time are not issues for the Chronos, since it doesn't save to the SD card during filming - everything is kept in RAM.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: mikeselectricstuff on November 23, 2016, 11:13:03 am
Not an issue for the Chronos, but potentially is for the fps1000. A while ago I looked up the datasheet for the flash they were using and it was of the order of a few thousand cycles.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: MK14 on November 23, 2016, 11:44:32 am
Not an issue for the Chronos, but potentially is for the fps1000. A while ago I looked up the datasheet for the flash they were using and it was of the order of a few thousand cycles.

I've had a quick look at the FPS1000. It seems to use DRAM (which they are calling video memory) for initial storage, in a "rolling" 1-minute buffer.

From their website, which takes ages to load:
Quote
Large 128Gbytes video memory for 1 full minute of recording time.

So it only needs to write the final images to flash once.

Flash would probably be way too slow (write speed) to be used at such high frame rates without using complicated techniques to get round it.

Disclaimer: Assuming it is DRAMs they are using. If it is flash chips (I hope not, and doubt it), then I'd be wrong.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: EEVblog on November 23, 2016, 11:48:24 am
Flash would probably be way too slow (write speed) to be used at such high frame rates without using complicated techniques to get round it.

Yep, no way they are using Flash as the video buffer memory.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: mikeselectricstuff on November 23, 2016, 12:04:39 pm
Flash would probably be way too slow (write speed) to be used at such high frame rates without using complicated techniques to get round it.

Yep, no way they are using Flash as the video buffer memory.
The fps1000 does use NAND flash to directly store the data. NAND writes are fast, and chips typically have multiple banks and/or die to achieve fast write throughput for already-erased pages.
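A rough sketch of why that interleaving matters: while one plane/die is busy with its page-program time, others can accept data, so throughput scales with the interleave factor. Everything below is illustrative - the page size, program time, and interleave width are made-up example numbers (not from the fps1000's actual parts), and `nand_write_mb_s` is a hypothetical helper:

```python
# Rough NAND write-throughput estimate: interleaving page programs
# across planes/die hides the per-page program time. All numbers
# here are illustrative, NOT taken from any specific NAND part.
def nand_write_mb_s(page_bytes, t_prog_us, n_parallel):
    pages_per_s = n_parallel * 1e6 / t_prog_us  # programs completed per second
    return pages_per_s * page_bytes / 1e6       # sustained MB/s

single = nand_write_mb_s(page_bytes=16384, t_prog_us=1300, n_parallel=1)
eight  = nand_write_mb_s(page_bytes=16384, t_prog_us=1300, n_parallel=8)
print(f"{single:.0f} MB/s single plane, {eight:.0f} MB/s with 8-way interleave")
```

The point being that a single plane is far too slow for high-speed video, but wide interleaving (plus pre-erased blocks) can get into plausible territory.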
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: bktemp on November 23, 2016, 12:11:08 pm
Flash would probably be way too slow (write speed) to be used at such high frame rates without using complicated techniques to get round it.

Yep, no way they are using Flash as the video buffer memory.
Look at the PCB images: there is an STM32, a Lattice FPGA, 2 flash chips, and the sensor and some power supply stuff on the other side. No DRAM!
They use 2x 128Gbit flash ICs:
https://www.micron.com/parts/nand-flash/mass-storage/mt29f128g08amcabh2-10z?pc=%7BB0761936-6571-4FB2-AAE2-6756F4D7E4E0%7D
It is pretty much a toy compared to the Chronos Camera.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: jeremy on November 23, 2016, 12:41:07 pm
Anyone know if the fpga bits will be open source? I can't afford a camera like this, but I'd be very interested to see how it is actually implemented.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: MK14 on November 23, 2016, 12:50:07 pm
Surely, if the FPS1000 was designed by a reasonably competent electronics engineer, it would be using DRAM or SRAM, for unlimited read/write cycles. (EDIT: That was when I thought the life was only a thousand or so; now I know it is >=60,000, it does not seem such a bad decision.)

But on the other hand, a quick glance at the flash seems to indicate that it has a write life of 100,000 cycles. (SLC NAND)

Assuming 1 minute per complete write.

60,000 minutes = approx. 6 weeks at 24/7.

So 6 weeks of continuous use is not too bad. But I would still prefer DRAM/SRAM.

The Micron site wants me to register in order to see the datasheet, so ...
I don't want to do that.

Ok I registered, and got the datasheet.

So the 100,000 life I gave above (from a site which does NOT need registering) was wrong.
A quick look at the datasheet seems to say that it is 60,000 program/erase cycles.

Quote
Quality and reliability
– Data retention: JESD47G compliant; see qualification report
– Endurance: 60,000 PROGRAM/ERASE cycles
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: Cliff Matthews on November 23, 2016, 12:58:54 pm
I'd love to see some slow-mo speaker cone dance, and drop coalescence with electronics chemicals like IPA and glycol.
https://www.youtube.com/watch?v=KJDEsAy9RyM
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: MK14 on November 23, 2016, 01:14:37 pm
I'd love to see some slow-mo speaker cone dance, and drop coalescence with electronics chemicals like IPA and glycol.

Thanks, that is quite an amazing video.
Especially the space video, later on in it, and the cello.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: MK14 on November 23, 2016, 01:23:40 pm
Flash would probably be way too slow (write speed) to be used at such high frame rates without using complicated techniques to get round it.

Yep, no way they are using Flash as the video buffer memory.

I still don't understand why they did not just put in some DRAM chips. The FPGA could surely cope, and they are reasonably cheap and readily available in large capacities if needed. They are also very fast, if used correctly.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: bktemp on November 23, 2016, 01:40:50 pm
I still don't understand why they did not just put in some DRAM chips. The FPGA could surely cope, and they are reasonably cheap and readily available in large capacities if needed. They are also very fast, if used correctly.
Because it is more expensive: DRAM needs refreshes. You also need to read the data from DRAM and copy it to flash at some point, so you need DRAM + flash.
The fps1000 camera is built down to a price. Just compare the main components used in the Chronos and the fps1000: TI SoC running Linux vs. STM32, Lattice FPGA (I don't know the exact part number offhand, but it is a larger one) vs. Lattice MachXO2 (low-end FPGA).
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: MK14 on November 23, 2016, 02:05:56 pm
I still don't understand why they did not just put in some DRAM chips. The FPGA could surely cope, and they are reasonably cheap and readily available in large capacities if needed. They are also very fast, if used correctly.
Because it is more expensive: DRAM needs refreshes. You also need to read the data from DRAM and copy it to flash at some point, so you need DRAM + flash.
The fps1000 camera is built down to a price. Just compare the main components used in the Chronos and the fps1000: TI SoC running Linux vs. STM32, Lattice FPGA (I don't know the exact part number offhand, but it is a larger one) vs. Lattice MachXO2 (low-end FPGA).

The flash chips they have used seem to be a VERY expensive type, and there are (apparently) two of them.
They are not the cheap pen-drive type of flash chip. They seem to be the type used in high-endurance, very high performance server SSD drives/PCI devices, which are very expensive.

I did not mention the refresh because I assumed that the writing to it was (normally) continuous and linearly sequential, which should not need refreshing, since it is continually cycling through the various address rows/columns.
Just make sure that the least significant address bits are the ones used for the refresh (rows probably, else columns).
When not in use, the FPGA or micro could keep the refresh going.

As long as the FPGA that they used (fps1000) is up to the task, it should be ok.

But I do agree with you. Very large capacities of DRAM can be expensive, bigger FPGAs can be more useful, and DRAM refresh is a pain (but not too difficult to handle, if you are reasonably competent at electronics/software).

I got a bit confused with the DRAMs, because I am so used to the types used in PCs being readily available at reasonable prices. But that is because they are used in such huge quantities, which keeps the prices down.

Small-scale production can't buy DRAMs in such huge quantities, so the price goes up, and they can't get any special deals or the parts which are only sold to big users of DRAM (speculation).

Compared to cheap flash types, I agree DRAM is probably a fair bit more expensive.
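MK14's refresh-by-streaming point can be sanity-checked numerically: if frames stream through the DRAM buffer linearly, every row gets rewritten once per pass, so no explicit refresh is needed as long as one full pass completes within the retention window. A sketch, with illustrative numbers only (`pass_time_ms`, the row counts, and the bandwidth are not from any specific DRAM part):

```python
# Does a full linear pass through the frame buffer fit inside the
# DRAM retention window (~64 ms is typical)? If so, streaming writes
# effectively refresh every row. Numbers below are illustrative.
def pass_time_ms(rows, row_bytes, write_mb_s):
    total_bytes = rows * row_bytes
    return total_bytes / (write_mb_s * 1e6) * 1e3  # seconds -> ms

t = pass_time_ms(rows=65536, row_bytes=8192, write_mb_s=12800)
print(f"full-buffer pass: {t:.1f} ms (retention window ~64 ms)")
```

With these example figures a pass completes well inside the window, supporting the idea that a continuously streaming buffer needs explicit refresh only when recording stops.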
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: boffin on November 23, 2016, 03:58:11 pm
Awesome product; I want one even though I have zero need for it.

Bonus: looks like the guy lives really close to me (I instantly recognized streets when he was doing his electric car drive) - I wonder if he needs any help.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: BobC on November 23, 2016, 09:42:19 pm
Just after the turn of the millennium I was involved in creating the Redlake HG-100K camera (Google it if you are curious), a super high-end camera capable of taking video at up to 100K FPS while being subjected to repeated 100G impacts.  It too used C-mount lenses, which generally tore right off during extremely violent events, with no damage to the camera.

It was designed to serve two main markets: Being inside cars during crash tests (it had to survive being crushed), and being very, very close to weapons tests (it had to survive being literally blown away).

While none of the alpha builds could capture at full speed, the second beta camera could, and we found ourselves running around the lab trying to find something we could shoot at 100K FPS.  We needed a small extremely fast moving target with tons of light.

We tried the conventional "pop the water balloon" test, but that gets boring at 2K FPS, and we couldn't come close to getting enough light past 10K FPS.  We shot directly at a fluorescent bulb, but that got boring at 5K FPS, though we did get useful captures at 30K FPS.

But fully illuminated 100K captures eluded us.  We considered aiming it straight at the sun, but elected to not melt our first working camera.  We held a meeting to work through the issue, with a powerpoint presentation to review all we had tried so far.  At the second slide we all looked at each other, then grabbed the projector and ran into the lab, where we took it apart to expose the bulb.

We slapped on a tele-macro lens and immediately got absolutely gorgeous video of the arc wandering within the bulb.  I ran it through some video analysis software to quantize how the arc volume, position and velocity changed with time, generated a quick data overlay, then posted the video on our website.

The next morning we got a call from Japan asking for an on-site visit.  The manufacturer of the projector we used was having optical path issues they had traced to the arc behavior, and they wanted to bring some prototype bulbs for us to image.  Of course we said yes, and their larger than expected team arrived two days later.  The Japanese scientists and executives were so impressed that they offered us an obscene amount of money to let them take one of the beta units home.  I mean really obscene.  Of course we said yes!  Our very first production unit went to them so we could get the beta unit back.

One of the things we had learned with prior high-speed digital cameras was the importance of good IR filtering.  We didn't have bright and cool LED lighting at the time, so we were using arc lamps, which generate a ton of IR.  Most lenses are more than willing to focus IR right onto your sensor, which will promptly overheat and melt.  We didn't let that happen, of course, since we had the foresight to build temperature sensors into our custom imager die.

Yes, we had to design custom silicon for this beast, and we went to the top pixel designers on the planet (at the time) to handle our needs.  Until then, all cameras above 1K FPS (including our own) used CCDs.  We wanted to go with CMOS for several reasons, but we needed to confront many issues to do so, the most important being pixel noise, shutter control, and readout issues.  The sensor had many more readout channels than any other CMOS sensor of the day.  It also had multiple shutter modes (including rolling and global).

The most critical decision was what foundry and feature size to use.  We intentionally went with a prior-generation feature size at a foundry that had truly mastered it, which not only reduced our risks, but also gave us a far better sensor in the end.

We also had to consider what to put on top of the silicon.  We needed microlenses to improve light gathering, and we wanted the option of a Bayer mask for color imaging (we got triple resolution and quadruple sensitivity in monochrome, but almost everyone wanted color).

Once the overall system design was finalized, my job became the design and implementation of the software control interfaces within the camera: The outward-facing interface (protocol/API) for our GUI control app and the customer's industrial automation software to access, and the inward-facing interfaces to the imaging pipeline control FPGAs and other hardware subsystems such as the Ethernet controller, video encoder, and so on.

The GUI team was working extremely hard to find ways to intelligently provide access to the huge number of camera features.  The external camera interface evolved at a rapid pace to support them.  Many camera settings had useful ranges that varied based on the values of other settings. The overall state machine was a true nightmare, far beyond what the GUI folks could support in a reasonable amount of time (and impossible, truth be told, since the camera FPGAs were still being tweaked).

We needed the camera to provide an "always valid" operational state, which meant returning errors when the user tried to change a parameter to a value that would cause an invalid configuration.  But this made the configuration process extremely delicate and error-prone.  We needed a way to interactively "evolve" the configuration past/through temporarily invalid intermediate configuration states.

So I abstracted the external API to permit it to virtualize itself and provide a "What if?" configuration mode that the GUI could use to provide error-free interactive feedback to the user.  This required minor FPGA changes to support a "try but don't die" configuration mode, which later turned out to have extremely valuable uses beyond the initial camera configuration.

I also wrote the software for the production test jigs, which was a total blast.  We needed to keep things simple for the assemblers and testers while collecting a ton of data for QA and QC purposes, the initial goal being to rapidly evolve our design during the first production run to make it easier and cheaper to manufacture.

Unfortunately, the flood of pre-orders we expected (and were counting upon) failed to materialize, and the company had to sell itself to get the funds needed to push the HG-100K into production.  After which the entire engineering department was laid off, since the new owner wanted only the existing products, not the development team.

I did make one big mistake a month after the layoff: I was offered a lucrative short-term consulting contract, which I turned down out of spite.  Silly and stupid, for multiple reasons.  It would have taken the pressure off my job search while letting me once again work on a product I loved, neither of which had anything to do with the new owners. Who weren't bad folks: They did offer jobs with relocation benefits to our entire production team and to most of the customer support / field engineering team.

I suppose I should take a look at the Chronos hardware and software...
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: Bud on November 24, 2016, 12:09:29 am
I'd love to see some slow-mo speaker cone dance, and drop coalescence with electronics chemicals like IPA and glycol.
https://www.youtube.com/watch?v=KJDEsAy9RyM

I hope this guy rots in Hell for injecting the stupid color table every few seconds into the video. A good example of how one should NOT make videos  :rant:
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: tesla500 on November 24, 2016, 03:27:56 am
Just after the turn of the millennium I was involved in creating the Redlake HG-100K camera (Google it if you are curious), a super high-end camera capable of taking video at up to 100K FPS while being subjected to repeated 100G impacts.  It too used C-mount lenses, which generally tore right off during extremely violent events, with no damage to the camera.
....

Wow! I absolutely love industry behind-the-scenes stories like this! Thanks very much!

This was before Vision Research had really taken off, wasn't it?

1504 x 1128 at 1000 fps was really good for that time. How many readout channels did you end up using? I'm surprised you didn't get many orders. Too expensive?

David
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: joeqsmith on November 24, 2016, 04:16:30 am
Dave, I watched your review (along with most of the videos I could find on YouTube for it).  I am interested in this camera as a lab tool, but I am not sure if it would be much better than your Sony for what I want to do with it.  I would be very interested in seeing some clips of it compared with the Sony, looking at some arcs.

https://www.youtube.com/watch?v=cC6fMnjqY8o&feature=youtu.be
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: BobC on November 24, 2016, 05:40:36 am
This was before Vision Research had really taken off, wasn't it?

1504 x 1128 1000fps was really good for that time. How many readout channels did you end up using? I'm surprised you didn't get many orders. Too expensive?

Well, what killed us was Photron.  Turned out they used the same sensor designers we did.  Their resulting camera was inferior to the HG-100K, but it covered 90% of our market with 80% of our specs for 50% of the price.  Plus they had a much larger distributor infrastructure and better-funded marketing.  We simply weren't far enough ahead in technology to warrant a premium price, and revenue projections showed it.

The line was eventually profitable in the ruggedized and high-G markets, since we never lied about our environmental specs, and pretty much everyone else did.  The HG-100K was built to military standards, and it had the best warranty by far.

If the HG-100K had any functional issues, it was simply that it was heavy.  In an early car crash test one was ripped from its mount.  The customer had to design new mounts, which was already on their to-do list.  Seeing the camera come loose at 50 MPH and fly across the lab certainly caused some hearts to stop, but the camera was good to go after a lens swap.  Our cables were designed to separate outside the camera, so the connectors had no damage.

And no video was lost!  The camera stopped recording when the cables separated, but a small internal battery kept the DRAM alive for up to 24 hours, so they had lots of time to go get the camera, dust it off, hook it up and then download the video.

Crash tests are extremely expensive: Losing data is NOT an option, no matter what, and I believe the HG-100K was the only camera that delivered 100% in that area.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: edy on November 25, 2016, 04:37:54 am
Awesome camera! I was wondering what you thought of this idea. Parallax issues aside, would it be feasible to get an array of 20 cameras recording the same scene at 60 fps, but have each one start at a time delayed from the next by 1/20th of a 1/60th of a second... so basically they are recording the same scene but out of sync by 1/1200th of a second from each other.

So camera 1 takes frame 1 at time 0/1200 seconds, camera 2 takes frame 1 at time 1/1200, camera 3 at 2/1200, etc.... until camera 20 is at time 19/1200. Then camera 1 takes its 2nd frame at 20/1200 (i.e. 1/60), camera 2 takes frame 2 at time 21/1200, etc.... So each camera is capturing at 60 fps in parallel, but using slightly out-of-sync cameras, so in effect you are getting 1200 fps. It's not much, but may be a technique to ramp up the capture speed with cheaper components.
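The staggered-trigger idea can be sketched as a small schedule generator (a toy illustration only; `trigger_schedule` is a made-up helper). In general, N cameras at F fps, each offset from the next by 1/(N*F), interleave to an effective N*F fps - so 20 cameras at 60 fps would give 1/1200 s offsets:

```python
# Staggered-trigger sketch: n_cams cameras at base_fps, each delayed
# by 1/(n_cams * base_fps) relative to the previous one. Interleaving
# their frames gives an effective n_cams * base_fps (sync and
# parallax issues aside).
def trigger_schedule(n_cams, base_fps, n_frames=2):
    dt = 1.0 / (n_cams * base_fps)   # inter-camera offset
    frame_t = 1.0 / base_fps         # per-camera frame period
    return {cam: [cam * dt + k * frame_t for k in range(n_frames)]
            for cam in range(n_cams)}

sched = trigger_schedule(n_cams=20, base_fps=60)
# camera 0 fires at 0/1200 s, camera 1 at 1/1200 s, and camera 0's
# second frame lands at 20/1200 s = 1/60 s.
print(sched[0][0], sched[1][0], sched[19][0])
```

The schedule only solves the timing half of the problem; as noted below, parallax, shutter sync, and matching exposure between cameras are the hard parts.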
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: Cliff Matthews on November 25, 2016, 08:23:45 pm
Awesome camera! I was wondering what you thought of this idea. Parallax issues aside, would it be feasible to get an array of 20 cameras recording the same scene at 60 fps, but have each one start at a time delayed from the next by 1/20th of a 1/60th of a second... so basically they are recording the same scene but out of sync by 1/1200th of a second from each other.

So camera 1 takes frame 1 at time 0/1200 seconds, camera 2 takes frame 1 at time 1/1200, camera 3 at 2/1200, etc.... until camera 20 is at time 19/1200. Then camera 1 takes its 2nd frame at 20/1200 (i.e. 1/60), camera 2 takes frame 2 at time 21/1200, etc.... So each camera is capturing at 60 fps in parallel, but using slightly out-of-sync cameras, so in effect you are getting 1200 fps. It's not much, but may be a technique to ramp up the capture speed with cheaper components.
Appears futile. Reassembly from compressed video would likely be a processing nightmare, as would white-balance symmetry, and cheaper consumer-grade cams lack the oomph to save raw format. Parallax and shutter sync are issues that just can't be brushed aside. Where would one get that many cams and tripods so easily? The resultant production might look cool though... or jittery, like a silent film?
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: amirm on November 25, 2016, 10:24:29 pm
Where would one get that many cams and tripods so easily? The resultant production might look cool though.. or jittery like a silent film?
That is routinely done in the professional world for "Matrix-like" effects.  Here is an example:

(http://mitchmartinez.com/wp-content/uploads/2015/01/outside-rig1-2.jpg)

De-warping would mostly work for what he is proposing, but this many cameras would still be quite expensive and a major hassle to set up for typical YouTube-like videos.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: Cliff Matthews on November 26, 2016, 01:49:36 am
Where would one get that many cams and tripods so easily? The resultant production might look cool though.. or jittery like a silent film?
That is routinely done in professional world for "matrix like" effects. 
De-warping would mostly work for what he is proposing but this many cameras will still be quite expensive and major hassle to set up for typical youtube like videos.
Well, I stand corrected...  :palm: Searches on "Pluraleyes vs DreamSync" bring up vids on nifty specialist camera arrays and such. I suppose one could make a plexiglass 24" cylindrical blast container with 8 HS cams looking down on bursting TO-220's and rocketing el-caps... after a while, though, it may get old. Also, Dave might develop grey hair waiting for GBs of sync-age to happen.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: BobC on November 26, 2016, 07:19:24 am
We also had to consider what to put on top of the silicon.  We needed microlenses to improve light gathering, and we wanted the option of a Bayer mask for color imaging (we got triple resolution and quadruple sensitivity in monochrome, but almost everyone wanted color).

One more thing about Bayer filters: We also did research to see if we could tweak the sensor silicon and/or the Bayer color filters to get excellent color with less light lost in the filters. I attended a summer session at RIT's MCSL (Rochester Institute of Technology's Munsell Color Science Laboratory) to learn more about human color perception, the physics of light filtering and capture, and how color capture/rendering information is documented and shared (e.g., color profiles).  We wanted to optimize our entire imaging process from subject to camera to display.

In the late '90's Carver Mead started Foveon to pursue an interesting property of silicon: Light of different wavelengths is absorbed at different depths.  He designed the Foveon sensor to overcome the cruelest limitation of the Bayer filter matrix: The color information was captured with about 1/4 the resolution of the luminance (monochrome) information, causing significant spatial issues in color reconstruction.  The Foveon sensor did this by vertically stacking the RGB pixels, with no color filtering needed above the sensor pixel.

In the Foveon sensor, the top photosite would capture "blueish" light (+ luminance), the middle would be "greenish" and the bottom would be "reddish".  Unlike traditional color filters, silicon color filtering is extremely messy.  But the Foveon team employed some amazing math to get good color with an imaging sensitivity only slightly below that of a monochrome sensor.  But their sensor had other issues that took literally a decade to overcome, and it was relegated to niche applications.

We were very interested in the Foveon math.  Could we modify our Bayer filter matrix to have it let more light through while still getting great color reconstruction?  Could we do so well enough to eliminate the need for having a monochrome sensor option?

After all, the biggest problem in high-speed photography is capturing as many photons as possible as quickly as possible, with minimal losses along the entire optical path.  So you start with the lens: There were several f/0.95 C-mount fixed lenses on the market, and even zooms around f/1.3.

Then you add AR (anti-reflection) coatings to the glass covering the sensor in its package.  Multiple coatings are generally used, including UV and IR, though keeping losses down meant these coatings were very expensive.

You needed to put AR coatings on both sides of the glass: Light bouncing off the sensor (if any) must not be allowed to reflect into other pixels. The main visual effect of such reflection is reduced contrast.  (There are specific test patterns and exposure techniques that can reveal this behavior: You brightly illuminate a single pixel then examine the scatter in neighboring pixels.)

Finally you get to the sensor itself.  First come the microlenses.  These are generally spherical (well, hemispherical), but we developed a very slick way to make non-spherical microlenses that captured a tiny bit more light.

You carefully do all the above work to lose as few photons as possible, then you intentionally discard over half of them in the next layer, the Bayer filter matrix.  This is a knife to the heart of high-speed camera designers.  While we all love looking at vibrant color images, we also mourn the photons that died just before reaching the silicon.

We successfully created prototype sensors with Bayer color filters having wider passbands, and obtained terrific color reconstruction results.  The only problem was sensor yield: Our new filters had trouble surviving the rigors of semiconductor processing and packaging, and didn't survive at all well at the higher ends of our required operating temperature range.  It was chemistry that finally made us revert to traditional and proven color filter materials for the HG-100K sensor.

To complete the story of the photon's path, the photons surviving the Bayer filter penetrate the silicon surface, where they are absorbed.  This absorption can occur via multiple processes, but the end goal is always to have each photon dislodge one or more electrons.  These electrons are accumulated as a charge until the exposure ends, after which the electrons are read out as a current, which one or more transistors at the photosite boost before it is read from the sensor.

The physics of photon capture can get hairy at high speed, since not all absorbed photons cause "prompt" electron release.  This means the pixel must be forcibly reset before starting the next exposure, which can increase the dead-time between exposures, which in turn can decrease the overall frame rate.  There are several tricks that can be used to limit this behavior, but it is always present to some extent.

There are multiple sources of "stray" or "non-photo emission" electrons within a sensor pixel.  Most are due to thermal effects, which are seldom a problem in high-speed photography, but cause massive headaches for long-exposure astronomical photography (and hence their use of chilled sensors).

The most insidious source of stray electrons is the so-called "dark current", which is simple leakage into the photosite from nearby areas in the silicon, generally due to reverse-bias leakage.  It accumulates in the photosite from the end of the reset until the end of the next exposure.  If you put on the lens cap and take some images, you will see that none of the pixels are truly dark: All will have varying levels of electrons present even when no light is present.

Fortunately, it is straightforward to compensate for many stray electron sources by taking dark images just before each high-speed run, then processing the resulting video to remove the dark current's effect.  I strongly recommend doing this over about 2K FPS: Your processed video will look much nicer.  At slower speeds the photoelectrons will dominate over most noise sources, so there is little need for correction.
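The dark-frame correction described above can be sketched in a few lines (synthetic data only; the frame size, noise level, and variable names are all made up for illustration): average a stack of lens-cap-on frames into a master dark frame, then subtract it from each captured frame.

```python
# Dark-frame correction sketch (pure Python, synthetic data):
# average a stack of "lens cap on" frames into a master dark frame,
# then subtract it from each real frame before analysis.
import random

random.seed(0)
W = H = 4
dark_offset = [[random.uniform(2, 8) for _ in range(W)] for _ in range(H)]

def dark_frame():
    # one dark capture: fixed per-pixel offset plus readout noise
    return [[dark_offset[y][x] + random.gauss(0, 0.5)
             for x in range(W)] for y in range(H)]

darks = [dark_frame() for _ in range(16)]
master = [[sum(f[y][x] for f in darks) / len(darks)
           for x in range(W)] for y in range(H)]

scene = [[100.0] * W for _ in range(H)]
raw = [[scene[y][x] + dark_offset[y][x] for x in range(W)] for y in range(H)]
corrected = [[raw[y][x] - master[y][x] for x in range(W)] for y in range(H)]

residual = max(abs(corrected[y][x] - scene[y][x])
               for y in range(H) for x in range(W))
print(f"max residual after correction: {residual:.2f}")
```

Averaging many dark frames beats subtracting a single one, since the random readout noise averages away while the fixed-pattern dark current remains.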

The dark current problem is enormously worse for IR cameras, which typically include an internal shutter that automatically activates every 20-60 seconds to quickly capture a fresh background image. You can see this happen when Dave uses his FLIR camera.

So, when you follow a photon from the light source to the subject, which reflects it toward the camera lens, through optically coated glass, then through microlenses and Bayer arrays, into the silicon, conversion to electrons, readout to an ADC, storage in memory, downloading from the camera, then post-processing and finally being routed to a PC's video card and to a monitor, where a backlight shines through more color filters before being passed or blocked by the LCD layer and polarizing filters, it should become clear that there are many places for things to go wrong, and many places where corrections can be obtained and applied.

It's not just the camera:  Only when the entire imaging process is optimized end-to-end can the best results be achieved.  The camera may be the most important part of the process, but it doesn't stand alone.  It takes time to learn how to properly use a high-speed imaging system.

But you can't do much about those awful losses in the Bayer filters.  Sigh.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: lukier on November 26, 2016, 10:24:48 am
To complete the story of the photon's path, the photons surviving the Bayer filter penetrate the silicon surface, where they are absorbed.  This absorption can occur via multiple processes, but the end goal is always to have each photon dislodge one or more electrons.  These electrons are accumulated as a charge until the exposure ends, after which the electrons are read out as a current, which one or more transistors at the photosite boost before it is read from the sensor.

Was your sensor back-illuminated? In recent years this trick has started to enter the consumer/machine vision market (OmniBSI, Sony Pregius etc) - I guess the manufacturing process has become more reliable. Increasing the fill factor improves sensitivity, and now whenever I can I prefer cameras with back-illuminated sensors (e.g. PointGrey Grasshopper 3).

AFAIR, before back-illumination, rolling-shutter sensors were a bit better in that respect because they don't need as many transistors per pixel, so there was more space for the photodiode. But, well, it's rolling shutter :/

Bayer filters are a pain in general. Fun stuff happens when you have a camera with a Bayer filter but no IR-blocking filter: the IR leaks through the color filters, mostly the red one, and messes up the debayering. Not the way to make a multi-spectral camera.

On the computing & math side, yes, awesome stuff can be done in post-processing with some clever math, especially nowadays - from single-pixel compressive-sensing cameras to computational photography (HDR) to 3D reconstruction & tracking. The problem is often power consumption: the sensor might be low power, but then you need a 200W GPU to do the math and produce the output.

Well, that's the case in my research environment, at least. In production one could get an ASIC made to do the math, which would reduce the power consumption a lot, but it's still not a cheap solution. That's why I still think the best approach is to get as much as possible done by the optical path itself, at the speed of light - or at least not to lose information at that stage (hence sensitivity, pixel size, low dark current, an Airy disk that's not too big, etc.). ADCs usually have pretty poor ENOB, so after digitisation it may be too late.
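For reference on that ENOB point, the usual conversion from a datasheet SINAD figure is ENOB = (SINAD - 1.76 dB) / 6.02; a quick sketch (the 62 dB figure is just an example, not any particular converter):

```python
def enob(sinad_db):
    """Effective number of bits for a given SINAD in dB."""
    return (sinad_db - 1.76) / 6.02

# A converter quoting 62 dB SINAD only manages about 10 effective bits,
# regardless of its nominal resolution.
bits = enob(62.0)
```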
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: BobC on November 26, 2016, 10:38:03 pm
Was your sensor back-illuminated? In recent years this trick has started to enter the consumer/machine vision market (OmniBSI, Sony Pregius etc) - I guess the manufacturing process has become more reliable. Increasing the fill factor improves sensitivity, and now whenever I can I prefer cameras with back-illuminated sensors (e.g. PointGrey Grasshopper 3).

We went to extreme lengths to avoid BSI for several reasons, chief among them being reduced yields and reduced ruggedness (not good for surviving 100G impacts).

BSI is typically used as a last resort to improve sensor performance when the silicon can't be pushed any further. It used to be employed primarily to enhance sensitivity when the next foundry node wasn't ready in time, but it is also important as pixels shrink and the front-side aluminization causes too much local diffraction (implicit filtering). There are additional reasons to use BSI in long-exposure scientific sensors, none of which apply to high-speed video.

On the computing & math side, yes awesome stuff can be done as post-processing with some clever math, especially nowadays. From single pixel compressive sensing camera to computational photography (HDR) to 3D reconstruction & tracking. The problem often is the power consumption. The sensor might be low power, but then you'll need 200W GPU to do the math and produce the output.

High-speed camera systems tend not to have these things called "power budgets". The limitation is not how much power you can shove in, but how much you can take out. And since most high-speed runs last on the order of single-digit seconds, heat removal really isn't an issue: we're just fine with letting the camera get hot. And the post-processing PC is a minor line item next to the camera cost.

We didn't do much video processing in real-time.  Our preferred mode was to capture and download raw video, then correct it on the user's system.  However, that can become a hassle during setup and in high throughput environments, so our camera provided a real-time M-JPEG stream that included dark-current removal and single-pass color correction.  It was limited to 15 FPS full-frame (up to 60 FPS for sub-frame), which was more than good enough to help with things like camera setup.  It could also stream during full-speed capture by sub-sampling the capture stream from DRAM.

M-JPEG really kills video quality from an engineering perspective, but it makes a terrific "viewfinder mode".

Having a hardware JPEG encoder onboard allowed the camera to do some neat tricks by observing the content of the JPEG frames (which is much faster and easier than analyzing raw frames).  The camera firmware could auto-detect two important states: 1) Dark field (lens cap on) and 2) presence of a 24-patch Munsell Color-Checker. Once detected, the first would let us calculate a dark field reference from the raw video in DRAM, and the second would let us auto-generate color correction matrices.  Our desktop software would also perform the same functions, but in a much more sophisticated way.
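To give a feel for the color-correction side, fitting a 3x3 CCM from chart patches is essentially a least-squares problem. A toy sketch (Python/numpy; the patch values and matrix are synthetic, and this ignores the linearization and white-balance steps a real pipeline needs):

```python
import numpy as np

def fit_ccm(measured, reference):
    """Solve measured @ M ~= reference for the 3x3 color-correction matrix M."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

# Simulate 24 chart patches: "true" colors, a known distortion, and the
# camera's measured response.
rng = np.random.default_rng(1)
reference = rng.uniform(0, 1, size=(24, 3))      # reference patch RGBs
true_M = np.array([[1.20, -0.10,  0.00],
                   [-0.05, 1.10, -0.05],
                   [0.00, -0.10,  1.15]])
measured = reference @ np.linalg.inv(true_M)     # simulated camera RGBs

M = fit_ccm(measured, reference)                 # recover the CCM
corrected = measured @ M                         # apply it to camera output
```

With noisy real patches the fit is no longer exact, and a weighted or constrained solve (e.g. preserving white) is common, but the core is the same least-squares step.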
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: mikeselectricstuff on December 03, 2016, 03:30:00 pm
Back to the Chronos -
David - some things arising from the Amp Hour episode that I was wondering about.

You state that the camera's application and dev environment will be Open Source. How will this work when you move to the $5000 development environment you mentioned? Will it be partitioned, with the latter providing an API for an open-source user application to run on top of, or will you just include some binary blobs for the stuff produced with that package?

You mentioned possible user hacks to increase memory, but presumably that would require FPGA changes - or have you already broken out the pins to support future larger-capacity SODIMMs? Does the SODIMM format you use have a roadmap for larger capacities?

Re. the DDR3 memory controller IP - you said it's basically an encrypted blob. How did you fix the problems you had with it? Was it parameter tweaking, or to do with how your HDL talked to it?
Did you have any interaction with Lattice over the issues? I'd definitely think it would be worth suggesting that you could document how you got it working in exchange for a free license... seems like the whole point of paying for stuff like this is that you shouldn't then have to dick around to get it to work!

Are you going to have some sort of seal to prevent dust getting to the sensor? I'm visualising some sort of soft foam spacer ring sandwiched between the front case and the sensor PCB.

I also didn't see any provision for fixing a lanyard/wrist-strap - seems an obvious thing to have for security against dropping when handheld.

Seems like a useful feature would be the equivalent of oscilloscope segmented memory - shooting a fixed or variable-length sequence on a trigger (with realtime timestamps). Is this implemented/planned?
Something else I have a recollection of seeing, but I'm not sure if it was yours or the FPS1000: automatically detecting changes in the image to use as a trigger. I'd imagine this wouldn't be hard to do in the FPGA as you have plenty of space.

Lastly a hint for some cool footage - flames!
A while ago I shot some liquid lighter fuel burning in a metal tray at 1000FPS (from memory, about a 6" field of view). This produced some really cool footage, which I regularly use to demo/test my various LED matrix stuff, as the natural flow makes things like framerate jitter and other glitches very obvious.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: jonovid on December 03, 2016, 03:39:58 pm
Is that a C-mount lens? None of this autofocus stuff.   ;D
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: tesla500 on January 07, 2017, 01:25:03 am
Oops, forgot to check this thread for new replies.

Back to the Chronos -
David - some things arising from the Amp Hour episode that I was wondering about.

You state that the camera's application and dev environment will be Open Source. How will this work when you move to the $5000 development environment you mentioned? Will it be partitioned, with the latter providing an API for an open-source user application to run on top of, or will you just include some binary blobs for the stuff produced with that package?
Everything required to compile the user application is open source and freely available. The user application uses features of the dev environment that are included as binaries in the camera's OS. If users want to modify those binaries (which deal with low-level operation of the video compression and display hardware mostly), they'll need to pay for the dev environment license.

You mentioned possible user hacks to increase memory, but presumably that would require FPGA changes - or have you already broken out the pins to support future larger-capacity SODIMMs? Does the SODIMM format you use have a roadmap for larger capacities?
Everything is there to support dual 16GB SODIMMs, but there's a bug with Lattice's controller that stops more than one SODIMM from working: it doesn't initialize the mode registers properly on the second SODIMM. Otherwise it seems to operate the memory fine; it's just "missing" because it was never initialized. I hope to figure this out eventually, but getting the camera out is the priority.

Re. the DDR3 memory controller IP - you said it's basically an encrypted blob. How did you fix the problems you had with it? Was it parameter tweaking, or to do with how your HDL talked to it?
Did you have any interaction with Lattice over the issues? I'd definitely think it would be worth suggesting that you could document how you got it working in exchange for a free license... seems like the whole point of paying for stuff like this is that you shouldn't then have to dick around to get it to work!
AFAIK I'm the only one who has a 64-bit-wide DDR3 interface working on the ECP5; I suspect Lattice themselves don't even have it working yet, as they were clueless when I contacted them about the problem. Well, at least the outsourced support engineer in India was, even after contacting the USA office... I still haven't fully solved it, but the workaround is good enough for now. I may be a real dick and hold off telling anyone about the fix, as it's a significant competitive advantage right now. The next FPGA from any manufacturer that can handle this much RAM costs ~$160 vs $40 for the ECP5.

They've so far stonewalled all my requests for the source to try and fix it myself, even when I offered to sign all the NDAs and promised to tell them about any solutions I find. All this even though it's so tied to chip-specific hardware that you could never use it on any other FPGA.

Are you going to have some sort of seal to prevent dust getting to the sensor? I'm visualising some sort of soft foam spacer ring sandwiched between the front case and the sensor PCB.
Yes, I'm getting some foam cut for that.

I also didn't see any provision for fixing a lanyard/wrist-strap - seems an obvious thing to have for security against dropping when handheld.
There's very limited room in the case, but I've added a bunch of mounting holes and will have holders that could be used for a neck or wrist strap.

Seems like a useful feature would be the equivalent of oscilloscope segmented memory - shooting a fixed or variable-length sequence on a trigger (with realtime timestamps). Is this implemented/planned?
Something else I have a recollection of seeing, but I'm not sure if it was yours or the FPS1000: automatically detecting changes in the image to use as a trigger. I'd imagine this wouldn't be hard to do in the FPGA as you have plenty of space.
Yes, segmented memory and image based trigger will be added. I recall Phantom cameras have an "Image based auto-trigger". The lawyers really got after them on that one, there's a big disclaimer (with no "don't show this again" checkbox) whenever you start the Phantom Control Software about not using the image auto trigger to trigger any external event that could hurt someone.
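For the curious, the core of an image-based trigger can be as simple as thresholding the frame-to-frame difference. A toy sketch (Python/numpy standing in for what would be a running-sum circuit in the FPGA; the threshold and frame data are made up):

```python
import numpy as np

def motion_trigger(prev, curr, threshold=5.0):
    """Fire when the mean absolute pixel difference between frames exceeds the threshold."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return float(diff.mean()) > threshold

# A static scene vs. the same scene with a bright object entering part of it.
quiet = np.full((8, 8), 100, dtype=np.uint16)
moving = quiet.copy()
moving[2:6, 2:6] += 50
```

A real implementation would also want noise-floor calibration and perhaps a region-of-interest mask, but the per-frame math stays this simple, which is why it maps so well onto FPGA logic.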

Lastly a hint for some cool footage - flames!
A while ago I shot some liquid lighter fuel burning in a metal tray at 1000FPS (from memory, about a 6" field of view). This produced some really cool footage, which I regularly use to demo/test my various LED matrix stuff, as the natural flow makes things like framerate jitter and other glitches very obvious.
I've seen that footage in some of your videos! I'll have to do some more fire stuff, the propane balloons were really cool (or would that be hot?)!

Is that a C-mount lens? None of this autofocus stuff.   ;D

Yep, CS actually so you can use either CS or C lenses.
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: Cliff Matthews on April 29, 2018, 09:10:50 pm
Linus Tech Tips gives a fun update. They've moved to new digs (apparently still doing cool stuff with 7 employees).
From what he said, I wonder if he's that 8th employee? Maybe a board member / investor / marketing guy?
https://www.youtube.com/watch?v=2FJXOXSLc3k (https://www.youtube.com/watch?v=2FJXOXSLc3k)
Title: Re: EEVblog #947 - Chronos High Speed Camera Review
Post by: thm_w on April 30, 2018, 09:04:44 pm
Linus Tech Tips gives a fun update. They've moved to new digs (apparently still doing cool stuff with 7 employees).
From what he said, I wonder if he's that 8th employee? Maybe a board member / investor / marketing guy?

I don't think so - they're hiring, so he probably just didn't remember everyone new or whatever: https://www.eevblog.com/forum/jobs/kron-technologies-is-hiring-(tesla500)-burnaby-bc/ (https://www.eevblog.com/forum/jobs/kron-technologies-is-hiring-(tesla500)-burnaby-bc/)
Good video though, should get even more exposure from that.