Author Topic: Inquiry regarding combining two or more cameras together for higher resolution  (Read 1748 times)


Offline avb284782235 (Topic starter)

  • Newbie
  • Posts: 4
  • Country: us
Hello, I'm wondering whether anybody has ideas on how to combine two digital camera feeds to achieve near-real-time super resolution, using two camera modules placed right next to each other or slightly separated.  I have done some research on this topic, perhaps not enough, but I could not find a solution that I would be able to replicate without an extensive technical background.

I am trying to combine the feeds from two FLIR Lepton 3.5 thermal imaging modules (or possibly other digital-output cameras) to achieve a higher resolution for a hobby first-person-view use case.  I have found a version of the Lepton that outputs an analog signal; would that perhaps be an easier starting point for superimposing the images with some kind of image processing to achieve higher resolution?  Latency is also a concern, as is the board, since this is for a micro UAV platform in the under-250 gram range.  I know my question is a little out of order in terms of what I'm trying to achieve; I would just like to know if anyone has any pointers or could possibly provide assistance.  This idea/possible project would be for an RC FPV use case, so I wanted to know whether there are any lightweight pre-existing products that could achieve this (although I presume they would have to be custom made or expensive) with low latency of a couple of milliseconds, or whether a Raspberry Pi or similar board could achieve it.

Thank you in advance.

 

Offline Terry Bites

  • Super Contributor
  • ***
  • Posts: 2957
  • Country: gb
  • Recovering Electrical Engineer

You can stack images in Photoshop, Lightroom, etc., or with the free software Sequator.
Averaging increases the SNR.
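A toy NumPy illustration of why stacking helps, assuming the frames are already registered (names and sizes here are just illustrative): averaging N frames knocks uncorrelated noise down by roughly sqrt(N).
[code]
import numpy as np

def stack_average(frames):
    """frames: list of same-size, already-registered 2-D arrays."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Quick check: noise standard deviation drops by about sqrt(N).
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
noisy = [clean + rng.normal(0.0, 1.0, clean.shape) for _ in range(16)]
print(noisy[0].std(), stack_average(noisy).std())  # ~1.0 vs ~0.25
[/code]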


Easier with one camera. If you can't match the optics or the sensors it will be very difficult to register the images for overlay. [I may be wrong]
Even nominally identical lenses and cameras will have slightly different distortions.
https://photographylife.com/night-sky-image-stacking
https://sites.google.com/view/sequator/
 

Offline todorp

  • Regular Contributor
  • *
  • Posts: 77
  • Country: it
Are you trying to see finer details or extend dynamic range?
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5889
  • Country: us
There are methods to achieve super resolution, but there is always a penalty to pay.  The math is complex, and doing it in "real time" requires a powerful processing engine.  The basic idea is to interpolate between slightly shifted images to estimate the actual intensity at points between the physical pixels.  It can be done with a single camera if the camera is precisely and repetitively moved in a circle or other known motion a few pixels across.  The resulting resolution improves by something like the square root of the number of frames processed.  The improvement possible is limited by a number of noise processes, both in the image and in the measurement of image position.  The processed image lags by the number of frames used plus processing time, but the improvements in resolution can be stunning.  Googling "super resolution" should turn up papers on the subject, but as I said, the math is daunting.
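A very rough shift-and-add sketch of that idea in Python/NumPy, assuming the sub-pixel shifts between frames are already known (they are just illustrative parameters here; estimating them robustly is where the daunting math comes in):
[code]
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive shift-and-add super resolution.

    frames: list of HxW low-resolution arrays of the same scene.
    shifts: list of (dy, dx) sub-pixel offsets, in low-res pixels, per frame.
    scale:  integer upsampling factor of the output grid.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float64)
    for frame, (dy, dx) in zip(frames, shifts):
        # Replicate each low-res pixel onto the high-res grid...
        up = np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)
        # ...then register it by shifting by the (rounded) offset in high-res pixels.
        oy, ox = int(round(dy * scale)), int(round(dx * scale))
        acc += np.roll(up, (oy, ox), axis=(0, 1))
    # Average the registered frames; more frames -> better estimate, more lag.
    return acc / len(frames)
[/code]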

If you want more pixels for a given angular field of view, a much simpler (but still cumbersome) process can be used.  Supply each camera with a lens of roughly twice the focal length you would use to get the overall FOV you want.  Carefully align the cameras so their FOVs slightly overlap.  Create a frame memory sized for your target field of view, map each camera into the appropriate location in that memory, and then read the whole memory out to your display.  This can be done in software on appropriate processors, though I can't name any off-the-shelf solutions.  As mentioned in prior posts, this process will produce image artifacts on the join lines, which can be reduced, but not eliminated, through care in camera alignment.  It will also produce an image with 'unconventional' distortion: for example, instead of normal barrel distortion you will get four barrels.  Whether this is important depends on many factors.
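A minimal sketch of that frame-memory mapping for two side-by-side cameras, again in Python/NumPy; the overlap width and the linear cross-fade are placeholder choices, not a real calibration:
[code]
import numpy as np

def tile_two_cameras(left, right, overlap_px=8):
    """left, right: HxW frames from two cameras whose FOVs overlap by ~overlap_px columns."""
    h, w = left.shape
    out = np.zeros((h, 2 * w - overlap_px), dtype=np.float32)
    # Map each camera into its region of the larger frame memory.
    out[:, :w] = left
    out[:, w:] = right[:, overlap_px:]
    # Simple linear cross-fade across the overlap to soften the join line.
    ramp = np.linspace(0.0, 1.0, overlap_px)
    out[:, w - overlap_px:w] = (left[:, w - overlap_px:] * (1.0 - ramp)
                                + right[:, :overlap_px] * ramp)
    return out
[/code]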
 

Offline LaserSteve

  • Super Contributor
  • ***
  • Posts: 1500
  • Country: us
Place the tiny thermal cameras side by side. Synchronize the video.

Present each eye with its respective view, slightly overlapping the views in the x/horizontal plane. With proper adjustable mounting hardware (Thorlabs kMS-1) and a judicious choice of lens, your brain will do the fusion for you.
But only in the horizontal plane. The choice of lens, as mentioned above, is crucial.

However, a 9 Hz update rate, per the treaty requirement, is going to make for a really bad FPV experience.

Yes, I've done this with visible-spectrum cameras, in my case for field-sequential stereo imaging in a lab.  NTSC with a bit of high-speed switching circuitry made it a piece of cake.

We also had a Pixera still camera, which had a piezo stage to move the CCD by one or more pixels. The enhancement is fantastic, but the processing latency is long. In its time the cost/performance ratio was amazing, but today one CCD is more than sufficient.

Did I mention the headache that occurs with simple sensor fusion if the alignment is not perfect?  Did I mention that the inter-ocular distance of the cameras has to match the human inter-ocular distance to within a factor of 2?

By the time you work all of that out, buying a 240 x 320 or 640 x 480 camera is a better idea.

Steve

« Last Edit: October 10, 2022, 09:22:13 pm by LaserSteve »
"Programming is more than an important practical art. It is also a gigantic undertaking in the foundations of knowledge"

Adm. Grace Hopper
 

