Author Topic: Improving thermal sensor resolution by vibrating/moving sensor while shooting  (Read 5478 times)


Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
It is possible to genuinely increase a camera's resolution by taking several images of the same subject, moving the camera slightly between each shot so that different parts of the sensor get used (each pixel overlays the scene in a slightly different way), and then effectively averaging the images together.

I used an older Seek Thermal XR to take this image of a USB network adaptor, attempting to use this type of sampling technique to increase resolution.

As you can see, it is clearly possible to read the text on the USB adaptor, shot from about 1 foot / 30cm away - which is absolutely not possible in the native resolution picture.   This kind of pixel-shift super-resolution is being built into recent cameras (Olympus springs to mind, where the sensor-shift mechanism does double duty for stabilisation and resolution enhancement), and I wonder how long it will be until we see this relatively cheap approach appear in thermal cameras?
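
For anyone who wants to experiment, the basic "shift and add" idea only takes a few lines of Python with OpenCV and NumPy. This is just a sketch, not the exact workflow I used: the file names, the per-frame offsets and the 2x factor are placeholders, and the offsets are assumed to be already known (measured by hand, or by whatever registration tool you trust).

Code: [Select]
# Shift-and-add sketch: upsample each frame, undo its known offset, then average.
# Assumes a burst of grayscale frames of a static scene; offsets are in native pixels.
import cv2
import numpy as np

SCALE = 2  # work on a grid SCALE x finer than the sensor
files   = ["frame00.png", "frame01.png", "frame02.png", "frame03.png"]
offsets = [(0.0, 0.0), (0.4, -0.7), (1.3, 0.6), (-0.8, 0.2)]  # (dx, dy) of each frame vs. frame00

acc = None
for path, (dx, dy) in zip(files, offsets):
    frame = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    # Upsample first so that sub-pixel offsets become real shifts on the fine grid.
    up = cv2.resize(frame, None, fx=SCALE, fy=SCALE, interpolation=cv2.INTER_CUBIC)
    # Translate the frame back onto the reference grid (offsets scale with SCALE).
    m = np.float32([[1, 0, -dx * SCALE], [0, 1, -dy * SCALE]])
    shifted = cv2.warpAffine(up, m, (up.shape[1], up.shape[0]))
    acc = shifted if acc is None else acc + shifted

result = acc / len(files)
cv2.imwrite("stacked.png", np.clip(result, 0, 255).astype(np.uint8))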





Native resolution image for comparison:




« Last Edit: June 02, 2020, 12:50:21 am by SilverSolder »
 
The following users thanked this post: I wanted a rude username

Offline zrq

  • Frequent Contributor
  • **
  • Posts: 278
  • Country: 00
I doubt that resolution can be improved by only averaging registered frames. While registration of shifted frames can be very useful to reduce noise and detector non-uniformity, the result is likely to be a low-noise but blurred image with no real improvement in resolution. More sophisticated algorithms are needed to recover the information hidden under the blur. Search for "multiframe super-resolution" or "video super-resolution" to find out more.
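
To put it a bit more formally: these algorithms are usually derived (the Bilateral TV-L1 entry below is a good example) from an observation model in which each captured frame y_k is a warped, blurred and downsampled copy of the unknown high-resolution scene x, roughly y_k = D B W_k x + n_k, with W_k the per-frame motion, B the optics/pixel blur, D the downsampling and n_k the noise. The high-resolution estimate is then the x minimising sum_k || D B W_k x - y_k || plus a regulariser such as bilateral total variation. Plain registration-and-averaging essentially only compensates W_k, which is why it cleans up noise but leaves most of the blur from B and D in place.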

A list of easy starting points with open-source code (and pretrained models) to copy ;)
Bilateral TV-L1: a classical algorithm derived by formulating the problem as a regularized optimization problem; it did not produce really good results in my tests. Implemented in OpenCV (no Python binding): https://docs.opencv.org/master/d7/d0a/group__superres.html https://github.com/opencv/opencv/blob/master/samples/gpu/super_resolution.cpp
PFNL & FRVSR: DL-based algorithms, implemented in TensorFlow 1.x: https://github.com/psychopa4/PFNL
EDVR: a DL-based algorithm, implemented in PyTorch, claiming SOTA performance but producing more artifacts than FRVSR in my tests; you need to hack the dcn module to make inference work on the CPU. https://github.com/open-mmlab/mmsr

As super-resolution is a hot topic in today's ML community, new algorithms are published regularly; check here for updates:
https://github.com/ChaofWang/Awesome-Super-Resolution
This repo also looks great, but I never got it working on my machine for some reason:
https://github.com/LoSealL/VideoSuperResolution
« Last Edit: June 02, 2020, 10:44:15 am by zrq »
 

Offline Bud

  • Super Contributor
  • ***
  • Posts: 6911
  • Country: ca
It is already there in higher-end FLIR cameras, called Superresolution. People have also posted thermal images on this board experimenting with image stacking/super-resolution. I recall FLIR explains in those cameras' user manuals how it works and provides guidance on how to use it.
Facebook-free life and Rigol-free shack.
 

Online Fraser

  • Super Contributor
  • ***
  • Posts: 13168
  • Country: gb
For information.....

Increasing a thermal camera's optical resolution has already been tested and proven using the following methods:

1. “Microscan” in BST-based cameras, where the chopper wheel contains four germanium prisms that shift the image passing through them slightly in the X and Y directions. This effectively creates a set of four shifted images that are then combined into one image with a higher final optical resolution than the sensor alone could achieve.

2. Piezo-electric actuator X and Y axis drive of the sensor array or lens elements. This has been tested and proven for optical resolution improvement and works well in thermal cameras. The mechanics add complexity and production cost, so this was more of a military application of the piezo-electric resolution-enhancement technique. Remember, thermal camera sensor pixels are much larger than the visible-light CCD or CMOS pixels in an SLR camera, so the required X and Y shift distance is greater.

3. “Super resolution” was used on Testo thermal cameras to help them compete in the market with cameras of higher resolution. The system was purely software-based and relied upon “hand shake” by the user. A handheld camera will always be in constant motion due to hand movement, and the software uses that movement to increase the optical resolution in the X and Y axes. While the super resolution process does work, it relies on the camera “shaking”, so if a tripod is used to support the camera, super resolution is not possible.  FLIR have now introduced their own version of super resolution. It is effectively resolution enhancement “on the cheap” and it is limited in its capabilities.


It has been known for manufacturers to confuse potential buyers by quoting a camera's resolution after the super resolution software processing rather than the actual sensor resolution. Whilst not as shady as that approach, FLIR released the FLIR One Gen 3 Pro LT, which uses the 80 x 60 pixel Lepton core rather than the 160 x 120 core found in the true Pro model; FLIR then used the super resolution technique to claim 160 x 120 resolution after processing.

Fraser
« Last Edit: June 02, 2020, 11:22:20 am by Fraser »
If I have helped you please consider a donation : https://gofund.me/c86b0a2c
 
The following users thanked this post: SilverSolder

Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
Thank you for the comprehensive reply @Fraser.   I was chuffed at how good the results were from "manual super-resolution".

This method definitely works better than just stacking and averaging images without moving the camera, especially if the sensor is very low resolution and the pixels are not uniformly sensitive.

For comparison, this is what an image looks like stacked and averaged (on a tripod); you can definitely see the performance of each individual pixel:




Movement makes an amazing difference to get really smooth results.  Maybe some kind of Arduino project to move one of these cheap Seeks would have its uses...
« Last Edit: June 02, 2020, 11:14:44 am by SilverSolder »
 

Offline Rerouter

  • Super Contributor
  • ***
  • Posts: 4694
  • Country: au
  • Question Everything... Except This Statement
Most phone cameras already make use of this, and of the fact that your hand is unstable: they break the photo down into a point cloud, stack the frames with all the varying orientations and angles unwrapped, then reproduce a single image, using further filters to reduce any statistically outlying pixel values.

The fact that these images are not, say, a perfect half pixel apart all the time means you do not always get extra resolution evenly over the entire picture; rather, you have areas where you do, plus a confidence value for the areas that are more sparsely sampled.
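
That outlier-rejection step is essentially a sigma-clipped mean over the aligned stack, the same trick astro stackers use. A rough NumPy sketch (the frames here are synthetic, purely to show the idea):

Code: [Select]
# Sigma-clipped mean over an already-aligned stack: pixels further than k standard
# deviations from the per-pixel mean are ignored, which suppresses outliers
# (hot/dead pixels, registration glitches) before averaging.
import numpy as np

def sigma_clipped_mean(stack, k=2.5):
    """stack: float array of shape (n_frames, height, width), already aligned."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1e-6              # avoid division by zero
    keep = np.abs(stack - mean) <= k * std      # per-frame, per-pixel mask
    return (stack * keep).sum(axis=0) / keep.sum(axis=0).clip(min=1)

# Synthetic example: 16 noisy copies of a ramp, with one corrupted patch.
frames = np.tile(np.linspace(0.0, 1.0, 64), (16, 64, 1))
frames += np.random.normal(0.0, 0.02, frames.shape)
frames[3, 10:20, 10:20] = 5.0                   # simulate a bad region in one frame
clean = sigma_clipped_mean(frames)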
« Last Edit: June 02, 2020, 11:20:07 am by Rerouter »
 

Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
Quote from: zrq
I doubt that resolution can be improved by only averaging registered frames. While registration of shifted frames can be very useful to reduce noise and detector non-uniformity, the result is likely to be a low-noise but blurred image with no real improvement in resolution. More sophisticated algorithms are needed to recover the information hidden under the blur. Search for "multiframe super-resolution" or "video super-resolution" to find out more.

A list of easy starting points with open-source code (and pretrained models) to copy ;)
Bilateral TV-L1: a classical algorithm derived by formulating the problem as a regularized optimization problem; it did not produce really good results in my tests. Implemented in OpenCV (no Python binding): https://docs.opencv.org/master/d7/d0a/group__superres.html https://github.com/opencv/opencv/blob/master/samples/gpu/super_resolution.cpp
PFNL & FRVSR: DL-based algorithms, implemented in TensorFlow 1.x: https://github.com/psychopa4/PFNL
EDVR: a DL-based algorithm, implemented in PyTorch, claiming SOTA performance but producing more artifacts than FRVSR in my tests; you need to hack the dcn module to make inference work on the CPU. https://github.com/open-mmlab/mmsr

As super-resolution is a hot topic in today's ML community, new algorithms are published regularly; check here for updates:
https://github.com/ChaofWang/Awesome-Super-Resolution
This repo also looks great, but I never got it working on my machine for some reason:
https://github.com/LoSealL/VideoSuperResolution

As you can see in the original post, the resolution and noise were improved considerably by averaging 16 re-aligned frames shot from (slightly) different positions.

I didn't use software to register the frames; I aligned them by hand, because none of my standard photo editing/stacking software was able to auto-align the original images, presumably due to the low resolution and the lack of hard edges to work with.  I'll take a look at your links; maybe there is something better out there!
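
One thing that might be worth trying before the heavier ML options: OpenCV's ECC registration (findTransformECC) matches image intensities directly rather than feature points, so it tends to cope better with soft, low-contrast frames than a typical photo aligner. A rough translation-only sketch (file names are placeholders, and some OpenCV versions also want an explicit gaussFiltSize argument on the ECC call):

Code: [Select]
# ECC (enhanced correlation coefficient) registration of a small thermal stack,
# followed by a plain average. Translation-only motion model for simplicity.
import cv2
import numpy as np

paths = ["thermal00.png", "thermal01.png", "thermal02.png"]
frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0 for p in paths]

ref = frames[0]
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)

aligned = [ref]
for img in frames[1:]:
    warp = np.eye(2, 3, dtype=np.float32)            # initial guess: no shift
    _, warp = cv2.findTransformECC(ref, img, warp, cv2.MOTION_TRANSLATION, criteria)
    # Map the frame back onto the reference using the inverse of the found warp.
    aligned.append(cv2.warpAffine(img, warp, (ref.shape[1], ref.shape[0]),
                                  flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP))

average = sum(aligned) / len(aligned)
cv2.imwrite("aligned_average.png", (np.clip(average, 0.0, 1.0) * 255).astype(np.uint8))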
 

Offline Rerouter

  • Super Contributor
  • ***
  • Posts: 4694
  • Country: au
  • Question Everything... Except This Statement
For a fast stitcher, you're probably looking for Microsoft ICE; it's pretty good at aligning and extracting some extra detail, but it does not really let you tweak any of the dials.
 

Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
Quote from: Rerouter
Most phone cameras already make use of this, and of the fact that your hand is unstable: they break the photo down into a point cloud, stack the frames with all the varying orientations and angles unwrapped, then reproduce a single image, using further filters to reduce any statistically outlying pixel values.

The fact that these images are not, say, a perfect half pixel apart all the time means you do not always get extra resolution evenly over the entire picture; rather, you have areas where you do, plus a confidence value for the areas that are more sparsely sampled.

Sounds cool!   8)  This is probably later-generation camera software?  (I use Open Camera; I'm not sure it does this.)

The concept still works even if the images are many pixels apart.  It seems the "secret sauce" is that different pixels (photoreceptors) on the sensor are used to view the same point of the scene in each image of the series.

The whole problem is reduced to "just" aligning and averaging the sequence - which is not a trivial problem at all, depending on the subject!  -  Maybe on a phone, you can be helped by accelerometer and gyroscope readings?
 

Offline Rerouter

  • Super Contributor
  • ***
  • Posts: 4694
  • Country: au
  • Question Everything... Except This Statement
In a phone it's more a matter of breaking down the IMU output: the phone knows roughly how it has moved in 3D space (say, within cm to mm over a few seconds with no massive accelerations), so it can use that information as a starting guess for matching, which makes aligning the images an easier job.
 

Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
Quote from: Rerouter
For a fast stitcher, you're probably looking for Microsoft ICE; it's pretty good at aligning and extracting some extra detail, but it does not really let you tweak any of the dials.

I just tested ICE, and it seems a competent panorama tool, but it did not average the images the way we need it to.  So far, Photoshop and manual alignment give the best results...
 

Offline Ultrapurple

  • Super Contributor
  • ***
  • Posts: 1027
  • Country: gb
  • Just zis guy, you know?
    • Therm-App Users on Flickr
Microsoft Image Composite Editor does not average or otherwise enhance resolution. You'll have to do that in a different package (eg AutoStakkert!). I use Video Enhancer 2 for superresolution if I'm using a camera that's not supported by ThermViewer (which has superresolution built in and it works wonderfully well).

It is possible to make exceptionally good images with patience, 'jiggle' superresolution and ICE - click for access to full size versions of these sample images (via the underlined down-arrow on the Flickr page you'll be taken to):



(original is 6241x2886)



(original is 3400x2634)



(original is 9176x4362, that's 40 megapixels)



(original is 2990x1817)


And here's an early video of mine that used a 640x480 sensor and post-processing resolution enhancement to 1280x960 (the manufacturer was sufficiently impressed to use the video as a marketing tool).
https://www.flickr.com/photos/ultrapurple/37626498590/
« Last Edit: June 04, 2020, 12:19:32 pm by Ultrapurple »
Rubber bands bridge the gap between WD40 and duct tape.
 

Offline zrq

  • Frequent Contributor
  • **
  • Posts: 278
  • Country: 00
To make panorama photos I prefer Hugin, which is open source and multi-platform. Of course, none of these panorama tools can improve the resolution beyond the Nyquist limit, as they simply stitch frames together and apply some affine transform.
I doubt the superresolution in ThermViewer is really doing super-resolution; I think it's just bicubic interpolation.
« Last Edit: June 21, 2020, 02:34:55 pm by zrq »
 

Offline agiorgitis

  • Regular Contributor
  • *
  • Posts: 61
  • Country: 00
Flir has a similar approach, called UltraMax, where it stacks many photos and then the PC software combines them together for a higher resolution image.
The manual clearly says that in order for this to work properly, you must hold the camera in hand and not mount it on a tripod.
It works from the minor movements the hand makes while taking a thermal photo.

It is indeed working great  :-+
And then if you pass the thermal image through waifu2x.udp.jp it gets even better (though you lose the radiometric data, so it's useful only for image clarity).
 

Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00

I guess the benefits of using a tripod are still worth having...  you don't have to move the camera very far to displace the image by a few pixels!

 

Offline Ultrapurple

  • Super Contributor
  • ***
  • Posts: 1027
  • Country: gb
  • Just zis guy, you know?
    • Therm-App Users on Flickr
@zrq As far as I know ThermViewer does rather more than just interpolation. It definitely provides better pictures from a shaky hand than it does from a locked-off tripod and I don't believe Jinhua would have written an interpolation routine that used a G sensor to detect wobble and thus turn the interpolation on and off!

When I have a few minutes I'll do a comparison between hand-held and tripod shots with superresolution turned on and off.
Rubber bands bridge the gap between WD40 and duct tape.
 

Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
Quote from: Ultrapurple
@zrq As far as I know ThermViewer does rather more than just interpolation. It definitely provides better pictures from a shaky hand than it does from a locked-off tripod and I don't believe Jinhua would have written an interpolation routine that used a G sensor to detect wobble and thus turn the interpolation on and off!

When I have a few minutes I'll do a comparison between hand-held and tripod shots with superresolution turned on and off.

That would be interesting.  -  These techniques make more of a difference the lower the resolution of the camera.  E.g. imagine a 1-pixel camera that you move around to shoot a subject...   it would take a while, but in theory you could reach any resolution you have the patience for!  -  not for action shots, obviously...  :D
 

Online Fraser

  • Super Contributor
  • ***
  • Posts: 13168
  • Country: gb
Scanning thermal cameras were capable of high resolution that was only limited by the mirror movement control and sampling rate of the detector. When an adequate sampling rate was not available, multiple detector pixels were employed and a stripe scan was possible. It was in such systems that piezo electric servos were also used to move the linear row of pixels. Staggered and multiple pixel rows were also used. There was a lot of research into increasing the resolution of thermal imaging cameras in the period before large pixel count staring arrays were common. One design I saw looked very complex and whilst still a scanning type thermal camera, it used multiple separate sets of linear stripe detector elements. The optical path to the different detector arrays was complex. The output of the detectors was combined in the image processing electronics and a high quality image was produced.

Before we had the benefit of staring FPAs, both cooled and uncooled types, the R&D chaps were always trying to find new ways to increase thermal camera performance for military applications. The military needed such technology and had the required budget. Some designs offered very high performance but were both large and extraordinarily expensive to manufacture. Most were classified technology and were destroyed at the end of their operational life. It was amazing what could be achieved with a single cooled thermal detector pixel if coupled to the right optics and image processing electronics. When multi-pixel stripes, staggered stripes and small arrays were employed, performance increased even further. Then the large cooled staring arrays were manufactured, as found in the AGEMA THV550 that I own, and life got better and better for the R&D teams working for the military. Whilst an FPA cooler was still required, the image information coming from the QVGA staring FPA was amazing and could be further enhanced via the optical block if required. What was lost, though, was the drive to invent and cost-reduce clever optical systems that created high-resolution images using relatively low pixel count detector arrays. The new FPAs were so much easier to employ, and ultimately led to a lower-cost camera solution.

One of the reasons why I always wanted the AGEMA Thermovision THV550 in my collection was because it was a significant marker in thermal imaging camera development. It was compact, self-contained and used a Stirling-cooled QVGA staring array coupled to compact optics. Compared to the likes of the Thermovision THV470 shoulder camera it was amazing miniaturisation. I love the THV550 and it led to me having a huge soft spot for all the AGEMA/FLIR PM series cameras that followed using the same basic case design. I have bought rather a lot of those PM series cameras over the years  ;D

Fraser
« Last Edit: June 05, 2020, 02:12:40 pm by Fraser »
If I have helped you please consider a donation : https://gofund.me/c86b0a2c
 
The following users thanked this post: SilverSolder, I wanted a rude username

Offline railrun

  • Regular Contributor
  • *
  • Posts: 113
 

Online Fraser

  • Super Contributor
  • ***
  • Posts: 13168
  • Country: gb
Mechanical scanning thermal camera using parts of an IR thermometer. Quite an elegant solution that addresses the issue of IFOV.

https://circuitdigest.com/project/arduino-thermal-imaging-camera

The designer is a fellow Forum member  :-+

https://www.eevblog.com/forum/projects/diy-scanning-thermal-camera/

Fraser
« Last Edit: June 05, 2020, 01:48:27 pm by Fraser »
If I have helped you please consider a donation : https://gofund.me/c86b0a2c
 

Online Fraser

  • Super Contributor
  • ***
  • Posts: 13168
  • Country: gb
For anyone interested in the design of thermal imaging cameras in the period before uncooled FPAs were common, I can recommend a book for you :)

It is called “Thermal Imaging Systems” and the author is J. M. Lloyd. The book dates back to 1975 and covers a broad spectrum of thermal-imaging-related topics. It details the various scanning camera designs and delves into the issues with thermal imaging systems of the period. A great read for those who wish to learn about early thermal imaging systems and how the challenge of creating decent images with only a single detector pixel was met. We can all learn from history, and this is no exception.

https://www.amazon.com/Thermal-Imaging-Systems-Optical-Engineering/dp/0306308487/ref=sr_1_1?dchild=1&keywords=Thermal+imaging+systems+lloyd&qid=1591366692&sr=8-1

I liked this book so much that I bought three of them in case they start to fall apart! I bought all of mine from the USA for just a few dollars. Do not overpay: Amazon have used ones listed from around $8, but new prints cost a small fortune! This is a well-respected reference book that anyone serious about understanding thermal camera design should seriously consider adding to their bookshelf  :-+

Fraser
« Last Edit: June 05, 2020, 02:30:22 pm by Fraser »
If I have helped you please consider a donation : https://gofund.me/c86b0a2c
 

Offline zrq

  • Frequent Contributor
  • **
  • Posts: 278
  • Country: 00
Quote from: Ultrapurple
@zrq As far as I know ThermViewer does rather more than just interpolation. It definitely provides better pictures from a shaky hand than it does from a locked-off tripod and I don't believe Jinhua would have written an interpolation routine that used a G sensor to detect wobble and thus turn the interpolation on and off!

When I have a few minutes I'll do a comparison between hand-held and tripod shots with superresolution turned on and off.

I have strong evidence, from looking at the code itself, that ThermViewer is only doing bicubic interpolation rather than super-resolution.

Another interesting superresolution algorithm by Google:
https://dl.acm.org/doi/10.1145/3306346.3323024
https://github.com/kunzmi/ImageStackAlignator
 

Offline SilverSolder (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
Quote from: zrq
Quote from: Ultrapurple
@zrq As far as I know ThermViewer does rather more than just interpolation. It definitely provides better pictures from a shaky hand than it does from a locked-off tripod and I don't believe Jinhua would have written an interpolation routine that used a G sensor to detect wobble and thus turn the interpolation on and off!

When I have a few minutes I'll do a comparison between hand-held and tripod shots with superresolution turned on and off.

I have strong evidence, from looking at the code itself, that ThermViewer is only doing bicubic interpolation rather than super-resolution.

Another interesting superresolution algorithm by Google:
https://dl.acm.org/doi/10.1145/3306346.3323024
https://github.com/kunzmi/ImageStackAlignator

These ideas basically boil down to taking an image stack really quickly -  but if you have plenty of time, you can just "move and snap" with whatever modest camera you already own.

The real issue is finding good focus stacking software that can align thermal images accurately and take an average.  Photoshop can align and average a stack as shown in the original post, but its alignment algorithm is not up to the task with soft, noisy thermal images (but would probably work fine with better-than-lowest-end cameras).   It is far too much work, basically...   



 

Offline Vipitis

  • Frequent Contributor
  • **
  • Posts: 867
  • Country: de
  • aspiring thermal photography enthusiast
For me Zerene Stacker has done the alignment and focus stacking really well. If you want to stack for resolution and noise, maybe take the aligned output and drop it into astro software.
The method linked above is meant for RGB images and wouldn't translate that well into monochrome, but the approach is valid.
 

Offline Ultrapurple

  • Super Contributor
  • ***
  • Posts: 1027
  • Country: gb
  • Just zis guy, you know?
    • Therm-App Users on Flickr
On the Therm-App Pro with ThermViewer:

I set up a Raspberry Pi 3 B as my test target.

I set my Therm-App Pro (640x480 native) going and allowed it time to reach equilibrium.

I then made images

- from a tripod, native 640 x 480
 
- from a tripod, with ThermViewer superresolution to 1280 x 960

- hand-held, with Thermviewer superresolution to 1280 x 960.

The camera was not moved between tripod shots and should be quite stable; the hand-held shot was made from as near as possible the same place. On close inspection I am not convinced the hand-held shot was in perfect focus.

I also prepared some further images from the original tripod 640 x 480 image:

- pixel doubled to 1280 x 960 (using PSP 10)

- bicubic interpolation to 1280 x 960 (using PSP 10).

These images are available from the bottom of this post.
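
(For reference, the two baseline upscales are trivial to reproduce in Python/OpenCV if you don't have PSP to hand; nearest-neighbour resize is the pixel doubling and INTER_CUBIC is the bicubic. The file name is a placeholder.)

Code: [Select]
# The two baseline upscales: nearest-neighbour = pixel doubling, INTER_CUBIC = bicubic.
import cv2

native = cv2.imread("tripod_640x480.png", cv2.IMREAD_GRAYSCALE)
doubled = cv2.resize(native, None, fx=2, fy=2, interpolation=cv2.INTER_NEAREST)
bicubic = cv2.resize(native, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
cv2.imwrite("pixel_doubled_1280x960.png", doubled)
cv2.imwrite("bicubic_1280x960.png", bicubic)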

This summary image shows the results, made by cropping a similar area of the board.



- the pixel doubled image is very blocky

- the bicubic interpolated image is much less blocky and looks reasonable

- the tripod shot with superresolution enabled is not as good as the bicubic resize

- I am not convinced one way or the other by the hand-held superresolution image, though my feeling is that it is probably a little better than the tripod superresolution shot.


Overall, I believe the superresolution process appears to work better when hand-held rather than locked-off on a tripod, but the results are closer than I expected.

Perhaps Jinhua might see this one day and comment whether he used true superresolution or some other kind of interpolation.

Separately, I have had very good results with putting native resolution videos through Video Enhancer 2's superresolution algorithm. When combined with tweaks to gamma, gain and so on, the resulting video can be very good indeed.

Rubber bands bridge the gap between WD40 and duct tape.
 

