Further trials of multi-frame/video super-resolution for thermal images.
Original video taken with an Xtherm T3s (384x288), processed with the original code and parameters of [1] (www.cse.cuhk.edu.hk/~leojia/projects/mfsr/index.html). This algorithm does not use machine learning.
Certainly such methods can help recover some details, but it took ~12 hours for my computer to finish the reconstruction. That said, >7 of those hours were spent on optical flow estimation, which should run much faster on a GPU.
[1] Ma, Ziyang, et al. "Handling motion blur in multi-frame super-resolution." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
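For intuition, the core idea behind this family of methods can be sketched as a toy shift-and-add reconstruction: place each low-res frame onto a high-res grid at its sub-pixel offset and average the samples. This is only a minimal illustration with known shifts, not the actual method of [1], which estimates dense optical flow and explicitly models motion blur.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Toy multi-frame super-resolution by shift-and-add.

    Each low-res frame is placed on a high-res grid at its known
    sub-pixel offset (dy, dx in [0, 1) pixels) and samples are
    averaged. The real algorithm in [1] estimates dense optical
    flow per pixel instead of assuming global shifts.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float64)
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * scale))  # offset on the high-res grid
        ox = int(round(dx * scale))
        acc[oy::scale, ox::scale] += frame
        cnt[oy::scale, ox::scale] += 1.0
    # average overlapping samples; pixels no frame covered stay 0
    return acc / np.maximum(cnt, 1.0)

# Example: two 2x2 frames sampled from a 4x4 scene at offsets
# (0, 0) and (0.5, 0.5) recover the scene at the covered positions.
hr = np.arange(16.0).reshape(4, 4)
frames = [hr[0::2, 0::2], hr[1::2, 1::2]]
sr = shift_and_add_sr(frames, [(0.0, 0.0), (0.5, 0.5)], scale=2)
```

The expensive part in practice is exactly what the timing above shows: estimating the per-pixel motion (optical flow) that replaces the known shifts in this sketch.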
Attached are one of the original video frames and the processed (3rd-iteration) image.
Edit: Just realized the input frame size is huge... It seems the Xtherm app saves the videos at 1408x1068 resolution for some reason or stupidity. The reconstruction should be much faster if we feed correctly sized 384x288 frames to the program.
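If the app can't be made to save at native resolution, the oversized frames can be mapped back down before reconstruction. A minimal nearest-neighbor sketch, assuming the 1408x1068 frames are just an upscale of the 384x288 sensor output (so subsampling should lose little real information):

```python
import numpy as np

def nn_resize(frame, out_h, out_w):
    """Nearest-neighbor resize of a 2D array to (out_h, out_w)."""
    in_h, in_w = frame.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row per output row
    cols = np.arange(out_w) * in_w // out_w  # source col per output col
    return frame[rows[:, None], cols]

# Example: map an oversized app frame back to the sensor resolution.
big = np.zeros((1068, 1408))
small = nn_resize(big, 288, 384)
```

For whole videos, ffmpeg can do the same in one pass, e.g. `ffmpeg -i in.mp4 -vf scale=384:288 out.mp4`.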
Second Edit: Tried the thermviewer app to capture the video, and it does save correctly at 384x288; however, the result looks horrible after reconstruction with the parameters in the original code. I'll try playing with the parameters when I get time.