I know that the final output is upscaled 4x (from 160x120 to 640x480) via the MSX technique, which combines the visible-light image with the LWIR image, and that a 2x-upscaled (320x240) image is stored as the embedded "raw" image in the JPEG file. But at what points are these upscalings performed? I assume the 4x MSX upscaling is done in software (the actual smartphone app), since the FLIR Lepton does not contain the visible-light camera and therefore has no access to the visible-light image to perform MSX in hardware. The 2x upscaling of the LWIR-only image, however, COULD be done in hardware, but I don't know whether it is.
Does anybody here know at what point the LWIR image is 2x upscaled? If it's done in software, I know of an easy way around it: buy a copy of the FLIR One SDK and write an app that simply takes the raw LWIR image and saves it to a 16-bit grayscale TIFF file. That would make the TRULY raw data easily accessible on any Windows PC with software such as Photoshop. If the 2x upscaling is performed in the Lepton chip itself, though, there's no way around it.
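For illustration, here's a minimal Python sketch of the desktop-side half of that idea: packaging one raw 160x120 frame into a 16-bit grayscale TIFF. The acquisition would happen in the SDK app itself, so the input file "lwir_frame.raw" (one frame of little-endian 16-bit values) is just an assumed intermediate dump for this sketch, not something the SDK produces by that name:

```python
# Minimal sketch: write a raw 160x120 16-bit LWIR frame to a grayscale TIFF.
# Assumes the frame has already been pulled out of the FLIR One SDK app and
# dumped to "lwir_frame.raw" (hypothetical file); only the TIFF packaging is shown.
import numpy as np
import tifffile

WIDTH, HEIGHT = 160, 120  # native Lepton resolution, no upscaling

# One frame of little-endian unsigned 16-bit values, row-major.
raw = np.fromfile("lwir_frame.raw", dtype="<u2", count=WIDTH * HEIGHT)
frame = raw.reshape(HEIGHT, WIDTH)

# Single-channel 16-bit TIFF, readable in Photoshop, ImageJ, etc.
tifffile.imwrite("lwir_frame.tif", frame)
```

The resulting single-channel 16-bit TIFF opens directly in Photoshop or ImageJ with no further conversion.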
If there's a microcontroller in the FLIR One and the 2x upscale is performed there, it would be more difficult, but a hardware mod would be theoretically possible: desolder the microcontroller (or unplug it if it's socketed), connect it to a programmer, dump the firmware and disassemble it to ASM, rewrite the relevant parts of the assembly code, reassemble it, upload it back to the microcontroller, and then resolder the chip (or plug it back into its socket).