Did you know that all sensor values in radiometric mode are not really RAW values straight from the ADC?
As I also wrote in your thread, the raw values sent over USB are divided by a factor of 4:
I found a simple way to check the factor of 4 between "Lepton RAW" and "Flir radiometric jpg RAW":
The tea bag simulator of the Flir SDK uses real Lepton sensor values. You can find it in the SDK file sampleframes.zip (6 MB).
With the SDK App I took a shot of the (random) tea bag frame 00051.
This gives a Flir radiometric jpg with an embedded calibrated RawThermalImage and a good temperature range of min/max = 34°C/65°C ;)
(the unrealistic min temperature is a result of bad calibration values for the simulator)
First some ImageMagick steps with the Lepton sensor values of frame 00051 (file 00051-lep) from the SDK sampleframes.zip (a small Python sketch of the same steps follows below the statistics):
("-chop 2x0+0+0 -chop 2x0+80+0" removes the 4 additional vertical lines with extra unknown information)
// convert sample frame 00051 from SDK
> convert -depth 16 -size 164x120 gray:00051-lep gray:- | convert -depth 16 -endian lsb -size 164x120 gray:- -chop 2x0+0+0 -chop 2x0+80+0 -rotate 90 00051-lep.png
// multiply with factor 4
> convert 00051-lep.png -fx "4*u" 00051-lep4x.png
// get channel statistics
> identify -verbose 00051-lep4x.png
Channel statistics:
Pixels: 19200
Gray:
min: 14220 (0.216983)
max: 20840 (0.317998)
mean: 15014.9 (0.229113)
standard deviation: 1218.95 (0.0186)
kurtosis: 5.54453
skewness: 2.30479
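If you want to double-check without ImageMagick, here is a minimal Python/numpy sketch of the same steps. It assumes the file is plain little-endian 16-bit data, as the convert command above implies; the -rotate 90 is skipped because it does not change the statistics.

import numpy as np

# read the 164x120 little-endian 16-bit Lepton sample frame from sampleframes.zip
raw = np.fromfile("00051-lep", dtype="<u2").reshape(120, 164)

# drop the 4 extra columns: 2 at x=0, then 2 at x=80 of the remaining image
# (same effect as the two -chop operations above)
img = np.delete(raw[:, 2:], [80, 81], axis=1)        # -> 120x160

# multiply by the factor 4, same as -fx "4*u"
img4 = img.astype(np.uint32) * 4

print("pixels:", img4.size)
print("min/max:", img4.min(), img4.max())
print("mean:", round(img4.mean(), 1))
print("std:", round(img4.std(), 2))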
And now compare the statistics with the "real" Flir radiometric image from the SDK app (again, a Python sketch follows below the statistics):
// extract RawThermalImage from a Flir radiometric image
> exiftool -b -RawThermalImage FLIROne-2016-01-18-10-33-07+0100.jpg > 00051exif.png
// the embedded image is postprocessed and upscaled to 320x240 by the App, so resize it back to 160x120
> convert 00051exif.png -resize 160x FlirAppRAW120x160.png
// get channel statistics
> identify -verbose FlirAppRAW120x160.png
...
Channel statistics:
Pixels: 19200
Gray:
min: 14266 (0.217685)
max: 20831 (0.317861)
mean: 15021.4 (0.229212)
standard deviation: 1221.27 (0.0186354)
kurtosis: 5.44004
skewness: 2.28451
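And a minimal Python sketch of this second part. It assumes the RawThermalImage extracted by exiftool above is a 320x240 16-bit grayscale PNG, and it uses a simple 2x2 block average instead of -resize 160x, so the min/max can differ by a few digits from the identify output above.

import numpy as np
from PIL import Image

# load the RawThermalImage extracted with exiftool (320x240, 16-bit grayscale PNG)
app_raw = np.array(Image.open("00051exif.png"), dtype=np.float64)   # shape (240, 320)

# crude downscale back to 160x120 with a 2x2 block average
# (only an approximation of ImageMagick's default -resize filter)
app_small = app_raw.reshape(120, 2, 160, 2).mean(axis=(1, 3))

print("pixels:", app_small.size)
print("min/max:", app_small.min(), app_small.max())
print("mean:", round(app_small.mean(), 1))
print("std:", round(app_small.std(), 2))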
Considering that Flir applies postprocessing steps to the original RAW values (see the patterns below), this is a great result!
A sample:
max-min=65-34=31°C
max-min=20831-14266=6565 digits
-> 1 digit = 31/6565=0.0047 Kelvin
The difference between the mean values of these two images is:
(15021.4-15014.9)*0.0047 Kelvin= 0.031 Kelvin :-+
You can also see that the single-digit ADC "resolution" of the Lepton in this range is about 0.0047*4 = 19 mK (useless because of the strong noise, see the second image below with its 4 Kelvin scale).
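The same back-of-the-envelope calculation as a few lines of Python (all numbers are simply the min/max/mean values from the two identify outputs above):

# temperature span and raw-digit span of the radiometric jpg (values from above)
t_min, t_max = 34.0, 65.0              # °C
d_min, d_max = 14266, 20831            # RAW digits

k_per_digit = (t_max - t_min) / (d_max - d_min)
print("1 digit =", round(k_per_digit, 4), "K")                            # ~0.0047 K

# difference between the two mean values above, expressed in Kelvin
print("mean diff =", round((15021.4 - 15014.9) * k_per_digit, 3), "K")    # ~0.031 K

# one Lepton LSB corresponds to 4 jpg digits (the factor 4)
print("Lepton LSB =", round(4 * k_per_digit * 1000, 1), "mK")             # ~19 mK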
Here is a real-life sample from the Flir One G2 (shot after a short warm-up time of about 2 minutes):
With the SDK app I simultaneously saved an upscaled Flir radiometric JPG and a real Lepton ThermalLinearFlux14BitImage.
Afterwards I rebuilt, with my old panorama script (see my footer), a real-size 160x120 Lepton radiometric jpg (a Flir format).
You can load these sample jpg images in Flir Tools and compare the quality.
First, an original image shot with the Flir App.
The App crops >:( the Lepton sensor to about 120x90 pixels.
Please note the artefacts/patterns!
Flir applies a nice lens distortion correction to the Lepton sensor image for best MSX overlaying ;)
(https://www.eevblog.com/forum/testgear/flir-one-thermal-imaging-camera-teardown-and-hacks/?action=dlattach;attach=182409;image)
Real Lepton sensor 160x120 (no image postprocessing, and with noise/grain because the temperature spread is only 4 Kelvin)
(https://www.eevblog.com/forum/testgear/flir-one-thermal-imaging-camera-teardown-and-hacks/?action=dlattach;attach=182411;image)