-
#675 Reply
Posted by
datsthat
on 04 Nov, 2014 20:37
-
Seek Thermal has been following this thread with great interest. We would like to be as transparent as possible, realizing that competition may use this as ammunition, but we believe that in the end we will be helped far more than hurt by an open and honest exchange.
Seek Thermal Inc. has been built from the ground up to bring affordable IR sensors to the commercial market.
We greatly appreciate the professional attitude and creative troubleshooting your collaborators have demonstrated. We are actively reviewing our product to confirm your findings. Identifying these issues early in our production cycle gives us a good opportunity to implement improvements when appropriate. With the low cost of our camera, some compromises need to be made between performance and cost. We will be looking for cost-effective improvements to address some of the issues you have identified.
Epoxy invasion. The good news is that our lens attachment process is fully automated. Thus the process ‘should’ be well controlled and any corrective action should be effective with low variability.
We image test every detector visually before shipment, so the worst units will be screened out. Our experience is that anything under the shutter will be almost perfectly removed by a Flat Field Calibration.
Thermal Gradient over time: We are actively investigating possible improvements to this issue. No resolution or definite direction yet. Note that for 'relative' thermography, where the spot is fixed in the center of the display, we expect thermography to retain its 'relative' accuracy.
Dark Pixels. No great mystery here. Every 15th pixel is intentionally blanked to avoid a potential patent infringement. Seek has an updated design for future products that eliminates the need for this measure. With the effective blur length of a 12 micron pixel resolving 8-13 micron radiation, the loss of single isolated pixels does not (in itself) degrade image quality.
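For illustration, isolated blanked pixels like these can be filled by simple neighbour interpolation in post-processing. A hypothetical numpy sketch; the flattened every-15th-pixel layout is my assumption for the example, not Seek's documented pattern:

```python
import numpy as np

def fill_blanked_pixels(frame, step=15):
    """Fill intentionally blanked pixels with the mean of their
    horizontal neighbours.

    Hypothetical layout: every `step`-th pixel of the flattened frame
    is blanked. The real sensor layout may well differ.
    """
    flat = frame.astype(np.float64).ravel()
    idx = np.arange(0, flat.size, step)                  # blanked positions
    left = flat[np.clip(idx - 1, 0, flat.size - 1)]
    right = flat[np.clip(idx + 1, 0, flat.size - 1)]
    flat[idx] = (left + right) / 2.0                     # neighbour mean
    return flat.reshape(frame.shape)

# Toy 6x6 frame holding a ramp; pretend every 5th pixel is blanked
frame = np.arange(36, dtype=np.uint16).reshape(6, 6)
filled = fill_blanked_pixels(frame, step=5)
```

Because a single isolated dropout sits well below the optical blur, this kind of interpolation is visually lossless, which matches Seek's point above.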
Thank you again for your interest in our product, we look forward to continuing our dialogue with the community.
Best,
The Seek Thermal Team
When will you release the update that resolves the gradient issue? Thanks
-
-
In our first post we thanked the collaborators in this forum for their professional and helpful comments and suggestions. Your response since has been even more impressive. We are grateful.
Thermal Gradient: We have been able to reproduce the thermal gradient effect that several people have reported. We are now working on a software solution and will incorporate it into an app update within the next month.
One goal for the Seek camera has been ease of operation, since we are positioning it as the first general consumer thermal camera. Thus, while some have suggested a user-triggered 'Secondary' calibration, that is an extra step that could require a significant amount of user education for nonprofessionals, and lead to frustration when the gradient returns. In that spirit we are focused on a fully automatic compensation solution in the upcoming application update.
Thanks again,
Seek Technical Team
-
-
----------- NOV 3 --------------------
I hope some of the seek engineers are still following this thread, because I have a pretty good question for them...
Why is there a ghost image that slowly reappears strongest right before a flat field event? It doesn't have to be an intentional image (like holding the shutter open during a flat field to see it), I notice it creeps back in after a calibration, no matter what it is. Sometimes the flatfield image shows up as blocky hotspots, and it gets hotter right before a fresh flat. Whatever the sensor looked like during the calibration, that image slowly appears right before a new flat field. Even fixed pattern noise shows up. What...is going on?!
I just want to know the math involved in how you subtract the flat from each frame. I don't think it's a trade secret or anything, but it's clearly some kind of bug. I know sometimes I get 5 frames on a flatfield, sometimes it's only one frame. I tested this while waving the camera with the shutter forced open.
-------------- END POST. -----------
Please confirm that this only occurs when you interfere with the shutter?
If you interfere with the shutter you can confuse the temporal drift algorithm.
Thanks,
Seek Technical Team
-
-
Thanks for the continuing replies, Seek Thermal! Just for sanity's sake, can you give more info as to the cause of the gradient issue?
Without giving away information that you can't, obviously. I know a few members here would greatly appreciate knowing the real cause.
I know one in particular that really put in some work on the problem!
Thanks again Seek Thermal, Keep heading in the right direction !!
-
#679 Reply
Posted by
miguelvp
on 04 Nov, 2014 22:44
-
----------- NOV 3 --------------------
I hope some of the seek engineers are still following this thread, because I have a pretty good question for them...
Why is there a ghost image that slowly reappears strongest right before a flat field event? It doesn't have to be an intentional image (like holding the shutter open during a flat field to see it), I notice it creeps back in after a calibration, no matter what it is. Sometimes the flatfield image shows up as blocky hotspots, and it gets hotter right before a fresh flat. Whatever the sensor looked like during the calibration, that image slowly appears right before a new flat field. Even fixed pattern noise shows up. What...is going on?!
I just want to know the math involved in how you subtract the flat from each frame. I don't think its a trade secret or anything, but its clearly some kind of bug. I know sometimes I get 5 frames on a flatfield, sometimes it's only one frame. I tested this while waving the camera with the shutter forced open.
-------------- END POST. -----------
Please confirm that this only occurs when you interfere with the shutter?
If you interfere with the shutter you can confuse the temporal drift algorithm.
Thanks,
Seek Technical Team
I think the user reporting that was messing with the shutter; in that case the camera picks up the scene as the reference image, so if you move it slightly or a lot you'll see the ghost image.
So don't interfere with the shutter and then complain it's not working.
Say I force the shutter open and point the camera at an object that generates heat: if after the calibration I point it at a flat field, it will subtract the first scene from the flat field. So it's doing what it's supposed to be doing.
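The ghost mechanism described here is easy to reproduce numerically. A toy sketch; the +800 display offset is borrowed from cynfab's capture script posted later in this thread, and the frame values are invented:

```python
import numpy as np

OFFSET = 800  # display offset, as in cynfab's script

def flat_field_correct(frame, reference):
    """Subtract the stored reference frame from the live frame."""
    return frame.astype(np.int32) - reference.astype(np.int32) + OFFSET

# "Reference" captured with the shutter held open: it contains a hot spot
reference = np.full((4, 4), 5000, dtype=np.uint16)
reference[1, 1] = 5200            # hot object baked into the reference

# Later the camera views a genuinely uniform scene...
flat_scene = np.full((4, 4), 5000, dtype=np.uint16)
corrected = flat_field_correct(flat_scene, reference)
# ...and the hot spot reappears inverted: it reads colder than the background
```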
-
#680 Reply
Posted by
Fraser
on 04 Nov, 2014 22:46
-
Great to see Seek Thermal continuing to engage via this forum. Top marks.
I know the SEEK is not open source, but from what I have seen in this thread recently it is already leading to some excellent software development work that will likely benefit all.
Very much looking forward to receiving my SEEK camera.
I second the request to know just a little more about the cause of the temperature gradient as some owners, me included, might wish to tackle it through a hardware modification.
I am fine with a PM if it is not the sort of thing you want disclosed on a very public forum.
Aurora
-
#681 Reply
Posted by
miguelvp
on 04 Nov, 2014 22:52
-
My request to the Seek Thermal team is if there is a timeline for the SDK to be available.
http://www.thermal.com/developers.html
Also, maybe some details on what the SDK supports and what the aim is. Skinning only, or actual access to the device?
Thanks.
-
#682 Reply
Posted by
callipso
on 04 Nov, 2014 22:55
-
Thus, while some have suggested a user triggered 'Secondary’ calibration, that is an extra step that could require a significant amount of user education for nonprofessionals, and lead to frustration when the gradient returns.
With all due respect, this is what makes today's computer tech unusable. With UI being replaced by UX, settings which would "get in the way of designing a minimalistic interface, or might confuse some slower users" simply get removed. Nothing can really be customized; all configuration is left to the engineers, or worse the marketing people, and users have to stick it out...
Why not make two flavours of the program (or APP, as is popular to say today) or maybe include a submenu (with a warning) for the "pros"?
Censorship is not allowing a man to have a steak because a baby can't chew it.
--Mark Twain
(I run Linux on my computers and have long thought this wouldn't get to me, and then Gnome 3 came out and with each and every major update more settings went missing...)
-
-
Thus, while some have suggested a user triggered 'Secondary’ calibration, that is an extra step that could require a significant amount of user education for nonprofessionals, and lead to frustration when the gradient returns.
With all due respect this is what makes today's computer tech unusable. With UI being replaced by UX and settings which would "get in the way of either designing a minimalistic interface or maybe could confuse some slower users" simply get removed, nothing can be really customized, all configuration is left to the engineers or worse the marketing people and users have to stick it out...
Why not make two flavours of the program (or APP, as is popular to say today) or maybe include a submenu (with a warning) for the "pros"?
Censorship is not allowing a man to have a steak because a baby can't chew it.
--Mark Twain
(I run Linux on my computers and have long thought this wouldn't get to me, and then Gnome 3 came out and with each and every major update more settings went missing...)
It's not uncommon for software to have an "advanced" mode for additional functions that might confuse stupid people
-
#684 Reply
Posted by
callipso
on 04 Nov, 2014 23:17
-
Thus, while some have suggested a user triggered 'Secondary’ calibration, that is an extra step that could require a significant amount of user education for nonprofessionals, and lead to frustration when the gradient returns.
With all due respect this is what makes today's computer tech unusable. With UI being replaced by UX and settings which would "get in the way of either designing a minimalistic interface or maybe could confuse some slower users" simply get removed, nothing can be really customized, all configuration is left to the engineers or worse the marketing people and users have to stick it out...
Why not make two flavours of the program (or APP, as is popular to say today) or maybe include a submenu (with a warning) for the "pros"?
Censorship is not allowing a man to have a steak because a baby can't chew it.
--Mark Twain
(I run Linux on my computers and have long thought this wouldn't get to me, and then Gnome 3 came out and with each and every major update more settings went missing...)
It's not uncommon for software to have an "advanced" mode for additional functions that might confuse stupid people
I've seen this twice in my life, neither being mainstream consumer software. I often see stuff you don't use frequently buried somewhere in a menu, but the simpleton/proper-user approach remains rare to me. The trend towards the simpleton-only approach is alarming.
-
#685 Reply
Posted by
Fraser
on 04 Nov, 2014 23:19
-
Awww Mike, you are being unusually harsh .....stupid people indeed !
There is a time and a place for simplified menu structures, or even no menus at all. Fire fighting thermal cameras are an example of such....just point and view. This can be very good if the automatic functionality is well formed and effective in most situations. I do agree that too much automatic control and not enough manual override is limiting with a thermal camera though.
Digital camera manufacturers got around the issue easily with auto modes and a manual mode for the more experienced photographer. I am not saying auto mode is for stupid people though.... it is for people who just want a pretty picture with minimum hassle
Aurora
-
#686 Reply
Posted by
Rasz
on 04 Nov, 2014 23:24
-
It's not uncommon for software to have an "advanced" mode for additional functions that might confuse stupid people
no, it's INDUSTRY STANDARD to have a separate advanced/manual menu; every frickin point-and-shoot digital camera on the market has some kind of manual menu. It sits unused on 99.9% of cameras because, like you said, average potato people are too stupid to use it, but it's there nonetheless.
Software (OSes mainly? maybe Apple software in general lately) seems to be the exception, constantly dumbing down the UI, catering to the lowest common denominator and making it less usable in the process.
-
#687 Reply
Posted by
callipso
on 04 Nov, 2014 23:40
-
It's not uncommon for software to have an "advanced" mode for additional functions that might confuse stupid people
no, it's INDUSTRY STANDARD to have a separate advanced/manual menu; every frickin point-and-shoot digital camera on the market has some kind of manual menu. It sits unused on 99.9% of cameras because, like you said, average potato people are too stupid to use it, but it's there nonetheless.
Software (OSes mainly? maybe Apple software in general lately) seems to be the exception, constantly dumbing down the UI, catering to the lowest common denominator and making it less usable in the process.
What I meant was that not much *ware has the average-Joe/pro MODES, not menus (with cameras being a third thing, which I had forgotten about). Also, I stand behind my statement that most consumer-oriented stuff drops options. Have you tried Gnome 3? Messed up, man.
Also, sorry for straying slightly from the main topic of this thread, but it's past midnight here and I haven't slept for over 20 hours, so there...
-
-
Seek thermal team,
What I was referring to was that a ghost image of a previous flat field calibration persists into the scene, even after a new flat field. If you are telling me this is a temporal drift algorithm, does it factor in previous calibration frames? This would explain the ghosting I see on fresh calibrations.
To further explain, I am aware of the inverse image you get from subtracting a scene with warm objects. I'm actually trying to say the inverse areas persist into the next set of calibrated frames.
-
#689 Reply
Posted by
Fry-kun
on 05 Nov, 2014 02:21
-
Did anyone make a linux driver and/or capture app yet?
-
#690 Reply
Posted by
cynfab
on 05 Nov, 2014 03:09
-
Funny you should ask: I've been poking at this for a couple of days. I've written a Python program which uses PyUSB to capture calibration and image frames from the Seek imager.
# You will need to have python 2.7 (3+ may work)
# and PyUSB 1.0
# and PIL 1.1.6 or better
# and numpy
# and scipy
# and ImageMagick
#
# Many thanks to the folks at eevblog, especially (in no particular order)
# miguelvp, marshallh, mikeselectricstuff, sgstair and many others
# for the inspiration to figure this out
#
# This is not a finished product and you can use it if you like. Don't be
# surprised if there are bugs as I am NOT a programmer..... ;>))

import usb.core
import usb.util
import Image
import numpy
from scipy.misc import toimage

# find our Seek Thermal device 289d:0010
dev = usb.core.find(idVendor=0x289d, idProduct=0x0010)

# was it found?
if dev is None:
    raise ValueError('Device not found')

# set the active configuration. With no arguments, the first
# configuration will be the active one
dev.set_configuration()

# get an endpoint instance
cfg = dev.get_active_configuration()
intf = cfg[(0, 0)]

ep = usb.util.find_descriptor(
    intf,
    # match the first OUT endpoint
    custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_OUT)
assert ep is not None

# Deinit the device
msg = '\x00\x00'
assert dev.ctrl_transfer(0x41, 0x3C, 0, 0, msg) == len(msg)
assert dev.ctrl_transfer(0x41, 0x3C, 0, 0, msg) == len(msg)
assert dev.ctrl_transfer(0x41, 0x3C, 0, 0, msg) == len(msg)

# Setup device
assert dev.ctrl_transfer(0x41, 0x54, 0, 0, 0x01)

# Some day we will figure out what all this init stuff is and
# what the returned values mean.
msg = '\x00\x00'
assert dev.ctrl_transfer(0x41, 0x3C, 0, 0, msg) == len(msg)
ret1 = dev.ctrl_transfer(0xC1, 0x4E, 0, 0, 4)
ret2 = dev.ctrl_transfer(0xC1, 0x36, 0, 0, 12)

msg = '\x20\x00\x30\x00\x00\x00'
assert dev.ctrl_transfer(0x41, 0x56, 0, 0, msg) == len(msg)
ret3 = dev.ctrl_transfer(0xC1, 0x58, 0, 0, 0x40)

msg = '\x20\x00\x50\x00\x00\x00'
assert dev.ctrl_transfer(0x41, 0x56, 0, 0, msg) == len(msg)
ret4 = dev.ctrl_transfer(0xC1, 0x58, 0, 0, 0x40)

msg = '\x0C\x00\x70\x00\x00\x00'
assert dev.ctrl_transfer(0x41, 0x56, 0, 0, msg) == len(msg)
ret5 = dev.ctrl_transfer(0xC1, 0x58, 0, 0, 0x18)

msg = '\x06\x00\x08\x00\x00\x00'
assert dev.ctrl_transfer(0x41, 0x56, 0, 0, msg) == len(msg)
ret6 = dev.ctrl_transfer(0xC1, 0x58, 0, 0, 0x0C)

msg = '\x08\x00'
assert dev.ctrl_transfer(0x41, 0x3E, 0, 0, msg) == len(msg)
ret7 = dev.ctrl_transfer(0xC1, 0x3D, 0, 0, 2)

msg = '\x08\x00'
assert dev.ctrl_transfer(0x41, 0x3E, 0, 0, msg) == len(msg)
msg = '\x01\x00'
assert dev.ctrl_transfer(0x41, 0x3C, 0, 0, msg) == len(msg)
ret8 = dev.ctrl_transfer(0xC1, 0x3D, 0, 0, 2)

imarrF = None  # no image frame seen yet
x = 0
while x < 5:
    # Send read frame request
    msg = '\xC0\x7E\x00\x00'
    assert dev.ctrl_transfer(0x41, 0x53, 0, 0, msg) == len(msg)
    ret9 = dev.read(0x81, 0x3F60, 1000)
    ret9 += dev.read(0x81, 0x3F60, 1000)
    ret9 += dev.read(0x81, 0x3F60, 1000)
    ret9 += dev.read(0x81, 0x3F60, 1000)
    # Let's see what type of frame it is
    # 1 is a Normal frame, 3 is a Calibration frame
    # 6 may be a pre-calibration frame
    # 5, 10 other... who knows.
    status = ret9[20]
    if status == 1:
        # Normal frame: convert the raw image data to a 16-bit image,
        # then to an unsigned numpy uint16 array
        img = Image.fromstring("I", (208, 156), ret9, "raw", "I;16")
        imarrF = numpy.asarray(img).astype('uint16')
    if status == 3 and imarrF is not None:
        # Calibration (shutter closed) frame, same conversion
        calimg = Image.fromstring("I", (208, 156), ret9, "raw", "I;16")
        calarrF = numpy.asarray(calimg).astype('uint16')
        # Subtract the image array from the calibration array, add an
        # offset, convert to an image and display with ImageMagick
        additionF = (calarrF - imarrF) + 800
        toimage(additionF).show()
    x = x + 1
This works for me since my Samsung S4 Mini doesn't work with the Seek app, and while a friend's Nexus 7 (2013) works OK, my Nexus 7 (2012) does not.
This is my first Python program, and there may be lots left to do to make it more useful, but it works on my Ubuntu 14.04 box.
The attached images were saved from ImageMagick; the second one after doubling the size and applying a "reduce noise" with a radius of 3.
...ken...
-
#691 Reply
Posted by
miguelvp
on 05 Nov, 2014 09:25
-
So I've been experimenting with ignoring the calibration frames and forcing a visual frame as the calibration one.
Before, I was adjusting the calibrated visual frame with the calibrated shutter frame before applying it.
What's interesting is that if I point the camera at a different heat source than the reference one, only the reference-pattern pixels stay the same; all the other pixels are different.
This means that we don't have dead pixels, just very unresponsive ones, which can be adjusted and made use of to some degree.
No pictures; I just did a lot of debugging and looking at the arrays with conditional breakpoints.
-
-
Dead pixels should produce data to a degree. In a bolometer, there should be *something* there, since it's measuring resistance.
Perhaps you could create a map of all dead pixels and multiply or add it to each frame, basically to bump up their poor sensitivity. But you'll have to differentiate between certain pixels: the ones below a threshold are to be added to each frame, the ones above the threshold are to be subtracted (the over-responsive pixels).
-
#693 Reply
Posted by
eneuro
on 05 Nov, 2014 12:14
-
What's interesting is that if I point the camera at a different heat source than the reference one, only the reference-pattern pixels stay the same; all the other pixels are different.
How much do those dead-looking, close-to-black sensor pixels change?
Maybe they return invalid thermal values anyway, so they have to be dismissed and could be useless, unless we know there is valid thermal data that just needs to be rescaled somehow.
Just finished an approximation of my favourite thermal Iron LUT, and now it is fully parametric, so it can output LUTs at any size: 10-bit 1024, 14-bit 16384, even 16-bit 65536.
I am not including the source data for this parametric 1024 Iron LUT, since it is not an exact approximation of the original Iron 256 shown before; I want to avoid any confusion.
It is very similar, with even smoother RGB channel colors than the original.
I used a few
http://en.wikipedia.org/wiki/Sigmoid_function and trigonometric functions, manually fitted together, to get a good parametric approximation of this thermal Iron LUT.
So using functions like the sigmoid from the neural network "battle fields" was quite a good idea, since I did not want to mess with FFT transformations there.
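The construction eneuro describes, smooth per-channel ramps composed from shifted sigmoids and evaluated at an arbitrary LUT size, could be sketched like this. The centres and widths below are illustrative guesses, not eneuro's fitted parameters:

```python
import math

def sigmoid(x, center, width):
    """Logistic sigmoid, shifted to `center` and scaled by `width`."""
    return 1.0 / (1.0 + math.exp(-(x - center) / width))

def iron_lut(size=1024):
    """Hypothetical parametric 'Iron'-style palette: red rises early,
    green mid-range, blue has an early bump and a late rise.
    All parameters are made up for illustration."""
    lut = []
    for i in range(size):
        t = i / (size - 1.0)                 # normalised position 0..1
        r = sigmoid(t, 0.30, 0.10)
        g = sigmoid(t, 0.55, 0.12)
        b = (sigmoid(t, 0.15, 0.06) * (1.0 - sigmoid(t, 0.40, 0.08))
             + sigmoid(t, 0.85, 0.05))
        # clamp to [0, 1] before quantising to 8-bit channels
        lut.append(tuple(int(round(min(1.0, c) * 255)) for c in (r, g, b)))
    return lut

lut = iron_lut(1024)   # the same functions work for 16384 or 65536 entries
```

The point of the parametric form is exactly what eneuro notes: once the channels are continuous functions of t, any bit depth falls out by just changing `size`.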
Now it is time to try to guess what is in those last 207-208 columns of the raw sensor data. Has anyone tried to find out?
If all scientific methods fail, then we can still try this to crack it
-
#694 Reply
Posted by
frenky
on 05 Nov, 2014 13:58
-
In the router_image1c.fit and router_image2i.fit files, the last value (the 208th) is always 32768 (1000 0000 0000 0000).
The 207th values are just a little apart in these two files:
Line no | cal | img | difference
1 38301 38298 3
2 37956 37957 -1
3 37668 37670 -2
4 37995 37995 0
5 37661 37667 -6
6 37959 37961 -2
7 38015 38012 3
8 37968 37974 -6
9 37941 37943 -2
10 37962 37963 -1
11 37648 37647 1
Not sure that's important, but it could be.
If it were a checksum of some sort, the numbers wouldn't be so similar in these two files...
Added later:
The range of values in the 207th column (in both files) is exactly 37501 to 38359. (If you subtract 32768 you get the range 4733-5591.)
That is a span of 858.
That could be the number of colors in the image...
-
-
I don't see how they would need to hide anything useful at the end of the data, other than a CRC of some sort.
Also, I don't know how well scaling bad pixels will work, as they will vary across sensors, and it requires a mapping function for every pixel flagged as bad. The best case is to average a pixel's value with its neighbors when it is found to be very different from them; a difference of about 10 should be sufficient. Scan each pixel, determine if it's bad, replace it with an average. This counts all pixels, even the patent pixels.
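That scan-and-replace scheme could be sketched as below. The threshold of 10 comes from the post; I use the median of the 3x3 neighbourhood rather than a plain mean, a deliberate deviation from the text, so the bad pixel itself cannot skew its neighbours' test:

```python
import numpy as np

def repair_bad_pixels(frame, threshold=10):
    """Replace pixels deviating from their 3x3 neighbourhood by more than
    `threshold` with the neighbourhood median (a robust 'average')."""
    f = frame.astype(np.float64)
    out = f.copy()
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            # clip the 3x3 window to the frame edges
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            neigh = f[y0:y1, x0:x1].ravel().tolist()
            neigh.remove(f[y, x])          # drop one copy of the centre value
            med = float(np.median(neigh))
            if abs(f[y, x] - med) > threshold:
                out[y, x] = med            # bad pixel: substitute the median
    return out

frame = np.full((5, 5), 1000.0)
frame[2, 2] = 1500.0                       # one stuck/over-responsive pixel
fixed = repair_bad_pixels(frame)
```

Decisions are made against the original frame, not the partially repaired one, so results do not depend on scan order.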
I'm curious what Seek's solution to the gradient issue is. They claim to be making a solution in software, with no direction on how to characterize individual lens issues. I surely hope they don't create a generic gradient map and add it to the calibration frame; that's like putting a bandage on your arm when your leg is bleeding. So they want an automatic calibration event for the majority they feel are wearing the dunce cap... that's fine, but they need to ensure the calibration event has the user placing a flat object against the lens. The problem is that it's a thermal issue for many, and a one-time calibration will only be good for a short period before the gradient creeps back into the image. It won't be as bad, since some subtraction is being done, but the gradient will be present. I've noticed posts on the Facebook page and YouTube videos complaining about it. Now it's a known issue, and it does affect image quality. Several palettes can't be used because the gradient kills the contrast.
-
#696 Reply
Posted by
bktemp
on 05 Nov, 2014 14:50
-
In the router_image1c.fit and router_image2i.fit files, the last value (the 208th) is always 32768 (1000 0000 0000 0000).
The 207th values are just a little apart in these two files:
I think at some point during conversion of the data, 32768 has been added. In the images recorded by marshallh (
https://www.eevblog.com/forum/testgear/yet-another-cheap-thermal-imager-incoming/msg533801/#msg533801) the 208th value is 0 and the 207th is around 5200. There is a clear downward tendency across all frames, from 5250 in the first to 5211 in the last. There is no difference between the reference frame and any other frame. Maybe it is the sensor temperature.
The first few 207th values of the first line:
5250,5251,5247,5247,5249,5247,5246,5241,5246,5244,5244,5243,5240,5242,5242,5241,5240,5238,5240,5239,5238,5237,5237,5240,5236,5233...
-
#697 Reply
Posted by
frenky
on 05 Nov, 2014 15:00
-
So these numbers are from sequential frames?
It's interesting that the values go down over time. But it makes no sense to me to have a different sensor temp in each line of the file...
And the range of values in the 207th column is 4733 to 5591 (a span of 858).
-
-
I know this might be a shot in the dark, but perhaps the patent pixels are working thermistors... which means maybe they are reading the temperature of the sensor. Maybe each line is an average of the readings from those pixels? Declining values lead me to think that the resistance is falling (heating) and the values are mapped accordingly.
-
#699 Reply
Posted by
bktemp
on 05 Nov, 2014 15:15
-
Yes, the data are from sequential frames.
I compared them with the average of the other pixels in each line, but it does not match.
The min and max values of the 207th column are clearly decreasing over time:
4607, 5364
4612, 5365
4609, 5366
4612, 5362
4607, 5362
4604, 5358
4605, 5357
4607, 5359
4606, 5358
4605, 5359
4602, 5356
4605, 5359
4601, 5354
4602, 5354
4606, 5351
4602, 5353
4600, 5352
4601, 5350
4600, 5352
4598, 5350
4599, 5350
4598, 5351
4598, 5350
4599, 5345
Maybe they have added a black pixel at the end of each line.
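For anyone who wants to reproduce these per-frame numbers from their own captures, a rough sketch of extracting statistics from the suspected telemetry column. The 208x156 frame size and the 207th column are as discussed above; reading the actual raw files is left out, so the example runs on synthetic frames:

```python
import numpy as np

WIDTH, HEIGHT = 208, 156        # raw Seek frame size discussed in the thread
TELEMETRY_COL = 206             # the 207th column, zero-based

def telemetry_stats(frames):
    """Return the (min, max) of the suspected telemetry column for each
    raw frame, given as flat arrays of WIDTH*HEIGHT uint16 values."""
    stats = []
    for raw in frames:
        frame = np.asarray(raw, dtype=np.uint16).reshape(HEIGHT, WIDTH)
        col = frame[:, TELEMETRY_COL].astype(np.int32)
        stats.append((int(col.min()), int(col.max())))
    return stats

# Synthetic example: a telemetry column drifting slowly downward,
# mimicking the declining values bktemp reports above
frames = [np.full(WIDTH * HEIGHT, 5000, dtype=np.uint16) for _ in range(3)]
for i, f in enumerate(frames):
    f.reshape(HEIGHT, WIDTH)[:, TELEMETRY_COL] = 5250 - i
stats = telemetry_stats(frames)
```

Run against real captures, a steady downward drift in both min and max, with no jump on calibration frames, would support the sensor-temperature theory.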