... I started this thread at the request of @ultrapurple, who has not shown up.
Yes, sorry I haven't been around - I have been fully committed to family activities for the last several days.
I am finding this discussion fascinating and can see how the sparse sampling techniques could make a major impact on thermal (or other intrinsically low-resolution) imaging. One of my first thoughts was whether the regular array of pixels in a thermal camera could be treated as a special case of randomness and built upon from there. Similarly, a 'randomly' coded aperture might be achievable by selecting a random subset of pixels from the array. (An interesting side effect of this is that dead pixels essentially cease to be a problem: whereas currently a sensor with 1% dead pixels is pretty much a reject, I can imagine sensors with 50% dead pixels being perfectly usable. Manufacturing yield goes way up; prices come way down.)
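To make the dead-pixel point concrete, here is a minimal sketch of the idea in Python, not anything from the thread: a 1D "scene" that is sparse in the DCT basis (an assumption standing in for whatever basis real scenes are sparse in), a sensor where a random half of the pixels are simply discarded as if dead, and a recovery step. I have used Orthogonal Matching Pursuit as a stand-in reconstruction algorithm; the actual techniques discussed here may differ.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis matrix; columns are cosine basis vectors."""
    k = np.arange(n)[:, None]      # frequency index
    t = np.arange(n)[None, :]      # pixel index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * t + 1) / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C.T                     # shape (n, n): signal = basis @ coeffs

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse vector."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5               # pixels, live pixels, scene sparsity

# A k-sparse "scene" in the DCT domain.
truth = np.zeros(n)
truth[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
basis = dct_basis(n)
signal = basis @ truth             # what a fully working sensor would record

# Keep only a random half of the pixels -- the rest may as well be dead.
alive = rng.choice(n, size=m, replace=False)
measurements = signal[alive]

# Reconstruct the scene from the surviving pixels alone.
recovered = omp(basis[alive, :], measurements, k)
err = np.linalg.norm(recovered - truth) / np.linalg.norm(truth)
print(f"relative recovery error: {err:.2e}")
```

The point of the sketch is exactly the one above: which 50% of the pixels died does not matter, only that the surviving ones amount to a sufficiently random sampling of the scene.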
The other exciting thing for me is that it appears that the basic principles have been shown to work, so it's 'just' a matter of turning the maths into applied software and cobbling together suitable hardware to run it on. So what if it takes a rack full of today's CPUs to run in real time? It won't be long before someone obeys Moore's Law and shrinks it into a single package. (At the start of what I laughably refer to as my 'career' I remember being really impressed with some real-time image processing avionics running on custom bit-slice processors encased in several boxes each the size and weight of two dozen bottles of wine. Nowadays the same image processing task would be considered trivial on a mid-range mobile phone - and, to my chagrin, I recently saw key parts of the system on eBay for peanuts). But I digress.
The maths of all this is well beyond me but I am keeping up with the principles well enough to see at least some of the potential. I will continue to watch and learn with much interest.