Cartesian coordinates warped as a Smith Chart
RoGeorge:
The goal would be to have a low-resolution webcam pointing at a piece of paper (640px/30Hz, or a 1024px webcam at most), then to use OpenCV to show the normal image and the morphed one side by side. Something like this at minute 6:15 (that video was made by processing the input offline, frame by frame, not by live morphing).
I've tried morphing a static picture (by running the code from these Python notebooks: https://github.com/stephencwelch/Imaginary-Numbers-Are-Real). Given all the video games and the GHz in a typical desktop/laptop, I was expecting that to be very fast, but the Python code was taking about 10 seconds to warp a single frame. I've only modified the mapping function from \[
w = z^2 + 1
\] to \[
w = 4 \cdot \frac{1-z}{1+z}
\] (as in the attached notebook). Why does it take so long to calculate that on a 4 GHz desktop?
(the pic of Max the dog is from Gyro: https://www.eevblog.com/forum/chat/post-a-picture-of-your-dog/msg4821569/#msg4821569)
I've used OpenCV once, for trivial processing only, to change the contrast and detect some blinking pixels. Since I'm lousy at programming and not experienced with OpenCV, nor with video processing, I thought I might ask first before proceeding.
Is live morphing the video from a webcam feasible?
I only want to fool around by drawing pencil lines on a piece of paper and seeing them live as circular arcs on the screen, nothing more.
Nominal Animal:
--- Quote from: RoGeorge on May 09, 2023, 11:48:28 am ---Is live morphing the video from a webcam feasible?
--- End quote ---
Just for fun, I checked how long it takes on an Intel Core i5-7200U to morph a 1920x1080 full-color image using an arbitrary mapping, and the answer was about 3 milliseconds. So, real-time webcam morphing with a static mapping is definitely feasible.
I don't know how feasible it is with OpenCV, though.
The age-old mapping technique is to precalculate an array, one element per output pixel, identifying the source pixel (typically as an offset relative to the source image origin). In C,
uint32_t *map; // Dynamically allocated, map[HEIGHT][WIDTH]
uint32_t *src; // Dynamically allocated, src[HEIGHT][WIDTH]
uint32_t *dst; // Dynamically allocated, dst[HEIGHT][WIDTH]
where map is precalculated for each transform (including scaling) only once, i.e.
map[dx + dy*WIDTH] = sx + sy*WIDTH;
where (dx,dy) is the transformed image point, (sx,sy) is the corresponding source image point, with dx=0..WIDTH-1, dy=0..HEIGHT-1, sx=0..WIDTH-1, and sy=0..HEIGHT-1.
You can reserve an extra pixel in the source and destination arrays, so that index WIDTH*HEIGHT can be used for pixels that cannot be mapped or map to outside the source image.
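Just as a rough sketch of how the map itself could be filled: the mapping below loosely follows the w = 4·(1-z)/(1+z) idea from the opening post, but the function names, the WIDTH/HEIGHT values, and the coordinate scaling are placeholder assumptions for illustration, not something to use as-is. Each destination pixel is converted to a complex number, passed through the mapping, and the result converted back to a source pixel offset (or the extra "unmapped" index when it falls outside the source image):

#include <complex.h>
#include <math.h>
#include <stdint.h>

#define WIDTH    640
#define HEIGHT   480
#define UNMAPPED ((uint32_t)(WIDTH * HEIGHT))  /* index of the extra "background" pixel */

/* Placeholder mapping; in practice this should take a destination point
   back to the corresponding source point (the inverse of the on-screen warp). */
static double complex transform(double complex z)
{
    return 4.0 * (1.0 - z) / (1.0 + z);
}

/* Fill map[] so that map[dx + dy*WIDTH] holds the source offset, or UNMAPPED. */
static void build_map(uint32_t *map)
{
    for (int dy = 0; dy < HEIGHT; dy++)
        for (int dx = 0; dx < WIDTH; dx++) {
            /* Scale the destination pixel to roughly -2..+2 on both axes;
               the scaling is an arbitrary choice for this sketch. */
            double complex z = (4.0 * dx / WIDTH - 2.0)
                             + (4.0 * dy / HEIGHT - 2.0) * I;
            double complex w = transform(z);

            /* Back from the complex plane to source pixel coordinates. */
            long sx = lround((creal(w) + 2.0) * 0.25 * WIDTH);
            long sy = lround((cimag(w) + 2.0) * 0.25 * HEIGHT);

            if (isfinite(creal(w)) && isfinite(cimag(w)) &&
                sx >= 0 && sx < WIDTH && sy >= 0 && sy < HEIGHT)
                map[dx + dy*WIDTH] = (uint32_t)(sx + sy*WIDTH);
            else
                map[dx + dy*WIDTH] = UNMAPPED;
        }
}

In practice you would pick the scaling so that the interesting region of the complex plane fills the image.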
The actual mapping loop you do for each image frame, assuming 32-bit pixels, is then
size_t i = WIDTH * HEIGHT;
while (i-->0)
dst[i] = src[map[i]];
and, as I already said, on an old Intel Core i5-7200U this takes about 3 milliseconds with WIDTH=1920, HEIGHT=1080.
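If you want to check the figure on your own machine, a minimal standalone timing test could look something like the sketch below. It uses a dummy identity map and an arbitrary pixel pattern just to exercise the copy loop, and times it with clock_gettime; the file name and constants are of course arbitrary, and you'd compile with something like gcc -O2.

#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WIDTH  1920
#define HEIGHT 1080

int main(void)
{
    const size_t n = (size_t)WIDTH * HEIGHT;

    /* One extra element so a sentinel index (n) is always valid to read. */
    uint32_t *map = malloc((n + 1) * sizeof *map);
    uint32_t *src = malloc((n + 1) * sizeof *src);
    uint32_t *dst = malloc((n + 1) * sizeof *dst);
    if (!map || !src || !dst)
        return EXIT_FAILURE;

    /* Dummy identity map and an arbitrary pixel pattern, just for timing. */
    for (size_t k = 0; k < n; k++) {
        map[k] = (uint32_t)k;
        src[k] = (uint32_t)(k * 2654435761u);
    }
    src[n] = 0;  /* "background" color for unmapped pixels */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    size_t i = n;
    while (i-- > 0)
        dst[i] = src[map[i]];

    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("Remapped %dx%d pixels in %.3f ms (dst[0] = %u)\n",
           WIDTH, HEIGHT, ms, dst[0]);

    free(map);
    free(src);
    free(dst);
    return EXIT_SUCCESS;
}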
DiTBho:
--- Quote from: RoGeorge on May 09, 2023, 11:48:28 am ---the Python code was taking about 10 seconds to warp a single frame...
...Why does it take so long to calculate that on a 4GHz desktop?
--- End quote ---
For the same reason that a dependency resolver algorithm written in Python takes ~2 minutes on an IDT MIPS @ 400 MHz, whereas the C version of the same algorithm takes 40 seconds.
Python is good for prototyping, but it's a slow elephant :o :o :o
CatalinaWOW:
There are at least three elements to the behavior you are seeing.
The first is the language you use to implement the solution. Python is an interpreted language, meaning that each time through all the code loops, the computer translates the instructions into the appropriate machine code and then does the actual work. Compiled languages, including C, do the translation to machine code once and can then process the data without redoing all of that translation. In both cases, different implementations of the language may do a more or less efficient job on each phase of the operation. Which implementation is faster can actually depend on the type of algorithm being implemented.
The second is how cleverly the algorithm is arranged. Nominal gave an example of this, recognizing that since the transformation doesn't change from frame to frame, it can be precomputed. This stage requires either a lot of research (or experience) or tremendous creativity.
The third is a kind of hybrid of the first two. Modern personal computers have many computing resources: multiple cores in the processor, graphics processors, and multiple speeds and quantities of memory. Partitioning the problem and allocating its execution to these resources is complex and usually doesn't happen with the default settings of the various coding systems. In your test case you were probably using a single core of the processor, which was likely also hosting everything else happening in the machine, and probably not using the graphics processor at all. Some compilers can do part or all of this, but I can provide no advice here. Look into multi-threaded operations; see the sketch below.
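As a very rough sketch of that last point, assuming the map/src/dst layout from the earlier post, the per-frame copy loop can be spread across cores with OpenMP. The function name and signature here are made up for illustration, and the gain from extra cores may be limited by memory bandwidth rather than core count:

#include <stddef.h>
#include <stdint.h>

/* Same layout as in the earlier post: map, src, and dst each hold
   WIDTH*HEIGHT 32-bit pixels (plus the optional sentinel element). */
void remap_frame(uint32_t *dst, const uint32_t *src,
                 const uint32_t *map, size_t count)
{
    /* Every iteration is independent, so the loop parallelizes trivially.
       Build with OpenMP enabled, e.g. gcc -O2 -fopenmp. */
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < count; i++)
        dst[i] = src[map[i]];
}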
DiTBho:
Labview, Matlab, and Octave are at about the same level, and all of them are good for fast prototyping.
Labview and Matlab have a sort of external compiler suite for DSP-specific stuff.