I had a bit of a play around with this before, using ImageMagick to find the difference between the sample and reference images. This was primarily targeting missing components, not bad soldering. I had good results transforming to HSV colour space and examining just the saturation channel, though a better approach would be to use some function of all three channels. The saturation difference is colourised red and 'eroded' to reduce noise, then blended back into a darkened version of the original.
I didn't get round to building a proper light box and alignment jig, but results were promising just using a desk lamp and the camera of a smartphone resting on a stack of books.
I used the following shell script, if anyone wants to experiment:
#!/bin/sh
REFERENCE="20160118_111042.jpg"
SAMPLE="20160118_111108.jpg"
# Using BMP for these doesn't seem to give the same result, colour depth/gamma issues?
# Split reference and sample into H, S, V channels (%d expands to 0, 1, 2)
convert "$REFERENCE" -colorspace HSV -separate /tmp/reference%d.pgm
convert "$SAMPLE" -colorspace HSV -separate /tmp/sample%d.pgm
cd /tmp
# Absolute difference of the two saturation channels (channel 1)
composite reference1.pgm sample1.pgm -compose difference out.pgm
# Erode to knock out single-pixel noise
convert out.pgm -morphology Erode Square out2.pgm
# Colourise red by zeroing the green and blue channels, then boost
convert out2.pgm -channel G -evaluate set 0 -channel B -evaluate set 0 out3.bmp
convert out3.bmp -brightness-contrast 40x50 out3.bmp
# Desaturated, darkened copy of the reference to use as a backdrop
convert "$REFERENCE" -modulate 100,30 -brightness-contrast -30 reference-dark.bmp
cd -
# Overlay the highlighted differences on the darkened reference
composite /tmp/reference-dark.bmp /tmp/out3.bmp -compose plus out4.png
PGM was used for the intermediate files for speed, with the eventual intent that the process could be applied to a live video stream: put the board under the camera, and as soon as it is registered correctly, the result is displayed on the monitor. Processing time was only a second or so for an image of a few megapixels; no doubt it could be accelerated further if desired.
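For the live-stream idea, the same commands could be wrapped in a function and called once per captured frame. A minimal sketch of that, assuming ImageMagick is installed; the function name, file names, and the commented-out ffmpeg/V4L2 capture line are my assumptions, not part of the original script:

```shell
#!/bin/sh
# Sketch: wrap the saturation-difference pipeline in a reusable function
# so it can be run repeatedly against grabbed frames.
inspect_board() {
    ref="$1"; smp="$2"; out="$3"
    dir=$(mktemp -d) || return 1
    convert "$ref" -colorspace HSV -separate "$dir/reference%d.pgm"
    convert "$smp" -colorspace HSV -separate "$dir/sample%d.pgm"
    # Difference of the saturation channels (channel 1)
    composite "$dir/reference1.pgm" "$dir/sample1.pgm" -compose difference "$dir/diff.pgm"
    # Erode noise, colourise red, boost contrast (same steps as above)
    convert "$dir/diff.pgm" -morphology Erode Square "$dir/eroded.pgm"
    convert "$dir/eroded.pgm" -channel G -evaluate set 0 -channel B -evaluate set 0 "$dir/red.bmp"
    convert "$dir/red.bmp" -brightness-contrast 40x50 "$dir/red.bmp"
    convert "$ref" -modulate 100,30 -brightness-contrast -30 "$dir/dark.bmp"
    composite "$dir/dark.bmp" "$dir/red.bmp" -compose plus "$out"
    rm -rf "$dir"
}

# Hypothetical live loop (untested): grab a frame from a webcam, compare,
# and leave the result where a viewer can display it.
# while true; do
#     ffmpeg -y -f v4l2 -i /dev/video0 -frames:v 1 frame.jpg
#     inspect_board reference.jpg frame.jpg result.png
# done
```

Registration/alignment of the board before comparing is still the missing piece; this only automates the per-frame comparison.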