Author Topic: Decapping and Chip-Documentation - Howto  (Read 39063 times)


Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #125 on: July 26, 2021, 09:04:57 pm »
Yes, it was the same optics, the same subject distance and magnification. I just tilted the die a bit and adjusted the light.
It's annoying not to know what is happening, but I'm happy to have a second way to get nice pictures.
Well, assuming that focus wasn't screwed up on the flat version... ;)
I still think that shadows are responsible for more perceived sharpness.
I suspect that flare is responsible for the overall gray appearance and low contrast.

What's the size of this die and how far was it from diffraction limits? That grain on metal traces has single-digit micron size; you may start to see diffraction effects at that level with some lenses at some magnification.

My old pictures of fake opamps are a perfect example of the contrast problem. They were shot using exactly your method but with even lower end gear.
I just looked at them again, they really are embarrassingly ugly |O :-DD

...better than some fake sharpness and contrast.  ;)
Wasn't it you who encouraged me to do more postprocessing just a few days ago? >:D
« Last Edit: July 26, 2021, 09:08:57 pm by magic »
 
The following users thanked this post: Noopy

Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #126 on: July 26, 2021, 09:15:40 pm »
Folks strive to keep the Effective Aperture, which is Lens Aperture*(1+Magnification), as low as possible to keep diffusion at bay, even at F10 diffusion starts to degrade things.
Diffraction ;)

That formula looks to me like it is supposed to ultimately determine the diameter of diffraction blur observed on the sensor.
Shouldn't those folks better be concerned with, ahem, Input Referred Effective Aperture, that is EA/mag?
« Last Edit: July 26, 2021, 09:17:11 pm by magic »
 

Online mawyatt

  • Super Contributor
  • ***
  • Posts: 3244
  • Country: us
Re: Decapping and Chip-Documentation - Howto
« Reply #127 on: July 27, 2021, 01:15:32 am »
Nikon DSLRs report the EA with their lenses; most others do not. Generally this isn't a big deal, since the magnification of a portrait shot at 3 meters, or a landscape, is something less than 0.1, so the lens aperture and EA are about the same. The Canon MPE, for example, reports the lens aperture but is used for close-up macro work. Had a long discussion about this some time ago on DPR where they were using the MPE at 3~5X with a lens setting of F11 to get more DoF. The EA was actually 44~66, well into image-quality-robbing diffraction territory; not sure they ever got this tho :o

The highly regarded Mitutoyo infinity 5X and 10X objectives have an EA of ~18, so I've always considered this the transition region where diffraction begins to affect the image; others work around F9~13, and the image purists well below F9.

I confess I do have a purist's type of lens, the Printing Nikkor PN105 F2.8A. This was a reproduction lens from movie film days, used to produce replicas of movie theater film, so it's deadly sharp from corner to corner and highly optimized for 1X; the EA is 5.6. Was it expensive when I got it 5 years ago? You bet! However, it's worth twice that today, so exactly how expensive is that!! Same goes for the red Porsche 911, 25 years old and worth twice what I paid for it new in 1996, and it's still appreciating!! Expensive yes, but also a good investment for a car ;)

Anyway, EA is one of the many parameters that affect IQ.

Another benefit of image stacking I forgot to mention: many lenses get soft around the edges, and some are sharp but at a different focus distance than the center. With these types of lenses, stacking can help with the edge sharpness and make the final rendering sharp across the entire frame, but at the expense of taking more time to capture the stacks, so one trades time for sharpness, so to speak :)

Best,
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 
The following users thanked this post: Noopy

Offline NoopyTopic starter

  • Super Contributor
  • ***
  • Posts: 1730
  • Country: de
    • Richis-Lab
Re: Decapping and Chip-Documentation - Howto
« Reply #128 on: July 27, 2021, 04:31:15 am »
Well, assuming that focus wasn't screwed up on the flat version... ;)

It wasn't!  8) I have seen that quality difference in many pictures. Tilted was always better.


I suspect that flare is responsible for the overall gray appearance and low contrast.

Yes, perhaps my "against the lens light" deteriorates the quality.
I have bought a teleprompter mirror and will do some experiments with different light sources.


What's the size of this die and how far was it from diffraction limits? That grain on metal traces has single digit microns size, you may start to see diffraction effects at that level with some lenses at some magnification.

The die was 1.2mm x 1.2mm. The magnification should be around 19x. Diffraction limit... ...yes, seems to be the end of the line...  :-// ::)


...better than some fake sharpness and contrast.  ;)
Wasn't it you who encouraged me to do more postprocessing just a few days ago? >:D

Often that's a good thing but getting more contrast out of the die is always better.  ;)



Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #129 on: July 27, 2021, 08:53:39 am »
The relevance of EA at high magnification is that it tells you how much shutter speed you need for given light, which could be of interest to film photographers I suppose :P

If you want to know how it affects resolution, you go to the oracle (or do the math by hand ::)). You tell the oracle your camera model, or crop factor and megapixel count. You set "lens aperture" to the calculated EA number. The oracle shows you how aperture affects your pixels. You remember that only half of your pixels are green and only a quarter is red and that there is an anti-alias filter too. (The oracle helpfully reminds you not to worry if blur is about 2px because at that level you are screwed by your sensor anyway.) You also consider how much you intend to scale the image down for publication.

Without crop factor and without megapixel count of the final, published image those EA numbers are meaningless. It is not surprising to find an order of magnitude spread between different people, gear and purposes.

EA is very convenient in ordinary photography, because it is equal to the F-number at infinity focus, 10% worse at 0.1 magnification and 25% worse at 0.25 magnification. You do the blur calculation once for any combination of camera body and F-number and you are set for life, as long as you stay away from macro.

For the same reason, it's an ass-backwards way to deal with high-mag. Since the dawn of ages and long before digital imaging, microscopists talked about object-side aperture. You can think about it in two ways. One is to calculate "input referred" blur, which is the sensor side blur divided by magnification. It tells you how the blur compares to magnified features on the object. The object-side aperture formula is: F·(1+M)/M. Or you can imagine that your sensor is the object and your object is the sensor (easy to imagine if the lens is reverse-mounted) and trace the rays backwards and then calculate EA that way, assuming magnification 1/M. The object-side aperture formula is: F·(1+1/M). Observe that the two formulas really are the same and that magnification is almost irrelevant as long as it is far from unity, just like in ordinary photography.

So you go to the oracle again. You tell the oracle your object-side aperture. The oracle shows you how many microns of blur you get on your object. If your object is 1200µ in size and the calculated blur is 1.2µ then you know it's 0.1% of your object. You don't care about the sensor at all, unless pixels are too coarse to resolve features that aren't blurred by the optics - then it's time to increase magnification or get a better camera.

Only at close to unity magnification are both the macroscopic and the microscopic approximations equally wrong, by a factor of 2x. Then you are screwed and have to do the math precisely; such is the sad life of bug shooters, and that's why they worry about EA.
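The formulas above are easy to sanity-check numerically. A rough Python sketch (function names are mine, not standard terminology):

```python
# Sensor-side effective aperture: EA = F * (1 + M).
def effective_aperture(f_number, magnification):
    return f_number * (1 + magnification)

# Object-side ("input referred") aperture, first form: F * (1 + M) / M.
def object_side_aperture(f_number, magnification):
    return f_number * (1 + magnification) / magnification

# Second form, from tracing the rays backwards at magnification 1/M:
# F * (1 + 1/M). Algebraically identical to the first.
def object_side_aperture_reversed(f_number, magnification):
    return f_number * (1 + 1 / magnification)

# The MPE example from earlier in the thread: F11 at 3x..5x gives EA 44..66.
print(effective_aperture(11, 3), effective_aperture(11, 5))  # 44 66

# The two object-side forms agree at any magnification:
for m in (0.1, 1, 5, 19):
    assert abs(object_side_aperture(4, m)
               - object_side_aperture_reversed(4, m)) < 1e-9
```

Note how the object-side number barely moves between M=5 and M=19, just as the macroscopic EA barely moves below M=0.25.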

Often that's a good thing but getting more contrast out of the die is always better.  ;)
I sometimes use the ISO10 trick: I take 10 identical shots at ISO100 and average them with GraphicsMagick. ISO noise is greatly reduced and more contrast enhancement can be applied. Lens flare remains ::)

edit
When stacking lenses, diffraction surely occurs in both lenses so in theory you should do both calculations and combine the results. It could be interesting to see what comes out and whether the telephoto lens turns out to be insignificant in practice or whether (maybe) some of the combinations that people recommend aren't really as good as they think because of the aperture of the tele lens. Again, all of that is messy calculations involving pixel pitch and sensor size. Perhaps somebody has already figured it out.

It is generally recommended to use the tele wide open. In my experience with imaging the Chinese opamps (reversed webcam lens stacked onto a point and shoot) stopping down the camera's lens made no improvement and caused bad chromatic aberration :-//

edit edit
Actually, it's trivial. The tele lens is operating normally and focused at infinity. So as long as it isn't some diffraction-bottlenecked megazoom piece of junk, it should be good.
« Last Edit: July 27, 2021, 01:33:56 pm by magic »
 
The following users thanked this post: Noopy

Online mawyatt

  • Super Contributor
  • ***
  • Posts: 3244
  • Country: us
Re: Decapping and Chip-Documentation - Howto
« Reply #130 on: July 27, 2021, 01:35:41 pm »
The relevance of EA at high magnification is that it tells you how much shutter speed you need for given light, which could be of interest to film photographers I suppose :P


It is also a good indication of diffraction effects, and why many use it, myself included.
Quote
If you want to know how it affects resolution, you go to the oracle (or do the math by hand ::)). You tell the oracle your camera model, or crop factor and megapixel count. You set "lens aperture" to the calculated EA number. The oracle shows you how it affects your pixels. You remember that only half of your pixels are green and only a quarter is red and that there is an anti-alias filter too. (The oracle helpfully reminds you not to worry if blur is about 2px because at that level you are screwed by your sensor anyway.) You also consider how much you intend to scale the image down for publication.

Agree. I was enlightened decades ago by Cambridge; it's a good resource, and it confirms my experience with various lenses, diffraction and viewing details.

Quote
I sometimes use the ISO10 trick: I take 10 identical shots at ISO100 and average them with GraphicsMagick. ISO noise is greatly reduced and more contrast enhancement can be applied. Lens flare remains ::)

ISO is a carryover from film days, and in digital cameras it generally represents the amplifier gain preceding the ADC. The "ISO10 trick" isn't a "trick" at all; it's simple elementary signal processing 101, used everywhere signals are present and processed. It teaches that the signal-to-noise ratio (SNR) improves as the square root of the number of averages for uncorrelated signals (noise), while correlated signals (the image) are preserved. Since the flare is correlated from image to image just like the desired image details, the flare isn't attenuated, but the uncorrelated noise is. The net result is an improvement in SNR by the square root of N, where N is the number of images, under the conditions stated about correlated and uncorrelated "signals".

So this "trick" does not produce an ISO 10 from ten ISO 100 image averages; because of the square-root relationship it more resembles an ISO 32 effect.

Edit: However, this averaging does improve the final rendering and also contributes to why focus stacking shows improved image quality, since stacking is similar in some respects to signal averaging. The main difference is that stacking tends to ignore areas of image blurriness after image alignment, whereas simple averaging does not align or ignore image areas.
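The square-root relationship is easy to demonstrate with synthetic frames. A minimal NumPy sketch (the flat test "image" and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full((256, 256), 100.0)  # correlated content: identical in every frame
sigma = 10.0                         # uncorrelated content: fresh noise per frame

# Ten "identical shots", each with independent noise, then a plain average.
frames = [signal + rng.normal(0.0, sigma, signal.shape) for _ in range(10)]
stacked = np.mean(frames, axis=0)

single_noise = np.std(frames[0] - signal)
stacked_noise = np.std(stacked - signal)

# Residual noise should drop by about sqrt(10) ~ 3.16x.
print(round(single_noise / stacked_noise, 1))
```

A correlated artifact (flare, fixed-pattern residue) added identically to every frame would survive the average untouched, exactly as described above.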

Anyway, hope this helps.

Best,
« Last Edit: July 27, 2021, 01:47:16 pm by mawyatt »
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

Online mawyatt

  • Super Contributor
  • ***
  • Posts: 3244
  • Country: us
Re: Decapping and Chip-Documentation - Howto
« Reply #131 on: July 27, 2021, 02:30:18 pm »
Just for clarification: we began doing chip images before the turn of the century. At that time almost all images were taken through the microscope; however, we decided to venture "outside" the microscope and employ different techniques in the quest for better chip images. These images were created to support our chip design efforts and presentations, and almost all images were of chips we designed.

Over the years the techniques and equipment evolved to what we have now, so lots of experimenting with what works and what doesn't over those years. Our work on chip-imaging techniques and equipment hasn't progressed in the past 3 years, since the development of the piezo stages for sub-micron work and the earlier fully Automated Stack & Stitch system, and since we haven't designed a new chip since retiring. However, this may change soon with another SOTA chip development, if things go as planned, and we'll be back to doing some high-resolution Stack & Stitch imaging of a very large and complex new type of chip. Time will tell.

Best,
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #132 on: July 27, 2021, 09:00:58 pm »
The die was 1.2mm x 1.2mm. The magnification should be around 19x. Diffraction limit... ...yes, seems to be the end of the line...  :-// ::)
Yes, I think you may be seeing the limits of f/4 aperture.

I took my own f/2 image of the Chinese "659" type RC4558 and scaled it to the same size as yours here.
Die width is 1150px and about 700~750µ in the real world per my measurement. Scale is 1.6px/µ.
My sources give the Airy disk diameter at f/4 as 5µ, which is 8 pixels on our images.
I applied radius 4 blur to half of the image and then some sharpening to approximate what a camera or raw processor could do.
The result looks similar.

We may also take a close look at the f/2 original (~2px/µ scale).
Apparently, lines about 2µ apart are resolved, but those closer to 1.5µ apart (and less) are not.
Given that expected blur diameter is 2.5µ or 5px, in-camera processing has clearly worked very hard on this image ;)
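Those numbers follow from the usual Airy-disk formula, d = 2.44·λ·N. A quick Python check (a wavelength of 0.51 µm is assumed here, chosen to match the "5µ at f/4" figure):

```python
# Airy disk diameter (out to the first minimum) for an f-number N:
# d = 2.44 * wavelength * N.
def airy_diameter_um(f_number, wavelength_um=0.51):
    return 2.44 * wavelength_um * f_number

# f/4 at the 1.6 px/um scale of the comparison images:
d4 = airy_diameter_um(4)
print(round(d4, 1), "um =", round(d4 * 1.6), "px")  # 5.0 um = 8 px

# The f/2 original at ~2 px/um:
d2 = airy_diameter_um(2)
print(round(d2, 1), "um =", round(d2 * 2.0), "px")  # 2.5 um = 5 px
```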
« Last Edit: July 27, 2021, 09:34:32 pm by magic »
 

Offline NoopyTopic starter

  • Super Contributor
  • ***
  • Posts: 1730
  • Country: de
    • Richis-Lab
Re: Decapping and Chip-Documentation - Howto
« Reply #133 on: July 28, 2021, 04:53:13 am »
Thanks for all the input!  :-+

I definitely will try some stacked lenses.
I have some interesting pairs in stock:
- 100-400mm 4.5-5.6 AND 10-22mm 3.5-4.5 => huge magnification, but due to diffraction probably not ideal
- 100mm 2.8 AND 10-22mm 3.5-4.5 => better quality than today?
- 100-400mm 4.5-5.6 OR 100mm 2.8 AND 35mm 1.8 => I wasn't happy with the 35mm 1.8. Perhaps stacked it gets better.
- I have a nice 24-70mm 2.8; perhaps I will try that too. Probably as the second lens, because 70mm seems to be a little short for the first lens.
- Would it make any sense to stack the MP-E 65mm 1-5x? No, probably not...
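For a rough idea of what those pairs yield: with a reversed lens stacked in front of a tele, magnification is approximately the rear (tele) focal length divided by the reversed front lens focal length. A small sketch (wide ends of the reversed zooms assumed):

```python
# Approximate magnification of a reverse-stacked lens pair:
# M ~ f(rear tele) / f(reversed front lens).
def stacked_magnification(rear_focal_mm, reversed_front_focal_mm):
    return rear_focal_mm / reversed_front_focal_mm

print(stacked_magnification(400, 10))  # 100-400 + reversed 10-22: 40x, huge
print(stacked_magnification(100, 10))  # 100 + reversed 10-22: 10x
print(stacked_magnification(400, 24))  # 100-400 + reversed 24-70: ~16.7x
```

This is only the thin-lens approximation, but it matches the magnifications reported later in the thread for these combinations.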

Online mawyatt

  • Super Contributor
  • ***
  • Posts: 3244
  • Country: us
Re: Decapping and Chip-Documentation - Howto
« Reply #134 on: July 28, 2021, 02:17:02 pm »
magic,

We're traveling now and didn't have time to respond to your post regarding exposure time, sorry. Just noted that the post has been removed?? Maybe I still have it on my home Mac, since EEVblog hasn't been updated on that computer for the past couple of days.

Anyway, I have an idea why it was removed, since it contained some mis-information regarding exposure. Exposure time for optical sensors is integration time, and the sensor converts incident photons to free electrons based upon the conversion efficiency. This creates a system where the signal (image) increases linearly with exposure (with some assumptions) while the general noise (shot noise) increases with the square root of exposure, so the net effect is, as mentioned, an improvement in SNR (IQ) as the square root of N, where N is the exposure ratio. No different from the standard procedure for other systems, following elementary signal processing.

However, you can't just expose forever, since "other" undesirable sources contrive to corrupt your image, dark current and so on, so no free lunch!! This is why long-exposure sensors are generally cooled, to reduce additional long-exposure noise from sources like dark current that cause long-exposure image degradation.

Anyway, now you know this about the noise effects on optical sensors, and thanks for removing the post with the mis-information regarding exposure, as it might be confusing. Sorry for the late reply.

Hope this helps.

Best,
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

Offline NoopyTopic starter

  • Super Contributor
  • ***
  • Posts: 1730
  • Country: de
    • Richis-Lab
Re: Decapping and Chip-Documentation - Howto
« Reply #135 on: July 28, 2021, 02:30:54 pm »
However, you can't just expose forever, since "other" undesirable sources contrive to corrupt your image, dark current and so on, so no free lunch!! This is why long-exposure sensors are generally cooled, to reduce additional long-exposure noise from sources like dark current that cause long-exposure image degradation.

OT:
I have done some pretty interesting shots with my camera in the fridge: long exposures at 20°C vs. 4°C and such things.  :-+

Online mawyatt

  • Super Contributor
  • ***
  • Posts: 3244
  • Country: us
Re: Decapping and Chip-Documentation - Howto
« Reply #136 on: July 28, 2021, 02:51:46 pm »
Clever, hopefully you were able to keep the lens and sensor from fogging over👍

Best,
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #137 on: July 28, 2021, 03:02:14 pm »
I have deleted one of my posts which focused mainly on the topic of image- vs object-side aperture metrics but it ended up being poorly written and I figured probably no one cares anyway ::)

The ISO comment was there.

Exposure time for optical sensors is integration time, and the sensor converts incident photons to free electrons based upon the conversion efficiency. This creates a system where the signal (image) increases linearly with exposure (with some assumptions) while the general noise (shot noise) increases with the square root of exposure, so the net effect is, as mentioned, an improvement in SNR (IQ) as the square root of N, where N is the exposure ratio. No different from the standard procedure for other systems, following elementary signal processing.
No disagreement.

However, you can't just expose forever, since "other" undesirable sources contrive to corrupt your image, dark current and so on, so no free lunch!! This is why long-exposure sensors are generally cooled, to reduce additional long-exposure noise from sources like dark current that cause long-exposure image degradation.
Most importantly, they seem to have a physical cap on how much charge they can capture, and that may not be far above the exposure required for full-scale response at native ISO. As I found out by overexposing a CCD sensor by a stop and trying to read it out at half the recommended ADC gain :wtf:

thanks for removing the post with the mis-information regarding exposure
I still don't see what was supposed to be wrong with my comments regarding ISO. I claimed and I still maintain that averaging 10 exposures at ISO100 captures the same light as one exposure at ISO10 (which may not be supported by the sensor) and hence is expected to yield the same noise reduction as 1 exposure at ISO10, which is a ~3 times reduction indeed.

You seem to assume that 3 times reduction in noise wrt. ISO100 is achieved at ISO32; I find this idea highly dubious.
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6186
  • Country: ro
Re: Decapping and Chip-Documentation - Howto
« Reply #138 on: July 28, 2021, 03:29:27 pm »
OT:
I have done some pretty interesting shots with my camera in the fridge: long exposures at 20°C vs. 4°C and such things.  :-+

That's interesting you could do that.

Here's an idea   ;D :   Would it make sense to have a freezer with a transparent window/lens mount in it, and a photo camera kept inside the freezer at all times (at -30...-20°C), for less noise?

Instead of freezing the batteries, the camera can be powered by wires and operated over USB, externally. The whole window/lens mount can be reached from outside, but has a transparent separator, so it would be possible to change objectives without fogging the interior of the camera.

I know astronomers are using commercial CCD sensors from former consumer DSLRs, CCD sensors that are cooled with Peltier elements and read with custom electronics. Poking a hole in a freezer should be easier compared to that, and the camera can stay untouched. Old DSLR models, body only, are very cheap considering their sensors' quality; they should be just right for such a freezer setup.   ;D

Offline NoopyTopic starter

  • Super Contributor
  • ***
  • Posts: 1730
  • Country: de
    • Richis-Lab
Re: Decapping and Chip-Documentation - Howto
« Reply #139 on: July 28, 2021, 06:05:01 pm »
Clever, hopefully you were able to keep the lens and sensor from fogging over👍

I left the camera in the fridge while taking the pictures. Additionally I wrapped the camera in a towel so the cold parts don't see too much humidity while the door is open. With the towel it takes longer to cool the sensor down after a picture, but the fridge is cooling 24/7.  ;D
If you are interested you can download the pictures here:
http://www.richis-lab.de/temp/noise.zip
Names are self-explanatory.
Interesting how much the file size varies depending on the noise.
The real noise comes with 300s @ISO1600.


Here's an idea   ;D :   Would it make sense to have a freezer with a transparent window/lens monture in it, and a photo-camera kept inside the freezer at all times (at -30...-20*C), for less noise?

At temperatures below -10°C I would fear that some circuits stop working as designed (capacitors, mechanics, ...).
The problem (water) is the junction between cold and room temperature. As far as I know, the "professional sensor coolers" heat this junction somehow.


BUT:
The noise is not really a problem. My pictures are taken with 1/100s - 1s @ ISO100 - ISO1000.
Sometimes you see some colored pixels in dark pictures. That is not really noise. For example, I often have a lonely red pixel in the lower right corner. I have bonded with this pixel.  ;D It would be no big problem to subtract these pixels. My camera can take a dark picture and do the subtraction automatically, but that takes twice the time.  :-/O
« Last Edit: July 28, 2021, 07:18:36 pm by Noopy »
 

Online mawyatt

  • Super Contributor
  • ***
  • Posts: 3244
  • Country: us
Re: Decapping and Chip-Documentation - Howto
« Reply #140 on: July 28, 2021, 07:09:12 pm »
Interesting stuff!! Many of the older astro imagers from Santa Barbara Instrument Group were based on Kodak chips that were cooled with TE coolers. Those chips were fabricated by IBM, not Kodak tho.

Best
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6186
  • Country: ro
Re: Decapping and Chip-Documentation - Howto
« Reply #141 on: July 28, 2021, 07:34:14 pm »
I think there are more choices for the sensors now, the one I found out about recently (by serendipity) is ICX453AQ, apparently the sensor from Nikon D70 DSLR.  The sensor can be desoldered from second hand cameras, or bought as a spare part for about $20-50.

There is also a small kit board to read the sensor:  http://astroccd.org/2016/10/cam86/

Online mawyatt

  • Super Contributor
  • ***
  • Posts: 3244
  • Country: us
Re: Decapping and Chip-Documentation - Howto
« Reply #142 on: July 28, 2021, 08:06:12 pm »
Had a D70 long ago, superb DSLR. Took a number of years before CMOS could approach CCD noise performance. Don’t think CMOS ever surpassed CCDs in noise performance tho.

Best,
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #143 on: July 28, 2021, 09:44:57 pm »
Weekly status update from project "Beyond Matchbox" :D

Attached picture was produced using 100% webcam components plus the usual smartphone mirror foil and stuff to hold it all together. Scale is 2px/µ and a resolution of some 500lp/mm (or one "meaningful" pixel per micron) appears to be achieved, although there isn't really a regular pattern anywhere on the die that would allow confirming it with certainty. That's close to the customary theoretical maximum, which is 700~1200lp/mm (depending on color) for a lens of f/2 aperture, typical of webcams. Producing a usable image at the absolute limits would require heroic postprocessing effort anyway; this one is only sharpened a bit.
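That 700~1200lp/mm range matches the ideal diffraction cutoff of a lens, 1/(λ·N). A quick check (the wavelengths here are just typical picks for blue, green and red):

```python
# Diffraction cutoff spatial frequency of an ideal lens, in line pairs
# per millimetre: 1 / (wavelength_in_mm * f_number).
def cutoff_lp_per_mm(f_number, wavelength_nm):
    return 1.0 / (wavelength_nm * 1e-6 * f_number)

for color, wl in [("blue", 450), ("green", 550), ("red", 650)]:
    print(color, round(cutoff_lp_per_mm(2, wl)), "lp/mm")
# roughly 1111 (blue), 909 (green), 769 (red) at f/2
```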

The lens is the same one which was used last year, but optical performance is somewhat better than the sample I posted yesterday. This is because the old system involved lens stacking on a compact camera, which contributed a bit of diffraction in its own lens and had relatively low magnification, barely sufficient for the pixel pitch of the sensor. With only the webcam lens, magnification is easily increased and thanks to Hugin I don't need to worry about field of view either.

The die is not very pretty because it came from my pile of ICs that didn't make it into the final cast of the "fake opamps" post last year. It shows signs of slight overheating: changed color in some places and a lot of pale areas on the metal. However, the tortoise pattern which looks like it could be surface cracking actually is not; it's just residue left by dried acetone. I have serious problems getting those things clean: washing them leaves patterns like that, wiping them leaves dust |O Not sure how to deal with it, maybe some photographic sensor cleaning kits or lens paper? Such stuff is supposed to be dust free, right?

Unfortunately, M42 rings are barely suitable for setting focus with the webcam lens. Their threads are somewhat loose and they wobble. I had less of a problem using the 10x objective with the DCR-250, not sure why. Not sure what to replace them with; they really are quite convenient in every other regard.

Sometimes you see some colored pixels in dark pictures. That is not really noise. For example, I often have a lonely red pixel in the lower right corner. I have bonded with this pixel.  ;D It would be no big problem to subtract these pixels. My camera can take a dark picture and do the subtraction automatically, but that takes twice the time.  :-/O
Some cameras support permanent bad pixel correction, which takes a dark shot and saves the list of stuck pixels for automatic removal from future photos.

BTW, maybe you should sell that "Canon" stuff and mount a webcam lens on the DSLR? I'm not sure if that has been tried yet :-DD
« Last Edit: July 28, 2021, 09:48:14 pm by magic »
 

Offline NoopyTopic starter

  • Super Contributor
  • ***
  • Posts: 1730
  • Country: de
    • Richis-Lab
Re: Decapping and Chip-Documentation - Howto
« Reply #144 on: July 29, 2021, 03:17:23 am »
Weekly status update from project "Beyond Matchbox" :D

It´s amazing what you are able to achieve with your webcam parts!  :-+


I have serious problems with getting those things clean: washing them leaves patterns like that, wiping them leaves dust |O Not sure how to deal with it, maybe some photographic sensor cleaning kits or lens paper? Such stuff is supposed to be dust free, right?

I clean loose dirt with a normal kleenex and a good amount of IPA. If there are some fibers left I use canned compressed air to blow them away. Works fine for me.


Sometimes you see some colored pixels in dark pictures. That is not really noise. For example, I often have a lonely red pixel in the lower right corner. I have bonded with this pixel.  ;D It would be no big problem to subtract these pixels. My camera can take a dark picture and do the subtraction automatically, but that takes twice the time.  :-/O
Some cameras support permanent bad pixel correction, which takes a dark shot and saves the list of stuck pixels for automatic removal from future photos.

Yes, should be possible too.


BTW, maybe you should sell that "Canon" stuff and mount a webcam lens on the DSLR? I'm not sure if that has been tried yet :-DD

 ;D
 
The following users thanked this post: magic

Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #145 on: July 29, 2021, 06:56:56 am »
People have been using webcam, smartphone and similar lenses for macro photography for many years. I think they work well because in their normal use they need to support very high pixel densities on the sensor side. The webcam I converted has ~3µ pixels and low resolution (1280x1024) so if the lens limited it further, the problems would be quite visible and one couldn't hide them by scaling down since even full resolution doesn't really fill a modern screen.

Or maybe I just got lucky with mine ;D If anyone wants to know, it's Esperanza EC105, which appears to be a Polish company that sticks their logo on random Chinese stuff, so it may be available under different names elsewhere. I think any of those big, "cannon style" webcams with similar lens and HD or better resolution is likely to work.

The sensors on those things aren't great, though. One reason I picked the M42 system was to have a path for potential upgrade to better cameras, up to a MILC.

Another missing piece is a system to automatically move the die under the camera, so that even large dice could be "scanned" with little effort. Hugin is working reliably so far, so stitching a 1000-frame mosaic would hopefully only be a matter of waiting an hour for the result.
 
The following users thanked this post: Noopy

Offline NoopyTopic starter

  • Super Contributor
  • ***
  • Posts: 1730
  • Country: de
    • Richis-Lab
Re: Decapping and Chip-Documentation - Howto
« Reply #146 on: August 05, 2021, 04:51:53 am »
- 100-400mm 4.5-5.6 AND 10-22mm 3.5-4.5 => huge magnification, but due to diffraction probably not ideal
- 100mm 2.8 AND 10-22mm 3.5-4.5 => better quality than today?
- 100-400mm 4.5-5.6 OR 100mm 2.8 AND 35mm 1.8 => I wasn't happy with the 35mm 1.8. Perhaps stacked it gets better.
- I have a nice 24-70mm 2.8; perhaps I will try that too. Probably as the second lens, because 70mm seems to be a little short for the first lens.
- Would it make any sense to stack the MP-E 65mm 1-5x? No, probably not...

I think I was quite lucky with my 10-22mm retro. I didn't get much better picture quality with the other combinations.

100mm stacked with 10-22mm was slightly better than 10-22mm retro without the 100mm, but not by much. Besides that, this combination gives me just a magnification of 10x, and it is less handy.  :-\

The stack 400mm - 24mm was huuuge.  :o It looked very funny and expensive. Sorry, no picture. Imagine a "Canon 100-400mm 4.5-5.6 L" zoomed to the maximum stacked with a "Canon 24-70mm 2.8 I" zoomed to the maximum (inverse zoom).  ;D
A magnification of 17x seemed to be good, but the image quality was bad. :'(
« Last Edit: August 05, 2021, 05:02:43 am by Noopy »
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #147 on: August 05, 2021, 04:13:42 pm »
10x magnification doesn't seem too bad. It's ~6px per minimum resolvable point distance at 550nm if I'm not mistaken*, probably near the reasonable minimum for comfortably avoiding sensor bottleneck.

The f/1.8 would be nice but even more magnification is needed to really take advantage of it on APS-C. It's not gonna be easy with f=35mm. That's a setup that would look PRO :wtf:

*calculations specific to EOS 60D with f/4 lens, of course
« Last Edit: August 05, 2021, 04:27:23 pm by magic »
 

Offline NoopyTopic starter

  • Super Contributor
  • ***
  • Posts: 1730
  • Country: de
    • Richis-Lab
Re: Decapping and Chip-Documentation - Howto
« Reply #148 on: August 06, 2021, 03:26:53 am »
10x magnification is quite ok but I personally need more!  :-/O ;D

Offline magic

  • Super Contributor
  • ***
  • Posts: 6761
  • Country: pl
Re: Decapping and Chip-Documentation - Howto
« Reply #149 on: August 06, 2021, 06:38:14 am »
More magnification or more resolution? :box:

Here's an interesting discovery: if 10x is needed to easily recover the full resolution of f/4 on a typical APS-C sensor, then a typical 10x0.25 microscope objective (f/2 equivalent) may slightly outresolve such a sensor. Same with 5x0.12, 20x0.4 and so on.
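The "f/2 equivalent" figure comes from the standard conversion between numerical aperture and working f-number, N ≈ 1/(2·NA) (paraxial approximation, objective in air). A tiny sketch:

```python
# Object-side f-number equivalent of a microscope objective's NA
# (paraxial approximation, objective in air): N = 1 / (2 * NA).
def na_to_f_number(na):
    return 1.0 / (2.0 * na)

for label, na in [("5x 0.12", 0.12), ("10x 0.25", 0.25), ("20x 0.40", 0.40)]:
    print(label, "-> f/%.2f" % na_to_f_number(na))
# 10x 0.25 -> f/2.00, matching the "f/2 equivalent" figure
```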

I'm sure it makes bug shooters happy, but it seems not ideal for high magnification closeups and getting the most fine detail out of the optics. Testing would be needed to establish the exact limits, including Bayer and anti-alias effects. I suppose blue is the worst case, assuming that you want to take advantage of the short wavelength and capture more detail in blue than in green/red. Red is also affected by 25% pixel density but at least it's blurred more so there is less detail to recover. Some analysis of the problem has been made at PM and linked by mawyatt last year, but it seems mostly theoretical.

Full frame is even worse. You have to buy the highest end to get a mere 45Mpx and the same 4.3µ pixel size.

Nikon 1? :-DD

edit
Alternative kludges are possible. One could try an infinity objective with +2D tube lens on 50cm extension (or 500mm telephoto) to double the magnification :wtf:

These guys don't give a damn and just mount infinity objectives on a super long extension tube and focus them at a finite (but long) distance. However, I am not entirely sure if they know what they are doing and whether their techniques are optimal. The other day, they also identified a bipolar IC which could be some LM386 as a MEMS strain gauge ::)
« Last Edit: August 06, 2021, 06:59:02 am by magic »
 

