Pretty much as lukier said - people don't use CUDA because they love it so much or because Nvidia is paying them, but because it is the only viable game in town.
If you don't like it, feel free to go old school and rewrite the calculations using e.g. compute shaders. That works too and the performance should be similar to CUDA (it is basically the same thing under the hood), but you will quickly wish you had CUDA to handle the boilerplate and housekeeping for you.
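To give a rough idea of what that boilerplate difference looks like, here is a minimal CUDA sketch of an element-wise vector add (names and sizes are placeholders, not from any particular project). Doing the same thing with a compute shader means you also have to create the device/context, compile the shader, allocate and bind the buffers and dispatch the work yourself; in CUDA the runtime and the <<<>>> launch syntax hide most of that.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial element-wise add; each thread handles one element.
__global__ void addKernel(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                   // 1M elements, arbitrary size
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; no explicit host<->device copies.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int block = 256;
    const int grid  = (n + block - 1) / block;
    addKernel<<<grid, block>>>(a, b, c, n);  // the launch is all the "setup" needed
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);             // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

(Unified memory via cudaMallocManaged is used only to keep the example compact; explicit cudaMemcpy is the more typical pattern in real code.)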
SLAM, photogrammetry, deep learning - none of these can realistically be done at a genuinely useful scale unless you are willing to spend money, whether on hardware or on something like Google or Amazon cloud compute services. CPU-only variants are good for toy-sized problems only; it would take ages to process any real-world dataset that way.
So if you can't afford a decent GPU, don't waste your money buying an underpowered one. You will be fighting an uphill battle and will give up in disgust sooner rather than later. Many of these things need iteration: you tweak some parameters, run the code for a few hours, then tweak some more and run it again, until you get something acceptable. If those few hours per round turn into days, you will quickly give up.
And regardless of that - using that measuring tape is still going to be faster than any of this.
E.g. just today I was playing at work with a deep learning-based animation system presented at this year's SIGGRAPH. It can generate data for steering a physically controlled (PID and such) simulated robot by learning from an example animation, such as a motion-captured keyframe animation performed by a human actor. It's a complex problem, but once trained the system runs in real time. However, training it for a single animation on one beefy computer, with 16 CPU threads in parallel plus a single GPU, takes over a day. Without the GPU it is completely nonviable; it would take weeks.
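For anyone wondering about the "PID and such" part: the network does not drive the character directly, it outputs targets (e.g. per-joint angles), and conventional feedback controllers turn the error between target and measured state into torques for the physics simulation. A minimal per-joint PID step would look roughly like this - the gains and names are made up for illustration, not taken from the paper, and it's plain C++ that a CUDA project would compile unchanged:

```cuda
// One PID update per joint, per physics step. The trained network supplies
// 'target' each frame; the physics engine supplies 'measured'.
struct JointPID {
    float kp, ki, kd;        // proportional, integral, derivative gains (placeholders)
    float integral  = 0.0f;
    float prevError = 0.0f;

    float step(float target, float measured, float dt) {
        float error = target - measured;
        integral += error * dt;
        float derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;  // torque to apply
    }
};
```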