It doesn't matter how many Xeons you put in a workstation: if it doesn't have additional GPU processing power, it will never be a match for an old Core i5/i7 paired with hundreds of CUDA cores.
I don't agree at all. First, Lightroom can only use a single graphics card, so having two in SLI is of no use to it, and in my experience even that benefit is marginal... On my setup with 2 GTX 980s, disabling graphics acceleration actually improves performance significantly in LR, especially on the infamous local adjustment brush. Maybe with an average CPU the graphics card can help, but no graphics card does as well as a good CPU (in that app, at this time).
I work with 42MP RAWs, and frankly the performance of the image processing itself is quite OK considering the amount of data; that's not what I'm complaining about. What's unacceptably slow is the UI response. There is just no reason it should take a second or two every time to change view, switch folder, or even go from one image to the next, with all the UI elements loading one after the other like a webpage in the 90s. The UI is really a dog, especially on Windows.
Also, batch exports should render one image per available core in parallel (easy, since the exports are completely independent) instead of making poor use of multiple cores to render each image one after the other.
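Just to illustrate the idea, here's a minimal sketch of per-core batch export. The `render_image` function is a made-up placeholder for the CPU-heavy raw-to-JPEG conversion, not Lightroom's actual export code:

```python
import os
from concurrent.futures import ProcessPoolExecutor


def render_image(path):
    # Placeholder for the CPU-heavy raw -> JPEG conversion of one file.
    # Each call is fully independent of the others.
    return path.replace(".raw", ".jpg")


def batch_export(paths):
    # One worker process per core, each rendering whole images on its own,
    # instead of all cores cooperating (badly) on one image at a time.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(render_image, paths))


if __name__ == "__main__":
    print(batch_export(["a.raw", "b.raw"]))
```

With independent files there's no shared state to coordinate, so this scales close to linearly with core count, which is exactly where an 8-core chip like the 5960X should shine.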
System details:
ASUS RAMPAGE V
Core i7-5960X OC @ 4.1GHz
32GB quad-channel DDR4
System/work drive: 2x Intel 256GB SSDs in RAID 0
2x GTX 980
Windows 10
More interestingly, on my laptop (MacBook Pro) with a 2.8GHz quad-core i7, rendering a full-size preview takes about twice as long, but the overall experience is better since the UI is much smoother in the Mac version.
Oh, and the Mac also has a discrete NVIDIA card (GT 750M), and LR likewise does better with it disabled.