You may want to carefully read the very first line of the very first post.

Meanwhile, I came across this paragraph in the nonsensopedia article on von Neumann:
Independently, J. Presper Eckert and John Mauchly, who were developing the ENIAC at the Moore School of Electrical Engineering, at the University of Pennsylvania, wrote about the stored-program concept in December 1943. [8][9] In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay line memory. This was the first time the construction of a practical stored-program machine was proposed. At that time, he and Mauchly were not aware of Turing's work.
No source is given, as usual, but if true, the famous ENIAC would be another example of a "Turing complete" system designed without Turing.
With Turing:
Minion: "I want to build this machine I'm calling a computer. It'll cost $XYZ."
Boss: "Will it work?"
Minion: "Yes, and I can prove it mathematically."
one year later:
Boss: "What do you mean, addition is O(n²)?!"
FTFY. The truth is, nobody cared about theoretical "universality". From hand-cranked calculators, through Babbage's Analytical Engine and the electro-mechanical cipher-breaking machines of WW2, up to ENIAC, machines were designed with applications in mind, and people weren't stupid: they knew whether an application would be possible and how long it would take to execute. And when a new, more complex application came along, the design was expanded. Apparently both Babbage and the ENIAC designers reached the limit of "computability" without even knowing such a limit exists. That's how easy it is. Even the x86 page fault handler and C++ templates are said to be Turing-complete.
At the same time, execution time is a rather important factor that the "foundations of mathematics" crowd didn't care about at all, because building actual machines was not their point; the point was to come up with any non-handwavy definition of "effective method", and that's what they did. Turing may or may not have done some later work on complexity too, I don't know, but that's not what he is famous for, and it's not what his "computing machine" paper was about.
The theory of computability is foundational to security, and is widely important in all sorts of heavy compute activities.
That's complexity, not computability.
Every problem that can be solved by brute force is computable, and that's another reason why computability is mostly irrelevant in practice.
I will tell you who actually cares about that Turing/Church stuff outside of academia: advanced compiler design, static analysis, solvers for abstract logic problems, tools for abstract verification of digital IC designs, that sort of thing. And even there, it's a 50:50 mix of undecidable problems and "merely" NP-hard ones. But of course, as Cerebus says, you won't get far in those fields if you can't recognize the former, so it is important in that context.