Makes me wonder why I can still make some useful programs even though I can't prove them formally?
Because you apply rules that either someone else has worked out and taught you, or that you yourself have found via practical experimentation. You do not know the domain where those rules are reliable; you just write the program according to the rules you know.
The problem is, you do not know when your program no longer produces useful results. You only know "it works for me". It is usually a surprise to you when the program fails in certain cases. Those who understand the implementation formally usually grasp immediately where the problem lies when the input and the error or problem are described to them.
(This assumes that by "prove them formally", we mean that you have a complete understanding of the implementation, not that you know how to convey such understanding in standard form. Being able to convey the understanding somehow is important, though, because we humans often believe we completely understand something until we actually try to convey that understanding, even if only to a rubber duck.)
I taught myself to program before I had had any higher math or formal logic classes. I am by nature analytical, so it was a huge relief and excitement when I found the formal math used to describe these things; it was like finding a language that could express the things I had been trying to grasp, and then being able to apply the tools hundreds of mathematicians and logicians had developed to solve more and more complex problems.
(I myself am "bad" at writing formal proofs. For some reason, I seem to see and think a bit differently about certain things, so those who do formal proofs can usually simplify mine into much simpler and easier-to-understand expressions. But that is not the same thing as my proofs being invalid, or wrong, or incomplete; just, uh, "artless".)
While those formal proof things can seem a waste of time, they actually come packaged with extremely powerful tools for algorithm analysis and development, which will let you cut some problems into trivial bits, and do so with simple code that you probably wouldn't understand without those tools.
Recursion and loop analysis, algorithm time and space complexity, and logic- and graph-theory-based approaches to certain problems come to mind.
Here is a practical example:
Consider a large chessboard-like grid. You have different colored pebbles on the grid. Whenever two pebbles of the same color are in adjacent cells, they belong in the same group. The question is, how do you count the number of groups on the grid?
The solution is a simple abstract data type called a disjoint-set (union-find), preferably with path compression. Start with each pebble as its own singleton set, then apply Union to each pair of adjacent same-colored pebbles; you end up with one distinct set per group. Flatten the disjoint-set data structure, and you have a group identifier for each pebble. Space-wise, this approach needs two numbers (each between 0 and the total number of pebbles, inclusive) per pebble. Time complexity is better than you'd expect, nearly linear in the number of pebbles, but the analysis is a pretty complex topic.
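To make that concrete, here is a minimal sketch in Python. The grid representation (None for an empty cell, any other value for a pebble color) is my own illustrative choice, not part of the problem statement:

```python
# Count groups of same-colored adjacent pebbles using a disjoint-set
# (union-find) with path compression. None marks an empty cell;
# any other value is a pebble color (an assumed representation).

def count_groups(grid):
    rows, cols = len(grid), len(grid[0])
    parent = {}  # each pebble cell starts as its own singleton set

    def find(cell):
        # Walk up to the root, then point every visited cell
        # straight at it (path compression).
        root = cell
        while parent[root] != root:
            root = parent[root]
        while parent[cell] != root:
            parent[cell], cell = root, parent[cell]
        return root

    def union(a, b):
        parent[find(a)] = find(b)

    # All-singleton start: one set per pebble.
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is not None:
                parent[(r, c)] = (r, c)

    # Union each adjacent pair of same-colored pebbles.
    # Checking only right and down covers every adjacency once.
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None:
                continue
            for nr, nc in ((r, c + 1), (r + 1, c)):
                if nr < rows and nc < cols and grid[nr][nc] == grid[r][c]:
                    union((r, c), (nr, nc))

    # "Flatten": each distinct root identifies one group.
    return len({find(cell) for cell in parent})

grid = [
    ["red",  "red",  None],
    [None,   "red",  "blue"],
    ["blue", None,   "blue"],
]
# The three reds touch; the two right-column blues touch;
# the bottom-left blue stands alone.
print(count_groups(grid))  # -> 3
```

Note that diagonal cells are not treated as adjacent here; if your problem counts diagonals, the neighbor list needs the other directions too.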
You can use this data structure without understanding why it works, but unless you know its limitations and requirements, you won't know if or when it fails.
Without that understanding, it is almost impossible to say (beyond a pure guess) whether it can solve some problem more efficiently or faster than some other method. With algorithm analysis, you can at least easily compare asymptotic behaviour.
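As a toy illustration of what "comparing asymptotic behaviour" buys you (the functions and data here are my own example, not from the discussion above): both routines below answer the same membership question, but analysis tells you one is O(n) per query while the other is O(1) expected, no benchmarking or guessing required:

```python
# Two ways to answer "is x in the collection?" with very different
# asymptotic behaviour. (Illustrative example; names are made up.)

def member_scan(items, x):
    # Worst case touches every element: O(n) per query.
    for item in items:
        if item == x:
            return True
    return False

def member_hash(items_set, x):
    # Hash lookup: O(1) expected per query.
    return x in items_set

data = list(range(100_000))
data_set = set(data)
print(member_scan(data, 99_999), member_hash(data_set, 99_999))  # -> True True
```

The analysis also tells you when the "slower" one wins: for a handful of one-off queries, building the set costs more than a single scan.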