Programming languages are an entirely artificial construct. In the beginning, they must somewhat follow the structure of the CPU, which itself is made with many compromises and arbitrary decisions.
That is completely incorrect. See e.g. programming language theory, lambda calculus, or the Lisp, Haskell, Forth, and PostScript programming languages. There are no such universal ties at all.
Of course, many programming languages have some ties to the hardware they run on, but there is nothing universal about those; they vary a lot. You could say the concept of a stack is the closest thing, as most programming languages and hardware architectures rely on such an abstract data type, except that there exist both hardware and programming languages that have no stack (or comparable structure) at all. (Shaders are probably the most common example.)
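To make the abstract data type concrete, here is a minimal stack sketch in Python (illustrative only; the names are mine, and nothing here is tied to any particular hardware or language):

```python
class Stack:
    """Minimal last-in, first-out (LIFO) abstract data type."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        # Raises IndexError when the stack is empty.
        return self._items.pop()

    def __len__(self):
        return len(self._items)


s = Stack()
s.push("return address")
s.push("local variable")
print(s.pop())  # → "local variable": last pushed, first popped
```

The same interface could just as well be backed by a fixed memory region and a stack pointer, which is exactly how hardware call stacks realise it.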
The exact same "artificial construct" argument applies to mathematics. Look at metamathematics to understand why. Everything we can conceive is "an entirely artificial construct". Every single thing, even mathematics, includes underlying assumptions or axioms that are not obvious, but must be accepted as Given Truth for the thing to have any meaning, applicability, or utility. Everything a human being is able to "know" is filtered through their experiences and limited understanding.
There is nothing arbitrary about physics.
Of course there is!
One of my favourite examples is basic optics. You can think of light rays (simplified models of photons that exclude absorption and emission events, as if we used perfect mirrors or lenses) as vectors from a light source to a surface (or vice versa), which reflect (with the original and reflected rays lying in the same plane as the reflecting surface normal, and both rays making the same angle with that normal) and refract (as described by Snell's law) at surfaces between isotropic materials. Or you can think of a light ray as a geometric object that minimizes the time of flight between any two points in space, as described by Fermat's principle. The two approaches are mathematically equivalent, in that the former can be derived from the latter, so both describe physical reality equally well. The first is easy to apply using basic vector calculus; the latter is related to the calculus of variations, an equally powerful but very different mathematical tool.
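The equivalence of the two viewpoints can even be checked numerically. The sketch below (Python, with illustrative values I chose: air into water at 40° incidence) computes the refraction angle once via Snell's law, and once by brute-force minimizing the optical path length over the crossing point, as Fermat's principle prescribes:

```python
import math

# Illustrative values (my choice): light crosses from air into water.
n1, n2 = 1.0, 1.33                    # refractive indices
theta1 = math.radians(40.0)           # angle of incidence, measured from the normal

# Viewpoint 1: Snell's law, n1*sin(theta1) = n2*sin(theta2).
theta2_snell = math.asin(n1 * math.sin(theta1) / n2)

# Viewpoint 2: Fermat's principle. The ray runs from A = (0, +1) above the
# surface (y = 0) to B = (bx, -1) below it; the optical path length is
# n1*|AP| + n2*|PB| for a crossing point P = (x, 0). Minimize it by a
# brute-force grid search over x.
bx = math.tan(theta1) + math.tan(theta2_snell)   # places B one unit deep on each side

def optical_path(x):
    return n1 * math.hypot(x, 1.0) + n2 * math.hypot(bx - x, 1.0)

best_x = min((bx * i / 200000 for i in range(200001)), key=optical_path)
theta2_fermat = math.atan2(bx - best_x, 1.0)

# Both viewpoints give the same refraction angle, about 28.9 degrees.
print(math.degrees(theta2_snell), math.degrees(theta2_fermat))
```

The crossing point that minimizes travel time is exactly the one Snell's law predicts; each viewpoint makes a different calculation easy.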
It is the underlying mathematical model that is not arbitrary. The way we humans interpret that model allows us to see different facets of the model's behaviour. We are not intelligent enough to see all those facets simultaneously.
Because programming languages are created by humans, their construction "encodes" one or more of such facets. Researchers can use tools like programming language theory or lambda calculus to analyse and compare programming languages, and to reveal those facets. Most programming languages have retained their viewpoint during their evolution, but some, PHP in particular, have either added to or switched their viewpoint. (Adding object-oriented features is the most common change.) One of the reasons PHP is so reviled is that humans are not good at changing their viewpoint, so code written in a mishmash of viewpoints is perceived as "messy", "unclean", and "difficult to understand".
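The viewpoint switch can be illustrated in a few lines. The sketch below (Python rather than PHP, with names I made up for illustration) expresses the exact same computation first from a procedural viewpoint and then from an object-oriented one; neither is more correct, they merely embed different abstractions:

```python
# Procedural viewpoint: data and functions are separate things.
def total(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# Object-oriented viewpoint: data and behaviour are bundled together.
class Order:
    def __init__(self, prices, tax_rate):
        self.prices = prices
        self.tax_rate = tax_rate

    def total(self):
        return sum(self.prices) * (1 + self.tax_rate)

prices = [10.0, 20.0]
# Same "fact", two viewpoints, identical result.
assert total(prices, 0.1) == Order(prices, 0.1).total()
```

A codebase that mixes both styles for the same kind of task forces the reader to switch viewpoints mid-read, which is exactly the perceived "messiness" described above.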
Note: The core of my argument is that there exists more than one viewpoint on the same "fact" or phenomenon. A viewpoint is a human tool to abstract and limit the "fact", so that our limited perception and reasoning abilities can utilize it. Whenever anything is codified by a human, a viewpoint is embedded in the definition. My claim is that the ability to switch viewpoints, especially viewpoints on the same "fact" or phenomenon, enhances its usefulness and the possibilities for application. Thus, limiting oneself to a subset of viewpoints limits one in both understanding and the ability to utilize those "facts" and phenomena.
Understanding Newtonian physics and thermodynamics has broad-ranging benefits for logical thinking, design, engineering, and problem solving.
I believe that is the effect of having learned a completely new set of viewpoints, not of the particular set of axioms, laws, and phenomena involved.
Perhaps he imagined that one day every man, woman, and child would be able to see the world the way he did.
Do you see what you did there?
One could say they can, if they want to, but that is probably not true. I believe there is a large fraction of people who are unable to change their viewpoint on anything: they are essentially "stuck" in the viewpoint they acquire when they first learn a thing. I don't think it is related to intelligence per se, but to some fundamental difference in how people learn and integrate things into their understanding; it is closely related to rote learning.