Let's continue the useless discussion. Although... I mostly agree.

Comparing performance measures of different architectures is hard, or even useless, because the measures themselves are not the same.
If different architectures can be applied to the same problems, their performance is comparable, by definition.
I agree with your statement. I missed the phrase 'different measures' there. My bad.
In that sense, it is even debatable whether IQ really represents anything. Some IQ tests include language, numbers, etc. Does that also mean that people who have dyslexia or dyscalculia are dumber?
In normal times it would be uncontroversial to state that an intellectual disability was disabling. We live in more interesting times, where stating the truth unfortunately requires more courage.
I agree. Someone with dyscalculia will likely not succeed in a PhD in math. Hats off to anyone who can pull it off nonetheless (it likely won't happen). They will likely struggle to do basic budgeting and to figure out what to pay for groceries.
But that does not mean their IQ number is absolutely low (as in disabling), even though, relatively speaking, we can find correlations with various conditions. For that, I think 'truth' and 'courage' are not the right words; let's say we should be more realistic. We tell our children they should chase their dreams and can be anything; but a medical student who faints at the sight of blood will likely not progress very far.
I doubt either should have an effect on the score; however, the way they consume and process information is 'different'. Herein lies a crucial observation: expressing IQ as a scalar value is kind of useless, because some people have perfect memory while others can string together many different concepts but need to look up everything they use.
Unlike Borges's tragic Funes, no one has "perfect memory". The patterns of information processing and retrieval in humans are universal, not "different". Individuals rely more on some patterns than others.
I was exaggerating "perfect" there.
However, I do think that people on the autistic or schizophrenic spectrum will tend to disagree that our information processing is universal. Many brilliant mathematicians or scientists have suffered from such a condition.
As far as I have read, I believe that IQ is a weighted average of several of these factors. Important pillars of good IQ performance are then short- and long-term memory and the ability to associate different concepts with the observations presented at the present moment. I think the latter is quite a fundamental measure of 'brain performance', as learning and understanding are basically all about it.
You are incorrect, and perhaps misled by your reading. IQ is an attempt to estimate the g factor, which is the shared factor in mental ability across different domains. Individual performance in one domain correlates strongly with performance in other domains. "Learning and understanding" is meaningless without an operational definition.
I agree, I am not an expert. I was not out to give a formal definition. All I wanted to say is that, in general, those factors are quite important yet general ways to express work or academic performance. However, I was probably misled by my own memory: I do recall that the various factors have strong correlations. If you do well in one area, you will likely do well in other areas.
Then trying to compare brains vs. computers becomes hard. Computers have memory storage expressed in bytes and computational power often expressed in FLOPS. However, in a sense, even our human cognitive computations are also associations. We have learned that the numbers go like [1, 2, 3, 4], etc., and we have learned what the mechanics of addition, subtraction, or multiplication are (e.g. that 2-1=1). However, if one had learned that the numbers went like [1, 3, 2, 4], then he/she might still be able to perform computations quickly (e.g. 2-1=3), but perhaps not correctly by our typical conventions. Likewise in computers, 2's complement is not the only way of storing an integer, and relearning number systems, or looking at redundant number systems, is quite an interesting perspective on how we can treat computations more efficiently.
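To make that representation point concrete, here is a rough Python sketch (purely illustrative; the helper names are mine, not any standard library): the same 8-bit pattern decodes to different integers depending on whether you read it as two's complement or as sign-magnitude.

    # Illustrative only: the same stored bit pattern means different integers
    # depending on the convention used to interpret it.

    def from_twos_complement(bits: int, width: int = 8) -> int:
        """Read `bits` as a two's-complement integer of `width` bits."""
        if bits & (1 << (width - 1)):      # sign bit set -> negative value
            return bits - (1 << width)
        return bits

    def from_sign_magnitude(bits: int, width: int = 8) -> int:
        """Read `bits` as a sign-magnitude integer of `width` bits."""
        magnitude = bits & ((1 << (width - 1)) - 1)
        return -magnitude if bits & (1 << (width - 1)) else magnitude

    pattern = 0b10000011                   # the "squiggle" sitting in memory
    print(from_twos_complement(pattern))   # -125
    print(from_sign_magnitude(pattern))    # -3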
I'm not sure how to properly respond to this. The numbers in a number system are defined by their relationships, not by the squiggles we use to write them. This is how all early mathematics education is structured, for example with the use of counters to demonstrate identities and inequalities. The figures that represent the numbers are conventions, but the numbers themselves are not. So if you exchanged the numerals "2" and "3", that is simply a different way of writing. The properties of the successor of the successor of the additive identity (conventionally known as "2") are not changed by this. This does not give rise to a "different method of computations".
The method of representation of a number in a computer's memory does not define any mathematical structure. It is an implementation detail much like ASCII or EBCDIC. The implementation of the machine's operations had better match this representation, to give "correct answers" in mathematical terms. But there are many cases where the answer is somehow exceptional, as with overflow or rounding. The machine is therefore only able to calculate within a limited domain (because unlike ideal Turing Machines, it has limited memory to work with). Whether the answers are "correct" is something the computer itself can never know. It requires an outside observer to compare the results to mathematical identities. (Yes, the outside observer could be another computer.)
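A minimal sketch of that limited domain, assuming 8-bit two's-complement arithmetic (the add8 helper below is hypothetical, not any particular machine's instruction). Within the representable range the machine's addition matches mathematical addition; one step outside it, the result silently wraps, and only an outside check against the mathematical identity notices.

    # Assumed model: an 8-bit two's-complement adder.

    def add8(a: int, b: int) -> int:
        """Add two integers the way an 8-bit two's-complement adder would."""
        result = (a + b) & 0xFF            # keep only the low 8 bits
        return result - 256 if result >= 128 else result

    print(add8(100, 27))   # 127  -> agrees with mathematical addition
    print(add8(100, 28))   # -128 -> overflow; wrong in mathematical terms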
Okay; let me rephrase. All the things we create are associations. The things we discover are mappings of how nature works onto our associations, eventually converging on axioms of math that are universally true. We expect to share the same mathematical axioms with the aliens we find (or who find us). But the way we write numbers, our radix-10 convention, ASCII, etc. will likely be completely different.
However, I think that we humans only deal with implementation-facing details in our heads. Most of us do not compute with the axioms of math directly; that's what I meant by saying that most of us do computations by association. What we have learned are mostly implementation details we created ourselves; most of us stay within a radix-10 number system (a domain) in daily life. But who knows; maybe a radix-12 or radix-8 system would have worked way better? But does that make an individual more intelligent if they can do radix-8 computations in real time? I doubt it; however, I do not have any argument to back that up.
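As a toy illustration (the to_base helper below is something I made up for this comment, not a claim about how anyone actually computes): the same subtraction written out in radix-10, radix-8, and radix-12 is the same underlying computation; only the digit strings we memorised differ.

    # Toy example: one subtraction, three notations, one value.

    def to_base(n: int, base: int) -> str:
        """Render a non-negative integer in the given base."""
        if n == 0:
            return "0"
        digits = []
        while n:
            digits.append("0123456789ab"[n % base])
            n //= base
        return "".join(reversed(digits))

    a, b = 29, 13
    print(to_base(a, 10), to_base(b, 10), to_base(a - b, 10))   # 29 13 16
    print(to_base(a, 8),  to_base(b, 8),  to_base(a - b, 8))    # 35 15 20
    print(to_base(a, 12), to_base(b, 12), to_base(a - b, 12))   # 25 11 14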
What I mean to say is: the concepts we treat as cognitive computational power are, in a sense, also memory lookups of a convention that we take for granted. The brain is a huge association machine, which is more like a massive FPGA with lots of block RAM spread around.
Most computer models of "cognition" are also based on wide association nets, and have been since the beginning of the field. Newell and Simon's Logic Theorist (1956) operated on maps of expressions in propositional calculus. Since computers are universal (Church-Turing Thesis) they can work on associations, and their performance on this task is measurable against humans. Logic Theorist discovered a proof of one of the theorems in the Principia Mathematica that was shorter than the one found by Whitehead & Russell.
I agree. Most models look like trained associations.