What's the clock speed/flops per second of our brains?
helius:
--- Quote from: hans on February 10, 2021, 02:37:11 pm ---Comparing performance measures of different architectures is hard, or even useless, because they are not the same.
--- End quote ---
If different architectures can be applied to the same problems, their performance is comparable, by definition.
--- Quote ---In that sense, it is even debatable whether IQ really represents anything. Some IQ tests include language, numbers, etc. Does it also mean that people who have dyslexia or dyscalculia are dumber?
--- End quote ---
In normal times it would be uncontroversial to state that an intellectual disability was disabling. We live in more interesting times, where stating the truth unfortunately requires more courage.
--- Quote ---I doubt either should have an effect on the score; however, the way they consume and process information is 'different'. Herein lies a crucial observation: expressing IQ as a scalar value is kind of useless, since some people have perfect memory while others can string together many different concepts while needing to look up everything they use.
--- End quote ---
Unlike Borges's tragic Funes, no one has "perfect memory". The patterns of information processing and retrieval in humans are universal, not "different". Individuals rely more on some patterns than others.
--- Quote ---As far as I have read, IQ is a weighted average of several of these factors. Important pillars of good IQ performance are then short- and long-term memory and the ability to associate different concepts with the observations presented at the present time. I think the latter is quite a fundamental measure of 'brain performance', as learning and understanding are basically all about it.
--- End quote ---
You are incorrect, and perhaps misled by your reading. IQ is an attempt to estimate the factor G, which is the shared factor in mental ability across different domains. Individual performance in one domain correlates strongly with other domains. "Learning and understanding" is meaningless without an operational definition.
--- Quote ---Then trying to compare brain vs computers becomes hard. Computers have memory storage expressed in bytes and computational power often expressed in FLOPS. However, in a sense even our human cognitive computations are also associations. We have learned that the numbers go like [1, 2, 3, 4], etc. and we have learned what the mechanics of addition, subtraction, or multiplication are (e.g. that 2-1=1). However, if one had learned that numbers went like [1, 3, 2, 4], then he/she may still be able to perform computations quickly (e.g. 2-1=3), but perhaps not correctly by our typical conventions. Likewise in computers, 2's complement is not the only way of storing an integer, and relearning number systems and observing redundant number systems is quite an interesting philosophy on how we can treat computations more efficiently.
--- End quote ---
I'm not sure how to properly respond to this. The numbers in a number system are defined by their relationships, not by the squiggles we use to write them. This is how all early mathematics education is structured, for example with the use of counters to demonstrate identities and inequalities. The figures that represent the numbers are conventions, but the numbers themselves are not. So if you exchanged the numerals "2" and "3", that is simply a different way of writing. The properties of the successor of the successor of the additive identity (conventionally known as "2") are not changed by this. This does not give rise to a "different method of computations".
The method of representation of a number in a computer's memory does not define any mathematical structure. It is an implementation detail much like ASCII or EBCDIC. The implementation of the machine's operations had better match this representation, to give "correct answers" in mathematical terms. But there are many cases where the answer is somehow exceptional, as with overflow or rounding. The machine is therefore only able to calculate within a limited domain (because unlike ideal Turing Machines, it has limited memory to work with). Whether the answers are "correct" is something the computer itself can never know. It requires an outside observer to compare the results to mathematical identities. (Yes, the outside observer could be another computer.)
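A minimal Python sketch of that point (the helper names are mine, purely illustrative): the same 8-bit pattern decodes to different integers depending on the convention, and the machine's arithmetic only agrees with mathematics inside the representable domain. Checking that agreement is the outside observer's job.

--- Code: ---
# The in-memory representation is an implementation detail: one bit
# pattern, two decoding conventions, two different integers.

def as_unsigned(bits: int) -> int:
    """Interpret an 8-bit pattern as an unsigned integer."""
    return bits & 0xFF

def as_twos_complement(bits: int) -> int:
    """Interpret the same 8-bit pattern as a two's-complement integer."""
    b = bits & 0xFF
    return b - 256 if b >= 128 else b

pattern = 0b1111_1110
print(as_unsigned(pattern))           # 254 under one convention
print(as_twos_complement(pattern))    # -2 under another

# Overflow: 8-bit two's-complement addition wraps, so the machine's
# answer disagrees with the mathematical sum outside the domain.
a, b = 100, 100
machine_sum = as_twos_complement(a + b)
print(machine_sum, machine_sum == a + b)  # -56 False (external check)
--- End code ---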
--- Quote ---What I mean to say is: the concepts we treat as cognitive computational power are, in a sense, also memory lookups of a convention that we take for granted. The brain is a huge association machine, which is more like a massive FPGA with lots of block RAM spread around.
--- End quote ---
Most computer models of "cognition" are also based on wide association nets, and have been since the beginning of the field. Newell and Simon's Logic Theorist (1956) operated on maps of expressions in propositional calculus. Since computers are universal (Church-Turing Thesis) they can work on associations, and their performance on this task is measurable against humans. Logic Theorist discovered a proof of one of the theorems in the Principia Mathematica that was shorter than the one found by Whitehead & Russell.
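As a toy illustration of association-based inference (this is not a reconstruction of Logic Theorist; the rule format and symbols are invented for the example), a few lines of Python can forward-chain facts through propositional rules until nothing new follows:

--- Code: ---
# Forward chaining: each rule maps a set of premises to a conclusion,
# and we repeatedly fire rules until a fixpoint is reached.

rules = [({"rains", "outside"}, "wet"),
         ({"wet"}, "cold"),
         ({"socrates_is_man"}, "socrates_is_mortal")]
facts = {"rains", "outside"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)     # one single-step inference
            changed = True

print(facts)  # derives 'wet' and 'cold'; the Socrates rule never fires
--- End code ---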
Melt-O-Tronic:
It's fun to watch silly discussions about silly questions.
Helius, if we accept your defense of the OP's question, then what is the answer? :)
helius:
--- Quote from: Melt-O-Tronic on February 11, 2021, 04:15:20 pm ---It's fun to watch silly discussions about silly questions.
--- End quote ---
Sometimes the silly questions can be interesting and produce insights.
--- Quote ---Helius, if we accept your defense of the OP's question, then what is the answer? :)
--- End quote ---
I already gave my answer in Reply #11. Brains are simply very, very, very bad at the tasks that we designed computers to perform. Well, what about other kinds of tasks? There are other kinds of metrics.
KLIPS, thousands of logical inferences per second, is a metric of how many single-step inferences a machine can make in a second. In the 1980s the fastest computers could hit about 100 KLIPS; today there are systems capable of 200,000 KLIPS.
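To make the metric concrete, here is a rough sketch (in Python, with a deliberately synthetic rule chain) of how such a figure is produced: count single-step rule firings and divide by wall-clock time. Real LIPS benchmarks are more careful about what counts as one inference.

--- Code: ---
import time

# Synthetic rule chain: fact(i) implies fact(i+1). Each firing is
# counted as one logical inference.
N = 1_000_000
facts = {0}

start = time.perf_counter()
for i in range(N):
    if i in facts:
        facts.add(i + 1)          # one single-step inference
elapsed = time.perf_counter() - start

print(f"{N / elapsed / 1000:.0f} KLIPS")  # thousands of inferences/sec
--- End code ---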
How many KLIPS is the brain capable of? That is a much harder question, but since inference is the neuron's function, and there are billions of neurons with trillions of synapses, around a million KLIPS seems like a conservative estimate. Are all inferences created equal? Clearly not, and we lack a good way to compare inferences in computers to those in brains. Neural network systems are trained using a method called "back-propagation" which is biologically impossible, so there is a lot we do not know.
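For reference, a minimal sketch of what back-propagation means for a single sigmoid neuron (the input, target, and learning rate are arbitrary): the error gradient is pushed backwards through the exact forward computation, which is the step with no known biological counterpart.

--- Code: ---
import math

w, b, lr = 0.5, 0.0, 0.1        # weight, bias, learning rate
x, target = 1.0, 1.0            # one training example

for _ in range(100):
    y = 1 / (1 + math.exp(-(w * x + b)))    # forward pass
    # backward pass: chain rule through the squared error and sigmoid
    grad = 2 * (y - target) * y * (1 - y)
    w -= lr * grad * x          # propagate the gradient to each parameter
    b -= lr * grad

print(round(y, 3))              # output has moved toward the target
--- End code ---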
ejeffrey:
--- Quote from: Beamin on February 06, 2021, 01:48:29 am ---How many flops per second.
--- End quote ---
I can do about 1 but not with very high precision.
You will find "serious" people estimating numbers over many orders of magnitude, but it is mostly nonsense. We basically don't know enough about how the brain works to even know how to quantify its computing ability. A lot of these estimates are based on what it would take to *simulate* what we know about neurons, but that is like figuring out how much computing power you need to run a SPICE model of every transistor in a CPU and then calling that the processing power of the CPU.
hans:
Let's continue the useless discussion. Although... I mostly agree. ^-^
--- Quote from: helius on February 10, 2021, 10:13:08 pm ---
--- Quote from: hans on February 10, 2021, 02:37:11 pm ---Comparing performance measures of different architectures is hard, or even useless, because they are not the same.
--- End quote ---
If different architectures can be applied to the same problems, their performance is comparable, by definition.
--- End quote ---
I agree with your statement; I meant 'different measures' there. My bad.
--- Quote ---
--- Quote ---In that sense, it is even debatable whether IQ really represents anything. Some IQ tests include language, numbers, etc. Does it also mean that people who have dyslexia or dyscalculia are dumber?
--- End quote ---
In normal times it would be uncontroversial to state that an intellectual disability was disabling. We live in more interesting times, where stating the truth unfortunately requires more courage.
--- End quote ---
I agree. Someone with dyscalculia will likely not succeed in a PhD in math. Hats off to anyone who can pull it off nonetheless (though it likely won't happen). They will likely struggle with basic budgeting and with figuring out what to pay for groceries.
But that does not mean their IQ number is absolutely low (as in disabling), even though relatively we can find correlations for various conditions. For that, I think truth and courage are not the right words; let's say we should be more realistic. We tell our children they should chase their dreams and can be anything, but a medical student who faints at the sight of blood will likely not progress very far.
--- Quote ---
--- Quote ---I doubt either should have an effect on the score; however, the way they consume and process information is 'different'. Herein lies a crucial observation: expressing IQ as a scalar value is kind of useless, since some people have perfect memory while others can string together many different concepts while needing to look up everything they use.
--- End quote ---
Unlike Borges's tragic Funes, no one has "perfect memory". The patterns of information processing and retrieval in humans are universal, not "different". Individuals rely more on some patterns than others.
--- End quote ---
I was exaggerating with "perfect" there.
However, I do think that people on the autism or schizophrenia spectrum would tend to disagree that our information processing is universal. Many brilliant mathematicians and scientists have had such conditions.
--- Quote ---
--- Quote ---As far as I have read, IQ is a weighted average of several of these factors. Important pillars of good IQ performance are then short- and long-term memory and the ability to associate different concepts with the observations presented at the present time. I think the latter is quite a fundamental measure of 'brain performance', as learning and understanding are basically all about it.
--- End quote ---
You are incorrect, and perhaps misled by your reading. IQ is an attempt to estimate the factor G, which is the shared factor in mental ability across different domains. Individual performance in one domain correlates strongly with other domains. "Learning and understanding" is meaningless without an operational definition.
--- End quote ---
I agree; I am not an expert, and I was not out to give a formal definition. All I wanted to say is that, in general, those factors are important yet broad measures of work or academic performance. However, I was probably misled by my own memory: I do recall that the various factors have strong correlations. If you do well in one area, you will likely do well in other areas.
--- Quote ---
--- Quote ---Then trying to compare brain vs computers becomes hard. Computers have memory storage expressed in bytes and computational power often expressed in FLOPS. However, in a sense even our human cognitive computations are also associations. We have learned that the numbers go like [1, 2, 3, 4], etc. and we have learned what the mechanics of addition, subtraction, or multiplication are (e.g. that 2-1=1). However, if one had learned that numbers went like [1, 3, 2, 4], then he/she may still be able to perform computations quickly (e.g. 2-1=3), but perhaps not correctly by our typical conventions. Likewise in computers, 2's complement is not the only way of storing an integer, and relearning number systems and observing redundant number systems is quite an interesting philosophy on how we can treat computations more efficiently.
--- End quote ---
I'm not sure how to properly respond to this. The numbers in a number system are defined by their relationships, not by the squiggles we use to write them. This is how all early mathematics education is structured, for example with the use of counters to demonstrate identities and inequalities. The figures that represent the numbers are conventions, but the numbers themselves are not. So if you exchanged the numerals "2" and "3", that is simply a different way of writing. The properties of the successor of the successor of the additive identity (conventionally known as "2") are not changed by this. This does not give rise to a "different method of computations".
The method of representation of a number in a computer's memory does not define any mathematical structure. It is an implementation detail much like ASCII or EBCDIC. The implementation of the machine's operations had better match this representation, to give "correct answers" in mathematical terms. But there are many cases where the answer is somehow exceptional, as with overflow or rounding. The machine is therefore only able to calculate within a limited domain (because unlike ideal Turing Machines, it has limited memory to work with). Whether the answers are "correct" is something the computer itself can never know. It requires an outside observer to compare the results to mathematical identities. (Yes, the outside observer could be another computer.)
--- End quote ---
Okay, let me rephrase. All the things we create are associations. The things we discover are mappings between how nature works and our associations, eventually converging on axioms of math that are universally true. We would expect to share the same mathematical axioms with any aliens we find (or that find us). But the way we write numbers, the radix-10 convention, ASCII, etc. will likely be completely different.
However, I think that we humans only deal with implementation-facing details in our heads. Most of us do not compute from the axioms of math in our heads; that is what I meant by saying that most of us do computations by association. What we have learned are mostly man-made implementation details, and most of us stay within the radix-10 number system (one domain) in daily life. But who knows; maybe a radix-12 or radix-8 system would have worked far better? Does it make an individual more intelligent if they can do radix-8 computations in real time? I doubt it, but I have no argument to back that up.
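A small sketch of that last point (the conversion helper is purely illustrative): the same value spelled in three radices. The arithmetic is untouched; only the written convention changes.

--- Code: ---
def to_radix(n: int, base: int) -> str:
    """Render a non-negative integer in the given base (2..16)."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = ""
    while n:
        n, r = divmod(n, base)
        out = digits[r] + out
    return out

n = 100
for base in (8, 10, 12):
    print(base, to_radix(n, base))  # 144, 100, 84: one value, three spellings

# 2 - 1 == 1 in every radix; only the squiggles differ.
--- End code ---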
--- Quote ---
--- Quote ---What I mean to say is: the concepts we treat as cognitive computational power are, in a sense, also memory lookups of a convention that we take for granted. The brain is a huge association machine, which is more like a massive FPGA with lots of block RAM spread around.
--- End quote ---
Most computer models of "cognition" are also based on wide association nets, and have been since the beginning of the field. Newell and Simon's Logic Theorist (1956) operated on maps of expressions in propositional calculus. Since computers are universal (Church-Turing Thesis) they can work on associations, and their performance on this task is measurable against humans. Logic Theorist discovered a proof of one of the theorems in the Principia Mathematica that was shorter than the one found by Whitehead & Russell.
--- End quote ---
I agree. Most models look like trained associations.