We have tried every half-assed scheme for education except 'algorithms'
Algorithms were tested too. See teaching machines in the first half of the 20th century. The idea gets reiterated every generation. In my time it was connecting holes with wires to match answers to questions, educational video games, and multimedia encyclopedias. Each billed as the next revolution in teaching, praised by self-appointed experts: people running around with a shiny new hammer, searching for nails to drive, and in the end addressing non-existent problems.
Do I have any reason to think the story will not repeat with ML algorithms, at least as they are today and will be in the near future? </rhetorical_question>
At some point in the future, with much more complex smortnet architectures, I believe automated teaching will be possible. But not with stuff like GPT, even if its performance improved tenfold. It’s not a matter of performance, but of a mismatch between how these models operate and the problem to be solved.
Where could existing machine learning solutions be used as auxiliary tools in teaching? Spotting grammatical or spelling errors. But… this is already solved with much more appropriate algorithms. Indicating errors in solutions. But that would be of help to teachers, not to kids. Unless you want to teach children to never trust the machine, leading to a situation resembling the one when electronic calculators were introduced: the calculator operators would use the device and then repeat the calculations by hand… to make sure the machine hadn’t made a mistake. Also note these are classifiers, not generators. For generators I see even less use. Perhaps providing hints to make learning more effective? Helping with pronunciation by pointing out mistakes? Being a simple data retrieval engine with a natural language interface (in a third of cases answering “please ask your teacher”)?
Things change a bit when talking about mature learners: people who are conscious of their own learning process and can be properly critical of the feedback they receive. But not schoolchildren.
Coincidentally: today I gave ChatGPT an example of an implication paradox and asked it to explain why it is valid despite intuitively seeming wrong. It did beat 99% of the human population, because it did not answer “what the hell is an implication.”
Other than that, it simply kept repeating that it is fine because that is how implication is defined in logic. So much for being helpful in teaching. And I say that despite being aware that the majority of maths teachers would not be able to give a better answer. But if we are talking about replacing teachers, the algorithm should do considerably better.
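For readers who have not met the paradox: material implication p → q is defined as equivalent to (not p) or q, so a false premise makes the implication true regardless of the conclusion. A minimal sketch of the truth table (the function name is mine, not part of any library):

```python
# Material implication: p -> q is defined as (not p) or q.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Print the full truth table.
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5}  p->q={implies(p, q)}")

# The counterintuitive part: when p is False, p -> q is True no matter
# what q is. So "if 2+2=5, then the Moon is made of cheese" counts as a
# (vacuously) true implication -- exactly the kind of thing ChatGPT could
# only restate as "that's how implication is defined."
```

Explaining *why* the definition is chosen this way (e.g. so that "all x satisfy: if x is even then x+1 is odd" stays true even for odd x) is the part a good teacher adds and the chatbot didn't.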