While many people have no doubt used variables named "E", "m", and "c" in mathematical formulas, would an LLM ever arrange them as "E = mc^2", along with justification, before the first human did that?
Not sure where this argument is going: clearly we developed relativity -- well, depending on what meaning is attached to those variables it could be many things, but let's take the case most familiar to us here -- well before LLMs. It seems likely that any society would; while relativity is not a strict prerequisite for advanced computers, it seems wildly unlikely it wouldn't be uncovered along the way.
Or if you mean not strictly chronologically, but at a given time where both coexist [humans and LLMs, give or take the potential knowledge of relativity]: clearly the presence of one in the training data of the other makes it much more likely -- give or take coherence length, of course. But then ordering doesn't make sense...
Or perhaps, assuming a state of society where humans exist and relativity doesn't, but the conditions are ripe to discover it; versus a society in the same state, but somehow equipped with LLMs trained on current scientific and mathematical progress and consulted as an oracle on top of that; perhaps then it contributes, as a matter of incremental change. Structurally significant changes seem unlikely (like the choice of tensor notation in GR), but it could still bring insights and attention to certain overlooked combinations of ideas.
I don't know that an LLM can't produce semantic insights as such. (Is there even a way to prove that?) Perhaps a more complex design is required (not merely scaling things up) than what we've been trying so far. Eventually we will crack the problem of feedback poisoning the training, though whether we still call such a model an "LLM" at that point, I have no idea.
Or, put another way: what structural differences exist between biological neural networks and their current-tech equivalents that specifically implement or rule out (as the case may be) such a nature?
I suspect no one knows enough about either case, as yet, to make a reasonably confident and meaningful statement on this. But I haven't been following either subject in detail, and I know there have been recent developments in e.g. the human connectome.
The other absolutely key point is that you can ask a person why they chose their course of action. That is still - after 40 years - an "active research topic" for LLMs. Translation: good people have repeatedly tried and failed.
You can ask a person that, but you may not get a truthful answer. Or even a meaningful one.
Even just asking for a process of reasoning presupposes one existed; it's a loaded question! That's a bias we need to be extremely careful about when working with systems that don't learn and "think" the way we do.
Often a much more useful line of questioning is: "How do you feel? What caused you to feel this way?"
(It will take, it seems, a few degrees more complexity (in whatever structural or scaled way applies) before an LLM can generate reasonable reflections. Perhaps a sort of self-awareness can arise out of feedback loops without corrupting overall system state; the real challenge is training such a system without an internet-sized corpus of self-reflection!)
Most humans do happen to be at least conversant in syllogisms (if not strictly technically accurate ones), but even so, people are often not aware of their own thought processes. Emotions and hormones precipitate actions from stimuli, but fail to produce any such (strict) log of [rational justification] --> [action] --> [further justification] (etc.). Or worse still, they do, and the reasons turn out to be completely irrelevant on closer inspection.
The subconscious-doer / conscious-observer model shows up from time to time, with particularly exaggerated discrepancies in pathological cases (perhaps a disorder or trauma breaking connections between parts of the brain), such that the consciousness confabulates its own explanation of actions without knowing how or why they were taken. When these two aspects are working together, we perceive a convincingly coherent, cooperative whole (or so we are conditioned to!), but it's equally possible that that's all we ever were: a pasted-together hack of very (mutually) convincing behaviors that happens to do well enough, and -- importantly -- that evolution can turn enough knobs on to optimize for survival under such-and-such conditions.
It seems to me there are two loud camps in the LLM conversation: those that consistently deny that an LLM (as we know it currently) can surpass a human in (a) any or (b) all traits, or (c) that any model (current or foreseeable) ever can; and those that are so head-over-heels impressed with the successes that they are willfully blind to the blemishes of current tech (though, to be fair, that error is indeed decreasing as models improve, and it's yet to be seen where it ends).
The deniers will willfully refuse to accept the inevitable advance until they are subsumed by it, obsoleted. The enthusiasts play the gamble of riding the curve: hoping that the current blemishes are slight enough, or can be ignored to adequate (read: marketable) satisfaction by enough users, that they can point to it and say "see how cool this is?" without it crashing and burning in the near term -- or hoping to make big bucks by investing in it. Meanwhile, the enthusiasts may hope it keeps going up long term; but that's a devil's bargain, as their best long-term prospect is also to be obsoleted by it.
I would humbly suggest a centrist-flavored trilemma -- which, granted, as centrism often does, may reflect my abject ignorance of this topic -- namely that both of these things can be true to varying degrees, and that there exists a far more terrifying insight that both camps have missed and should instead be focusing on:
Sooner or later, we will realize the total extent of the human psyche; we will map out the brain and "explain" consciousness -- as much by designing and probing these models, and seeing how they reflect upon (analogize, or indeed even emulate) the behavior and nature of the meat kind, as by probing the meat kind directly. A dark, Lovecraftian sort of horror awaits us here: there will be no discovery of "soul", no deep spiritual insight, only the deep and forbidden knowledge of the self on a fundamental working level (if to a very loose level of abstraction, since a system cannot possibly understand itself *fully* from within itself, but at certain levels of understanding, or approximations thereto, sure). Rather than the maddening infinite, we will be faced with the utter banality of algorithmic existence. Will this drive some to solipsism? Will others seek refuge in belief, denying until the end that such knowledge can even exist? Will some simply make peace with existence as a finite and knowable being? Perhaps there will be a spiritual awakening, working outward rather than inward: having disproven metaphysics, we might redefine "spirit" or "soul", "consciousness", "free will", etc. into descriptive processes or perceptions, rather than the unfalsifiable belief systems people create around them today.
Tim