See also the question of whether transformer networks can "code."
They do not generate anything new. They can only combine parts from their original dataset; not as in copy-paste, but in a "this fits here" fashion.
The end result may look novel, but if the standard US abstraction-filtration-comparison test is applied to both the "novel" result and the original training dataset, you always find a 1:1 correspondence. This is why they are transformer networks and not general intelligence. It is also why, according to recent US judgments, transformer networks cannot be named as inventors on patents.
There are very strong arguments that general intelligence requires the ability to perceive and manipulate the environment, where "environment" includes other members of a similar species, and language is just a tool for manipulating others who share it. This means that if we survive long enough as an intelligent species, general intelligence will grow out of either embodied robots or simulations, not out of LLMs or transformer networks.
As to games (a 9-dan Go master), a pure imperative program could do that, if we were smart enough to write one. We already did it for chess; Go simply has many more permutations of the game state.
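The imperative skeleton behind classical game-playing programs is plain minimax search over the game tree. A minimal sketch below uses a toy game (single-pile Nim: take 1-3 stones, whoever takes the last stone wins) rather than chess or Go; the names are illustrative, but the structure is the same, and the point about Go is that its branching factor makes exhaustive search of this kind intractable, not impossible in principle.

```python
# Minimax over a toy game: single-pile Nim, take 1-3 stones per turn,
# the player who takes the last stone wins. Classical chess engines use
# the same imperative skeleton, plus depth limits, pruning, and
# evaluation heuristics; Go blows up the branching factor.
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones: int) -> bool:
    """True if the player to move can force a win from this state."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # We win if some legal move leaves the opponent in a losing state.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a winning move if one exists, else None."""
    for take in (1, 2, 3):
        if take <= stones and not current_player_wins(stones - take):
            return take
    return None
```

For real chess or Go the exhaustive recursion is replaced by a depth cutoff plus a position-evaluation heuristic, but the control flow stays this ordinary.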