> That is mainly a problem caused by the Linux kernel developers by changing internal structures at will without really thinking things through.
As I just explained, "without really thinking things through" != reality. It is actually a very common, very silly misconception.
If the Linux kernel developers did not change internal structures as needed, the kernel would not have the capabilities or hardware support it has today. Simply put, the lack of any rigid internal structure is the cost of being so versatile. Feel free to disagree, but the above is the actual reasoning among Linux kernel developers, the people who actually do this stuff day in, day out.
As to SBCs and Linux running on SoCs, I still haven't seen a clean, well-ordered toolchain/kernel SDK from a commercial vendor. They look more like Linux newbies' development trees, not something put together with a sensible design. Routers are a perfect example: just compare a plain OpenWRT installation to, say, Asus router images or the Realtek SDK to see for yourself. The latter are laughably messy, like something put together by a high-schooler.

Which is also why I wish more people, especially those who might become integrators building such images at some point, would learn Linux From Scratch, git, and historical material like the Linux Standard Base and Filesystem Hierarchy Standard, the Unix philosophy and software minimalism, and the technical reasons why so many core developers dislike systemd. It is not that hard to do Linux integration properly; it is just that newbies (with a proprietary software background) make the same mistakes again and again.
Kernel bugs do occur, of course, but that is because most developers today, Linux kernel developers included, are more interested in adding new functionality and features than in fixing bugs and making things robust. Userspace-breaking changes get reverted (the language Linus used to berate the authors of such breaking changes is exactly what so many people complained about; pity, I liked the sharpness). So it turns out that for the least maintenance work, you want to be able to upgrade to newer vanilla kernels, but recommend LTS kernels that you test and help maintain yourself. That is minimal work compared to testing and maintaining your own kernel fork.
It might be that NVidia is feeling pressure on this front from the HPC field. It is well known nowadays that if you do e.g. CUDA stuff, bug-wise you are on your own; nobody outside NVidia can actually help you. (There is a related tale about the "tainted" flag the Linux kernel sets when a binary-only driver has been loaded. Some users, who apparently think Linux kernel developers should be telepathic and clairvoyant, able to fix bugs even when kernel data structures have been accessed by unknown code, believed that the flag should not apply to NVidia drivers and is "just offensive"... See the first paragraph describing this in the Linux kernel documentation, and consider the precise and gentle language used. Heh.)
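For the curious: the taint state is exposed to userspace as a bitmask in /proc/sys/kernel/tainted. Below is a minimal sketch that decodes a taint value; the bits shown are a well-known subset documented in the kernel's tainted-kernels admin guide (bit 0 is the "P" flag set by proprietary modules), but the full, version-specific table lives in your kernel's own documentation, so treat this list as illustrative rather than exhaustive.

```python
# Decode a subset of the Linux kernel taint bitmask, i.e. the integer
# readable from /proc/sys/kernel/tainted. Bit assignments here follow
# Documentation/admin-guide/tainted-kernels.rst; only a few common
# bits are included in this sketch.

TAINT_BITS = {
    0:  ("P", "proprietary (non-GPL) module was loaded"),
    1:  ("F", "a module was force-loaded"),
    7:  ("D", "kernel died recently (OOPS or BUG)"),
    9:  ("W", "kernel issued a warning (WARN_ON)"),
    12: ("O", "out-of-tree module was loaded"),
    13: ("E", "unsigned module was loaded"),
}

def decode_taint(mask: int) -> list[str]:
    """Return the flag letters for the bits set in a taint bitmask."""
    return [flag for bit, (flag, _desc) in sorted(TAINT_BITS.items())
            if mask & (1 << bit)]

if __name__ == "__main__":
    # On a real system you would read the live value like this:
    #   mask = int(open("/proc/sys/kernel/tainted").read())
    # Here we decode an example: proprietary + out-of-tree module.
    print(decode_taint((1 << 0) | (1 << 12)))  # ['P', 'O']
```

A non-zero value is exactly why out-of-tree binary drivers make bug reports so hard to act on: once unknown code has touched kernel data structures, no upstream developer can reason about the resulting state.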