UART has a "packet" size of at most 8-10 bits (usually): one data word, delimited by the start and stop bits that the peripheral decodes for you.
SPI has arbitrary packet size; many kilobytes are possible. The packet is delimited by the hardware nSS signal, hopefully decoded by the peripheral. But beware: some SPI peripheral implementations on some MCUs suck.
"Packet" refers here to the hardware-provided delimitation. This is highly important; if you don't have it on hardware, you need to write your own. Conversely, if you have it provided for you, you don't need to reinvent the wheel, and your code base will be simpler and more robust.
So it's always simpler if your higher-level communication can use the hardware-provided low-level delimitation directly. With SPI, this is possible. With UART, you need to write your own delimitation protocol in most cases (only rarely is one byte per communication enough); a minimum workable design consists of a start byte and a packet length field. With SPI, this isn't needed. So SPI may be simpler than UART! It depends.
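To make the "start byte plus length" idea concrete, here is a minimal sketch of a UART receive-side state machine. The start byte value, max payload size, and all names are my own assumptions, not any particular protocol:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical framing: 0xA5 start byte, then a length byte,
 * then that many payload bytes. */
#define FRAME_START       0xA5
#define FRAME_MAX_PAYLOAD 32

typedef struct {
    enum { WAIT_START, WAIT_LEN, WAIT_PAYLOAD } state;
    uint8_t len;                     /* expected payload length          */
    uint8_t pos;                     /* payload bytes received so far    */
    uint8_t buf[FRAME_MAX_PAYLOAD];  /* payload accumulates here         */
} frame_rx_t;

/* Feed one byte as it arrives from the UART; returns the payload
 * length when a full frame has been received (payload in rx->buf),
 * or -1 otherwise. Garbage simply keeps us hunting for 0xA5. */
int frame_rx_feed(frame_rx_t *rx, uint8_t byte)
{
    switch (rx->state) {
    case WAIT_START:
        if (byte == FRAME_START)
            rx->state = WAIT_LEN;
        break;
    case WAIT_LEN:
        if (byte == 0 || byte > FRAME_MAX_PAYLOAD) {
            rx->state = WAIT_START;   /* bogus length: resync */
        } else {
            rx->len = byte;
            rx->pos = 0;
            rx->state = WAIT_PAYLOAD;
        }
        break;
    case WAIT_PAYLOAD:
        rx->buf[rx->pos++] = byte;
        if (rx->pos == rx->len) {
            rx->state = WAIT_START;
            return rx->len;           /* complete frame */
        }
        break;
    }
    return -1;
}
```

This is exactly the work SPI's nSS line does for you in hardware. Note a weakness of this minimal scheme: if 0xA5 can appear in the payload, a dropped byte can make the parser lock onto a false start, so real designs often add a CRC or an escape/COBS layer on top.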
TCP is like UART: it's a stream of bytes (octets), so the delivered "packet" is a fixed 8 bits. The underlying hardware packets are abstracted away. As with UART, 8-bit commands/data are rarely usable, so you need to build higher-level message delimitation on top, using something premade or writing your own. This is extra work, but with TCP you can get away with very simple messaging schemes, because TCP is guaranteed to never lose, duplicate or corrupt a byte. But getting a TCP stack running on an MCU just to share a bit of data between two controllers sounds like killing a fly with a cannon.
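Because TCP never drops or corrupts a byte, the framing can be even simpler than over raw UART: a bare length prefix with no start byte and no resync logic is enough. A hypothetical sketch (names and the 1-byte length field are my assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#define MSG_MAX 255

typedef struct {
    int     have_len;      /* have we read the length prefix yet? */
    uint8_t len;           /* expected message length             */
    uint8_t pos;           /* bytes received so far               */
    uint8_t buf[MSG_MAX];  /* message body accumulates here       */
} msg_rx_t;

/* Feed bytes as they come off the stream (TCP may deliver them in
 * arbitrary chunks); returns the message length when a complete
 * message is in m->buf, or -1 otherwise. No resync needed: the
 * transport guarantees we never lose our place in the stream. */
int msg_rx_feed(msg_rx_t *m, uint8_t byte)
{
    if (!m->have_len) {
        m->len = byte;
        m->pos = 0;
        m->have_len = 1;
        if (m->len == 0) {     /* empty message completes at once */
            m->have_len = 0;
            return 0;
        }
        return -1;
    }
    m->buf[m->pos++] = byte;
    if (m->pos == m->len) {
        m->have_len = 0;
        return m->len;
    }
    return -1;
}
```

Compare this with a UART parser: no start byte, no hunting for resynchronization after corruption, because the reliable transport makes those failure modes impossible.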
SPI isn't any more susceptible to interference than any other CMOS push-pull communication link. If SPI fails, external RAM or flash fails as well. Yet nctnico is fine using external RAM or flash, but not fine using SPI. This is highly illogical (and discussed to death already). Real computer systems do, for example, run their program out of external RAM, where a single induced glitch on the clock kills the whole system. The usual way is to look at the root cause of the signal integrity issues and correct the SI.
Special hardened systems combine good signal integrity with extra measures for error detection and recovery, but it seems nctnico is substituting signal integrity with error detection and recovery, and only doing that with SPI, leaving all the other CMOS links susceptible to bad SI. Go figure.
UART does have some extra robustness built in, because the typical MCU UART peripheral samples the signal 8 or 16 times per bit. SPI and other clocked CMOS buses just latch the data once on the clock edge, so signal integrity is indeed important. This isn't an excuse for accepting poor signal integrity, though, and it doesn't take much more in the way of SI problems to kill UART as well.
Note that communication schemes that just "spam" the data are the easiest to deal with. For example, automotive systems (which utilize CAN as the communication layer, incidentally designed for exactly such a scheme) work by repeating the measurements, settings etc. on the bus frequently enough; the intended receivers implement timeouts in case they don't get the important value they need. The same idea can be built on SPI. Even if you have a rare incident of severe noise corrupting a transfer (for example, coming from a severely non-compliant device) causing a missing or an extra clock cycle, the flow control doesn't go haywire, because each packet is delimited by the hardware nSS; at most, that particular packet fails. A proper SPI slave resets its rx state when nSS goes inactive, so it starts receiving the next packet correctly. Adding a CRC is not a bad idea if you are working with a critical system in iffy conditions. Obviously, do that with UART or any other physical layer as well; CAN offers this at the protocol/peripheral level for you.
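The "proper SPI slave resets on nSS" behavior above can be sketched as a small receive model. The fixed 4-byte packet size and all names are assumptions; on a real MCU, the three functions would be called from the nSS edge interrupt and the SPI receive interrupt respectively:

```c
#include <stdint.h>

#define SPI_PKT_LEN 4   /* hypothetical fixed packet size */

typedef struct {
    uint8_t pos;                  /* bytes clocked in so far          */
    uint8_t buf[SPI_PKT_LEN];     /* packet accumulates here          */
    int     complete;             /* set when a full packet arrived   */
} spi_slave_rx_t;

/* nSS falling edge: a new transfer starts, discard any stale state. */
void spi_nss_assert(spi_slave_rx_t *s)
{
    s->pos = 0;
    s->complete = 0;
}

/* nSS rising edge: transfer over. A short transfer (missing clocks,
 * noise, whatever) is simply dropped -- only this packet is lost,
 * and the next one starts from a clean state. */
void spi_nss_deassert(spi_slave_rx_t *s)
{
    s->complete = (s->pos == SPI_PKT_LEN);
    s->pos = 0;
}

/* Called per received byte (e.g. from the SPI rx interrupt). */
void spi_rx_byte(spi_slave_rx_t *s, uint8_t byte)
{
    if (s->pos < SPI_PKT_LEN)
        s->buf[s->pos++] = byte;
}
```

The point is that corruption is contained to one hardware-delimited packet: in a "spam" scheme, the receiver just waits for the next repetition instead of trying to resynchronize a stream.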