you chose a dumb protocol
Pffff, I like the KISS principle: if it works, it works. If it fails, even after a year, I analyze and change. If you want robustness to the max, then create a header with its own checksum, like the IP protocol does, if you are that paranoid, and be done with it. My system is wired RS485 and hasn't had a single failure in 20 years with my dumb protocol. Design your own and be happy; I am happy with my choice.
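For what it's worth, that part is only a few lines. A minimal sketch in C: the frame_header layout here is made up for illustration, but the checksum routine is the standard RFC 1071 ones'-complement sum that the IP header itself uses:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame header, just for illustration: a length field
 * protected by an IP-style ones'-complement checksum. */
struct frame_header {
    uint16_t length;   /* payload length in bytes */
    uint16_t checksum; /* ones' complement of the 16-bit sum over the header */
};

/* RFC 1071 ones'-complement checksum over a buffer. */
uint16_t ip_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];  /* 16-bit big-endian words */
        data += 2;
        len  -= 2;
    }
    if (len)                        /* odd trailing byte, padded with zero */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)               /* fold carries back into the low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}
```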
The simple thing is to use one of the existing protocols that already solve this rather than rolling your own either way. And honestly, I don't believe you've never had a problem if you really did what you describe.
Serial ports can and will generate garbage bytes when the microcontroller resets. The user will restart the host application in the middle of a message. If the serial cable has a connector, someone will disconnect and reconnect it. The problem with using length bytes for framing is that once you lose your frame pointer, you are lost forever until you restart the whole system. So I don't believe that you have used this protocol without any way to recover synchronization. More likely your way is just not as good as a protocol that resynchronizes on every message. There is no reason to use a protocol that doesn't support this: it is trivial to dedicate one character to frame boundaries and escape it when it occurs in the data. If you really care about the worst-case overhead, where every byte in a message has to be escaped, and you can't just increase your line rate, then use a slightly more complicated protocol.
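To make this concrete: this is exactly what SLIP (RFC 1055) does, and the sending side fits in a dozen lines. A minimal sketch in C; put_byte is a placeholder for whatever writes one byte to your UART:

```c
#include <stdint.h>
#include <stddef.h>

/* SLIP-style framing (RFC 1055): 0xC0 marks a frame boundary and is
 * escaped when it occurs in the payload, so a receiver that starts
 * mid-stream just discards bytes until the next 0xC0 and is in sync. */
#define SLIP_END     0xC0
#define SLIP_ESC     0xDB
#define SLIP_ESC_END 0xDC
#define SLIP_ESC_ESC 0xDD

void send_frame(void (*put_byte)(uint8_t),
                const uint8_t *payload, size_t len)
{
    put_byte(SLIP_END);            /* flush any line noise at the receiver */
    for (size_t i = 0; i < len; i++) {
        switch (payload[i]) {
        case SLIP_END:             /* delimiter in data: escape it */
            put_byte(SLIP_ESC);
            put_byte(SLIP_ESC_END);
            break;
        case SLIP_ESC:             /* escape character in data: escape it too */
            put_byte(SLIP_ESC);
            put_byte(SLIP_ESC_ESC);
            break;
        default:
            put_byte(payload[i]);
        }
    }
    put_byte(SLIP_END);            /* frame boundary */
}
```

And if the 2x worst-case expansion really matters, COBS is the "slightly more complicated" option: it bounds the framing overhead to at most one extra byte per 254 bytes of payload.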
All more modern / higher-level communication interfaces have built-in ways to do this. Even GPIB, which is nearly as ancient as RS232, has this with the EOI line. Modern protocols like Ethernet, USB, etc. all have built-in message framing. Hell, even async serial actually has this capability: you can generate a BREAK to resynchronize the sender and receiver, except that handling of the BREAK condition is pretty hit-or-miss between implementations.
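On the host side, at least, generating the BREAK itself is trivial; a sketch, assuming a POSIX system with a termios serial device:

```c
#include <termios.h>

/* Hold the TX line in the zero (break) state; with duration 0, POSIX
 * specifies at least 0.25 s and at most 0.5 s of continuous zero bits. */
int send_break(int serial_fd)
{
    return tcsendbreak(serial_fd, 0);
}
```

Whether the UART on the other end reports the break condition in a usable way is the hit-or-miss part.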