It's actually beneficial! You can send a byte every 3 months if you wish; there is no timeout unless one is imposed by the software protocol.
At the hardware level, the only constraint is that once a transfer has started, it must keep going until the entire byte is sent.
Usually the baud rate has some error, especially at 115200: it's an awkward frequency, so you rarely get it exact from an 8-16 MHz reference clock.
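To make that concrete, here's a quick calculation sketch assuming the common 16x-oversampling scheme where the bit clock is f_ref / (16 * divisor) with an integer divisor (your UART's formula may differ):

```c
#include <stdio.h>

/* Rough sketch, not tied to any particular MCU: with an integer divisor the
 * achievable baud rate is quantized, so 115200 rarely comes out exact. */
int main(void)
{
    const double f_ref   = 16000000.0;  /* 16 MHz reference clock (assumption) */
    const double desired = 115200.0;

    double ideal_div = f_ref / (16.0 * desired);   /* 8.68... not an integer */
    long   div       = (long)(ideal_div + 0.5);    /* nearest integer divisor */
    double actual    = f_ref / (16.0 * div);
    double error_pct = 100.0 * (actual - desired) / desired;

    printf("ideal divisor %.2f -> using %ld\n", ideal_div, div);
    printf("actual baud %.0f, error %+.2f%%\n", actual, error_pct);
    return 0;
}
```

With 16 MHz the nearest divisor gives about 111111 baud, roughly -3.5% off; many parts add fractional dividers or double-speed modes precisely to shrink this error.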
For small transfers an error of up to about 3% is no problem, since the receiver has time to resynchronize between transfers; the same goes if the sender is slightly slower than the receiver.
But if you pack the transfers with no gap between them and the sender is a bit faster, say 115800 instead of 115200, each start bit arrives a little sooner than the receiver expects, until the timing eventually drifts too far and causes a framing error.
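Rough numbers for that drift, assuming 10-bit frames sent back to back and a receiver that can only resynchronize in the idle time between frames (the scenario described above):

```c
#include <stdio.h>

/* Back-of-the-envelope model: sender at 115800, receiver at 115200,
 * 10-bit frames (start + 8 data + stop) with no gap between them. */
int main(void)
{
    const double f_tx = 115800.0;          /* sender baud (example from the text) */
    const double f_rx = 115200.0;          /* receiver baud */
    const double bits_per_frame = 10.0;

    double drift_bits = 0.0;               /* offset in receiver bit times */
    for (int frame = 1; frame <= 40; frame++) {
        /* each frame, the sender's next start edge lands this much earlier */
        drift_bits += bits_per_frame * (1.0 - f_rx / f_tx);
        printf("frame %2d: start edge %.3f bit times early\n", frame, drift_bits);
        if (drift_bits > 0.5) {            /* roughly where sampling goes wrong */
            printf("-> past half a bit; expect framing errors around here\n");
            break;
        }
    }
    return 0;
}
```

With that ~0.5% mismatch the offset passes half a bit time after roughly 10 frames; a smaller error just pushes the failure point further out, which is why a stream can look fine for a while and then fall apart.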
For example, I recently ran into this while sending non-stop data over DMA with a baud rate error of about 1-2%.
The first 40 characters or so (I can't recall the exact number, just an example) arrived perfectly, then it turned into garbage.
Adding a small software delay of 1-2 bit times after every few transfers, biasing the sender's baud rate error to the slow side, or exactly matching receiver and sender (for example, both at 1 Mbit/s) all worked properly.
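A minimal sketch of the first workaround, with placeholder uart_dma_send()/delay_us() functions standing in for whatever your HAL actually provides (here they just print, so the sketch runs on a PC):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Placeholder stand-ins -- on a real target these would be the DMA transmit
 * call and a microsecond delay from your HAL. */
static void uart_dma_send(const uint8_t *buf, size_t len)
{
    printf("DMA send of %zu bytes, first byte 0x%02X\n", len, buf[0]);
}
static void delay_us(uint32_t us)
{
    printf("idle gap of %lu us\n", (unsigned long)us);
}

/* Transmit in chunks and insert a pause of roughly 2 bit times between them
 * so the receiver can resynchronize. At 115200 baud one bit is ~8.7 us,
 * so 2 bits is about 17-18 us. */
static void send_with_gaps(const uint8_t *data, size_t len)
{
    const size_t   chunk  = 16;   /* resync every 16 bytes (arbitrary choice) */
    const uint32_t gap_us = 18;   /* ~2 bit times at 115200 baud */

    while (len > 0) {
        size_t n = (len < chunk) ? len : chunk;
        uart_dma_send(data, n);
        delay_us(gap_us);          /* small idle gap between chunks */
        data += n;
        len  -= n;
    }
}

int main(void)
{
    uint8_t msg[40];
    for (size_t i = 0; i < sizeof msg; i++) msg[i] = (uint8_t)i;
    send_with_gaps(msg, sizeof msg);
    return 0;
}
```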
The attached picture shows a very exaggerated example, but this can actually happen.
The first two bytes are received correctly, but then havoc starts. So sometimes it's important to allow some time between the stop bit and the next start bit.