I've been thinking about this some more, regarding the context-dependent 0xFF non-discarding on binary packets.
It is easy enough to implement: detect the packet header, then pass FFs through untouched until the declared packet length has been received.
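Something along these lines, say; a minimal sketch assuming byte-at-a-time reads, with ubx_rx_byte() and the 512-byte buffer being my own invention (the 0xB5 0x62 sync bytes and the little-endian length field at offset 4 are per the u-blox interface description):

#include <stdint.h>
#include <stddef.h>

enum ubx_state { UBX_IDLE, UBX_SYNC2, UBX_HEADER, UBX_BODY };

static enum ubx_state state = UBX_IDLE;
static uint8_t  pkt[512];          /* arbitrary size for the sketch */
static size_t   pos;
static uint32_t body_len;          /* payload length + 2 checksum bytes */

/* Feed every byte clocked out of the SPI into this; returns 1 when a
   complete UBX packet has been assembled in pkt[]. */
int ubx_rx_byte(uint8_t b)
{
    switch (state) {
    case UBX_IDLE:                 /* only here is 0xFF idle fill discarded */
        if (b == 0xB5) { pos = 0; pkt[pos++] = b; state = UBX_SYNC2; }
        break;
    case UBX_SYNC2:
        if (b == 0x62) { pkt[pos++] = b; state = UBX_HEADER; }
        else if (b != 0xB5) state = UBX_IDLE;  /* 0xB5 0xB5 0x62 still syncs */
        break;
    case UBX_HEADER:               /* class, id, len_lo, len_hi */
        pkt[pos++] = b;
        if (pos == 6) {
            body_len = (uint32_t)(pkt[4] | (pkt[5] << 8)) + 2;
            if (6u + body_len > sizeof pkt) state = UBX_IDLE;  /* too big */
            else state = UBX_BODY;
        }
        break;
    case UBX_BODY:                 /* 0xFF is legitimate data in here */
        pkt[pos++] = b;
        if (pos == 6u + body_len) { state = UBX_IDLE; return 1; }
        break;
    }
    return 0;
}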
The problem is that if you did get an underrun you will never know, other than the checksum coming up duff (and since the checksum is two bytes, roughly 1 in every 65536 duff packets will still pass as valid).
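For reference, the UBX checksum is the two-byte 8-bit Fletcher algorithm, run over everything between the sync bytes and the checksum itself:

#include <stdint.h>
#include <stddef.h>

/* UBX checksum: 8-bit Fletcher over class, ID, length and payload. */
void ubx_checksum(const uint8_t *buf, size_t len, uint8_t *ck_a, uint8_t *ck_b)
{
    uint8_t a = 0, b = 0;
    for (size_t i = 0; i < len; i++) {
        a = (uint8_t)(a + buf[i]);
        b = (uint8_t)(b + a);
    }
    *ck_a = a;
    *ck_b = b;
}

So a run of FFs injected mid-packet by an underrun only ever gets caught statistically, never deterministically.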
If the data sheet guaranteed a minimum internal buffer fill speed, then you could time the SPI reads precisely (with a timer, say) so that they never run faster than the buffer fills, and also never run so slowly that the internal buffer overflows.
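If those two numbers were published, the pacing itself would be simple enough; a sketch, where FILL_BYTES_PER_SEC, BUF_BYTES, spi_read() and delay_us() are all hypothetical stand-ins:

#include <stdint.h>
#include <stddef.h>

#define FILL_BYTES_PER_SEC  10000u  /* assumed guaranteed minimum fill rate */
#define BUF_BYTES           1000u   /* assumed internal buffer size */

extern void spi_read(uint8_t *dst, size_t n);   /* hypothetical HAL call */
extern void delay_us(uint32_t us);              /* hypothetical HAL call */

/* Drain in chunks no larger than the buffer, then wait long enough for
   the module to refill it, so we neither underrun nor overflow. */
void paced_read(uint8_t *dst, size_t total)
{
    while (total) {
        size_t chunk = total < BUF_BYTES ? total : BUF_BYTES;
        spi_read(dst, chunk);
        delay_us((uint32_t)chunk * 1000000u / FILL_BYTES_PER_SEC);
        dst   += chunk;
        total -= chunk;
    }
}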
The problem is that U-BLOX do not specify the internal buffer fill speed, and do not specify the buffer size.
One could assume that by the time the header is readable, the whole packet is already sitting in the internal buffer, but that is really unlikely.
So basically you cannot develop a product which reads non-ASCII packets using SPI and be sure there is any kind of margin.
One day I will be looking at their 4G modules, and if those behave the same way, that won't work either, because UDP payloads can certainly contain FFs.
The other thing I have just found is that the initialisation can also contain an FF (see 1st line):
uint8_t neo_m9n_init_data[] =
{
0xB5,0x62,0x06,0x01,0x08,0x00,0xF0,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0x23, // disable GGA
0xB5,0x62,0x06,0x01,0x08,0x00,0xF0,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x2A, // disable GLL
0xB5,0x62,0x06,0x01,0x08,0x00,0xF0,0x02,0x00,0x00,0x00,0x00,0x00,0x00,0x01,0x31, // disable GSA
0xB5,0x62,0x06,0x01,0x08,0x00,0xF0,0x03,0x00,0x00,0x00,0x00,0x00,0x00,0x02,0x38, // disable GSV
0xB5,0x62,0x06,0x01,0x08,0x00,0xF0,0x05,0x00,0x00,0x00,0x00,0x00,0x00,0x04,0x46, // disable VTG
0xB5,0x62,0x06,0x01,0x08,0x00,0xF0,0x08,0x00,0x00,0x00,0x00,0x00,0x00,0x07,0x5B, // disable ZDA
};
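(That FF on the 1st line is just the CK_A checksum byte happening to come out as 0xFF.) Incidentally, since each UBX packet carries its own length field, a table like that can be walked packet-by-packet without hardcoding the sizes; a sketch, with spi_write() as a hypothetical stand-in for whatever the HAL provides:

#include <stdint.h>
#include <stddef.h>

extern void spi_write(const uint8_t *buf, size_t n);   /* hypothetical HAL call */

/* Send a table of back-to-back UBX packets, using each packet's own
   little-endian length field (at offset 4) to find the next one. */
void send_ubx_table(const uint8_t *tbl, size_t total)
{
    size_t i = 0;
    while (i + 6 <= total) {
        size_t len = 6 + (size_t)(tbl[i + 4] | (tbl[i + 5] << 8)) + 2;
        spi_write(&tbl[i], len);
        i += len;
    }
}

/* e.g. send_ubx_table(neo_m9n_init_data, sizeof neo_m9n_init_data); */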
How the hell does the module deal with that, when all FFs sent to it are discarded?