It seems to me that the issue of synchronization is not developed enough. I am considering a situation where the listener can synchronize very quickly (as in CW). Then a synchronization sequence would need to be inserted after each block, but it is long, and that is inefficient. It would be nice to add a few bits to each block to make synchronization easier. In that case, a long synchronization sequence can occur at the beginning of the transmission; it can also be inserted into the stream periodically, but not often. Thus, if there are many errors in the channel, the listener will wait for the long sequence to synchronize. And if there are few or no errors, it will be able to synchronize quickly using the additional information in each block.
Thanks for taking the time to look over this.
So let me explain my thinking behind this. I agree that there may be something lacking in the synchronization procedure. CW inserts a longer pause than the pause between a dit and a dah to indicate the change to a new letter, and of course it has one of the better signal processing devices available to decode it.
I was thinking that, like other modes such as RTTY45 or PSK31, the user would have to specify the base baud rate, so the receiver will already know the nominal period between possible bit transitions. For example, if the base baud rate is 100 bits/s, the receiver will nominally expect a transition around 10 ms after the last detected transition, or at multiples of 10 ms. Since only five zeros or ones can occur in a row, the maximum time between transitions is 5 bit intervals, or 50 ms, unless a resynchronization cycle is occurring. The receiver can have a software PLL that runs at a nominal 100 Hz and has its phase updated (advanced or retarded) when each new transition is detected, so that it predicts when the next transition should occur. The phase would only be allowed to be updated a limited amount per bit, for example 0.1 ms, to account for drift between the clocks of the transmitter and receiver. The fractional stability required would then be at most 0.1 ms / 10 ms = 1% per bit, or 0.1 ms / 50 ms = 0.2% (2000 ppm) over 5 bits.
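Here is a rough sketch of that bit-clock tracking in Python. To be clear, the function name, the rounding of the edge gap to a whole number of bit intervals, and scaling the 0.1 ms clamp by the number of elapsed bits are my own illustrative choices, not a spec:

```python
# Minimal sketch of the software PLL described above: the receiver keeps a
# predicted time for the next bit edge and nudges it toward each measured
# edge, clamping the correction per bit so noise can only pull it slowly.

BIT_PERIOD_MS = 10.0     # 100 baud -> 10 ms nominal bit interval
MAX_CORRECTION_MS = 0.1  # maximum phase adjustment allowed per bit

def track_edges(measured_edges_ms, start_ms=0.0):
    """Return the PLL's corrected edge-time estimate for each measured edge."""
    predicted = start_ms
    out = []
    for edge in measured_edges_ms:
        # Edges can be 1..5 bit intervals apart (runs of up to five
        # identical bits), so find how many whole intervals elapsed.
        n = max(1, round((edge - predicted) / BIT_PERIOD_MS))
        predicted += n * BIT_PERIOD_MS
        # Phase error between prediction and measurement, clamped to the
        # per-bit budget times the number of bits since the last edge.
        err = edge - predicted
        corr = max(-MAX_CORRECTION_MS * n, min(MAX_CORRECTION_MS * n, err))
        predicted += corr
        out.append(predicted)
    return out
```

With clean edges the estimate follows the transmitter exactly, while a single noisy edge can only drag the clock by the clamped amount, which is the property the scheme relies on.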
Here I am relying on the fact that a microprocessor, especially if it uses a quartz crystal time standard, can generally achieve 100 ppm time accuracy or better. For example, some (but not all) Arduinos use a 16 MHz crystal, while others use a ceramic resonator. I am not sure if that is too much to require, since some microprocessors use internal RC standards or ceramic resonators. But let's assume a crystal time standard of 100 ppm, which should be readily achievable; many crystals are significantly better than this. Assuming the correct bit transition time can be established at the beginning of a message, two 100 ppm time standards can drift a maximum of 200 ppm relative to each other. This corresponds to 0.002 ms in a 10 ms interval, and 0.01 ms between transitions 50 ms apart. Therefore not much correction per bit should be required. Multipath, QRM/QRN, and fading will produce false transitions, so I am relying on the fact that the transmitter and receiver, if they have reasonably accurate time sources, should be able to stay tracked within a fraction of a bit interval over thousands of bits once synchronized.
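The drift arithmetic above can be checked in a couple of lines; the half-bit-slip figure at the end is my own illustrative extension of the same numbers, quantifying the "thousands of bits" claim:

```python
# Worked check of the clock-drift arithmetic: two 100 ppm clocks can drift
# at most 200 ppm relative to each other.
REL_DRIFT = 200e-6  # worst-case relative drift, 200 ppm

drift_per_bit_ms = REL_DRIFT * 10.0  # over one 10 ms bit interval
drift_per_run_ms = REL_DRIFT * 50.0  # over a 5-bit, 50 ms run

# How many bits until accumulated drift reaches half a bit interval (5 ms),
# i.e. the point where an uncorrected clock would start slipping bits.
bits_until_half_bit_slip = (0.5 * 10.0) / drift_per_bit_ms
```

At 0.002 ms of drift per 10 ms bit, an uncorrected pair of worst-case clocks would take 2500 bits to slip half a bit interval, so the 0.1 ms-per-bit correction budget leaves plenty of margin.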
Of course, initial synchronization is then very important. What would be required to synchronize accurately at the start is a bit pattern whose autocorrelation has a sharp peak, so that the peak can be located accurately to establish the time standard. For example, a Barker code (https://en.wikipedia.org/wiki/Barker_code) could be used. However, this autocorrelation may be a computationally expensive operation. A periodic bit pattern such as 111000111000... has a sharp autocorrelation peak, but also sidelobes corresponding to mismatches of the pattern by one or more periods. So as long as the receiver doesn't lose track of how many periods have elapsed in the synchronization signal, it should be OK. It may be that three periods are not enough and more are required during synchronization to ensure a lock onto the transition edge.
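For reference, here is what the Barker-code option buys, sketched with the length-13 code. The brute-force correlation shown is the straightforward O(n²) form; in a receiver this correlation would run continuously against the incoming stream, which is where the computational cost comes from:

```python
# The length-13 Barker code has an aperiodic autocorrelation peak of 13
# with every sidelobe of magnitude <= 1, which is what makes the sync
# position easy to locate unambiguously (unlike a periodic 111000 pattern).

BARKER13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]

def autocorr(seq):
    """Aperiodic autocorrelation of a +/-1 sequence at each non-negative lag."""
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag))
            for lag in range(n)]

ac = autocorr(BARKER13)
# ac[0] is the main peak (13); all other lags have |value| <= 1.
```

By contrast, correlating the periodic 111000 pattern against itself gives secondary peaks at every multiple of the period, which is exactly the sidelobe ambiguity described above.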
I am open to adding bits between the 30-bit blocks, but they would somehow need to be identifiable as separate from the blocks themselves. I am not sure how to do this, but I will consider ways it could be done.