Author Topic: How does an interface stay synchronized without a clock line?  (Read 1736 times)


Offline BootalitoTopic starter

  • Supporter
  • ****
  • Posts: 116
  • Country: us
    • EasyEda.com/TerryJMyers
How does an interface stay synchronized without a clock line?
« on: November 29, 2017, 12:59:10 pm »
So I understand how to ships can communicate when one of the chips is wiggling a clock line, but what about RS-232, RS-485, DH+, etc., when both chips must know what the baud rate is beforehand?  I know I just answered the question: both chips must be pre-configured/programmed with an already agreed-upon baud rate. But what I'm really asking is a bit more subtle.
Even though the baud rate is known beforehand, there is still no known starting point to start the clock. Or is there?
As an example/metaphor, if you and I wanted to talk to each other verbally and we agreed to talk at one word per second, how do we know when second zero is? It seems that without that additional piece of information I could be listening to you between seconds 0.5 and 1.5, or 2.5 and 3.5, etc., and I'd only catch the end of one word and the start of the next.
Is there an initial handshake during serial communication so that second zero is agreed upon? Or does this method inherently have a natural speed limitation? For example, in my talking metaphor above, it would be no problem if the agreed-upon time frame was 2 seconds or longer but we still said the word within one second, Nyquist theorem and all. But even then it's non-deterministic, so it would seem that no matter how slow you went you would still screw up one out of every thousand or ten thousand words or so. Which of course is fine for humans, as we can get things from contacts, but not for computers.
 

Offline Nusa

  • Super Contributor
  • ***
  • Posts: 2416
  • Country: us
Re: How does an interface stay synchronized without a clock line?
« Reply #1 on: November 29, 2017, 01:37:36 pm »
Are you dictating or letting auto-correct run wild? You've got a few whoppers that feel computer-related, like "to ships" instead of "two chips" and "contacts" instead of "context".

In general what you're talking about is asynchronous communication protocols where data is sent out in packets.

https://en.wikipedia.org/wiki/Asynchronous_serial_communication
 

Z80

  • Guest
Re: How does an interface stay synchronized without a clock line?
« Reply #2 on: November 29, 2017, 02:05:10 pm »
RS232 and asynchronous serial comms are as old as the hills, and there are lots of resources available to read, like the link posted above.
Simplistically:
  • Both ends need to use the same clock rate.
  • Both ends need to use the same frame format.
  • A start bit is used to 'synchronise' both ends and start the transfer.
  • A stop bit is used to end the transfer (and to return the line to idle, so the next start bit produces a clean edge).
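
To make the framing concrete, here's a minimal bit-banged 8N1 transmitter sketch. set_tx_pin() and delay_one_bit() are made-up platform hooks; here they just print the line level so you can see the frame:

Code:
#include <stdio.h>
#include <stdint.h>

/* Hypothetical platform hooks -- on real hardware these would drive a GPIO
 * pin and busy-wait one bit period (e.g. about 104 us at 9600 baud).
 * Here they just print the line level so the frame is visible. */
static void set_tx_pin(int level) { printf("%d", level); }
static void delay_one_bit(void)   { /* wait 1/baud seconds on real hardware */ }

/* Send one byte as an 8N1 frame: start bit, 8 data bits LSB first, stop bit.
 * The line idles high; the start bit's falling edge is what the receiver
 * synchronises on. */
static void uart_send_byte(uint8_t b)
{
    set_tx_pin(0);                   /* start bit */
    delay_one_bit();
    for (int i = 0; i < 8; i++) {    /* data bits, least significant first */
        set_tx_pin((b >> i) & 1);
        delay_one_bit();
    }
    set_tx_pin(1);                   /* stop bit: guarantees the line is high,
                                        so the next start bit makes an edge */
    delay_one_bit();
}

int main(void)
{
    uart_send_byte(0x41);            /* 'A' -> prints 0100000101 */
    printf("\n");
    return 0;
}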
 

Offline NivagSwerdna

  • Super Contributor
  • ***
  • Posts: 2495
  • Country: gb
Re: How does an interface stay synchronized without a clock line?
« Reply #3 on: November 29, 2017, 02:09:17 pm »
Firstly, the baud rate need not necessarily be agreed beforehand, since the receiver can use the received bit rate to derive it...

Secondly, the start bit of the frame allows the reception to be synchronized.

Finally, although both ends will have clocks which are used to send and receive, these clocks will both have errors relative to each other.... As long as these errors are reasonably small, the receiver should be able to receive a frame.
E.g. when using ceramic resonators, the clocks can be too inaccurate to facilitate high-speed async communications.
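
To put a rough number on that error budget (assuming the receiver resynchronises on the start-bit edge and samples each bit at its centre): the stop bit of an 8N1 frame is sampled 9.5 bit periods after the edge, and that sample must still land within half a bit of where it belongs, which works out to about 5% total mismatch, or roughly 2.5% per end before any margin for edge distortion:

Code:
#include <stdio.h>

/* Worked error budget for 8N1 async reception, assuming mid-bit sampling
 * resynchronised on the start-bit edge. */
int main(void)
{
    double last_sample = 9.5;  /* stop-bit centre: 0.5 (start) + 8 data + 1 */
    double max_slip    = 0.5;  /* sample must stay inside the right bit cell */

    printf("total budget: %.1f%%\n", 100.0 * max_slip / last_sample);       /* ~5.3% */
    printf("per end:      %.1f%%\n", 100.0 * max_slip / last_sample / 2.0); /* ~2.6% */
    return 0;
}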
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9889
  • Country: us
Re: How does an interface stay synchronized without a clock line?
« Reply #4 on: November 29, 2017, 02:35:09 pm »
An important point to remember:  The interface doesn't 'stay' synchronized.  Synchronization occurs on every  symbol (character, byte, whatever).  The purpose of the Start bit is to kick off timing for a new symbol.

The sample clock usually runs at 16 times the baud rate, and the sample is taken in the middle of the 16-clock interval.  There are implementations where the clock runs at 8 times the baud rate.
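
A toy sketch of that 16x scheme, with the RX line simulated as a sequence of 16x-clock ticks (on real hardware sample_rx_pin() would read a GPIO pin once per tick):

Code:
#include <stdint.h>
#include <stdio.h>

/* Simulated RX line: each call returns the line level for one tick of a
 * clock at 16x the baud rate. It replays 3 idle bits, then an 8N1 frame
 * carrying 0x41 ('A'). On real hardware this would read a GPIO pin. */
static int sample_rx_pin(void)
{
    static const char bits[] = "111" "0" "10000010" "11"; /* idle, start, data LSB-first, stop, idle */
    static int tick;
    int i = tick++ / 16;
    if (i > (int)sizeof bits - 2) i = (int)sizeof bits - 2;
    return bits[i] - '0';
}

/* Advance n ticks and return the line level at the last one. */
static int wait_ticks(int n) { int v = 0; while (n--) v = sample_rx_pin(); return v; }

/* Receive one 8N1 byte by oversampling at 16x the bit rate.
 * Returns the byte, or -1 on noise or a bad stop bit. */
static int uart_receive_byte(void)
{
    while (sample_rx_pin() != 0)      /* idle is high; wait for the start edge */
        ;
    if (wait_ticks(8) != 0)           /* mid start bit: still low, or just a spike? */
        return -1;
    int byte = 0;
    for (int i = 0; i < 8; i++)
        byte |= wait_ticks(16) << i;  /* centre of each data bit, LSB first */
    if (wait_ticks(16) != 1)          /* centre of the stop bit must be high */
        return -1;
    return byte;
}

int main(void)
{
    printf("0x%02X\n", uart_receive_byte());  /* prints 0x41 */
    return 0;
}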
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8637
  • Country: gb
Re: How does an interface stay synchronized without a clock line?
« Reply #5 on: November 29, 2017, 02:50:01 pm »
The vast majority of digital communication requires a clock. If there isn't a separate clock signal, the clock information is embedded in the transmitted signal:
  • There may be only partial clocking information, such as in asynchronous serial communication, where the clock is the transition from the stop bit (or stop bits) to the start bit, for each transmitted character. The data bits between the start bit and the stop bit have to be sampled by a local free-running clock at the receiver. That requires the free-running receiver clock to run at a frequency within maybe 2% of the transmitter clock's, so the receiver can't slip too far out of sync over the duration of a received character. The speed of the transmit and receive clocks is normally pre-selected, although automated detection of one of a few possible bit rates at the receiver is often possible. A nice thing about this approach is that you can start and stop transmitting characters whenever you want. If you run out of things to send, you just wait until there is something more and send it immediately.
  • There may be significantly more clock information, so a local clock at the receiver can be synchronised to the clock at the transmitter, by using the embedded clock information in the signal. It may be possible for the receiver to have only a very rough idea of the transmitter's clock speed in a scheme like this, if there is enough transmitted clocking information for the receiver to lock to. The clock usually needs to be continuous, and there needs to be a way to synchronise to the start of the data. Various protocols exist for this kind of synchronisation. HDLC is an example of a widely used protocol which embeds enough clocking information to allow clock synchronisation at the receiver, and synchronisation to the start of the data bits (see the bit-stuffing sketch after this list).
  • There may be clock information with every bit of the transmitted signal, such as in the FM signals used in early disk drives, where every other pulse is a clock pulse, and the intervening pulses are one or zero to indicate the data values. In these schemes you can decode at the receiver without any real knowledge of the transmitter's clock speed. That allowed the early floppy disk drives, which often had massive mechanical wow and flutter, to offer reliable decoding of the replayed signal, by continuously tracking the ever changing clock rate.
A few types of digital communication don't require any clock, because the signal is just something like a status flag. For example, an alarm signal just needs to be in one state under normal circumstances, and in the alternate state when there is a problem.
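
As a concrete illustration of the HDLC point above, here's a minimal sketch of transmit-side bit stuffing. The flag byte 01111110 marks frame boundaries, so the payload is never allowed six 1s in a row; the emit callback and main() harness are just for the demo:

Code:
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of HDLC transmit-side bit stuffing: after five consecutive
 * 1 bits in the payload, a 0 is inserted, so the flag pattern 01111110
 * (six 1s framed by 0s) can only ever appear as a real frame delimiter. */
static void hdlc_stuff_bits(const uint8_t *in, int nbits, void (*emit)(int bit))
{
    int run = 0;                       /* consecutive 1s seen so far */
    for (int i = 0; i < nbits; i++) {
        int bit = (in[i / 8] >> (i % 8)) & 1;
        emit(bit);
        if (bit) {
            if (++run == 5) {          /* five 1s in a row: */
                emit(0);               /* insert a stuffed 0 */
                run = 0;
            }
        } else {
            run = 0;
        }
    }
}

static void print_bit(int b) { putchar('0' + b); }

int main(void)
{
    uint8_t data[] = { 0xFF };         /* eight 1s: forces one stuffed 0 */
    hdlc_stuff_bits(data, 8, print_bit);
    putchar('\n');                     /* prints 111110111 */
    return 0;
}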
 

Offline Nitrousoxide

  • Regular Contributor
  • *
  • Posts: 156
  • Country: au
Re: How does an interface stay synchronized without a clock line?
« Reply #6 on: November 29, 2017, 03:25:28 pm »
It is possible to recover a clock from a data signal; it all depends on the line coding used to transmit it. Typical digital logic uses unipolar NRZ (non-return-to-zero). However, a clock cannot reliably be recovered from such a signal, so a different line code must be used; this is where isochronous self-clocking signals come in. A common implementation is Manchester code. The clock can be recovered from the received signal with the aid of a PLL.
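
A minimal Manchester encoder sketch, using the IEEE 802.3 convention (a 0 sent as high-then-low, a 1 as low-then-high); the emit_half_bit callback is made up for the demo:

Code:
#include <stdint.h>
#include <stdio.h>

/* Manchester-encode one byte, MSB first, using the IEEE 802.3 convention:
 * a 0 is sent as high-then-low and a 1 as low-then-high, so every bit cell
 * contains a mid-cell transition for the receiver's PLL to lock onto. */
static void manchester_encode(uint8_t byte, void (*emit_half_bit)(int level))
{
    for (int i = 7; i >= 0; i--) {
        int bit = (byte >> i) & 1;
        emit_half_bit(!bit);  /* first half of the bit cell */
        emit_half_bit(bit);   /* second half: always an edge in between */
    }
}

static void show(int level) { putchar(level ? '-' : '_'); }

int main(void)
{
    manchester_encode(0xA5, show);  /* 16 half-bits; an edge mid-way through every cell */
    putchar('\n');
    return 0;
}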

As for per-symbol indication, start and stop bits are used, as mentioned before.

https://en.wikipedia.org/wiki/Line_code
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: How does an interface stay synchronized without a clock line?
« Reply #7 on: November 29, 2017, 04:26:42 pm »
UART (commonly used on an RS232 electrical layer) is asynchronous in that it relies upon a start bit and then the clocks being close enough to allow recovery of the next 8 bits or so. But a lot of the other protocols are synchronous with an embedded clock (AES/SPDIF/SDI/SATA/PCI-e, the list goes on); these find a way to embed the clock into the data stream, at least enough that if you know what you are expecting you can recover it.

Manchester is common, but you also see cleverer things like 8b/10b and related schemes. Basically the key is that you make sure the transmitter generates edges in the transmitted data at some maximum interval; you can then use these to synchronise.
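
As a toy illustration of that guarantee: the longest run of identical bits in the encoded stream is the longest the receiver ever goes without a resynchronising edge. This isn't real 8b/10b (which also balances DC), just a checker for the property:

Code:
#include <stdio.h>

/* Toy illustration: the longest run of identical symbols in an encoded
 * stream is the longest the receiver must go without a resynchronising
 * edge. Real codes like 8b/10b bound this run length by construction. */
static int max_run_length(const char *bits)
{
    int best = 0, run = 0;
    char prev = '\0';
    for (const char *p = bits; *p; p++) {
        run = (*p == prev) ? run + 1 : 1;
        prev = *p;
        if (run > best) best = run;
    }
    return best;
}

int main(void)
{
    /* 8b/10b guarantees at most five identical bits in a row. */
    printf("%d\n", max_run_length("1100000101"));  /* prints 5 */
    return 0;
}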

Some modern things are getting so fast that you really, really want the clock to be on the same pair as the data, because doing anything else turns into a timing nightmare. Consider that a 6 GHz PCI-e lane has a new data bit about every 4 cm along its length.... And there are things much faster than PCI-e out there (broadcast video sometimes runs 12 Gb/s, and there are 25 Gb/s transceivers out there).

Regards, Dan.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16607
  • Country: us
  • DavidH
Re: How does an interface stay synchronized without a clock line?
« Reply #8 on: November 30, 2017, 07:27:11 pm »
For asynchronous serial with a start bit, the receiver's clock is a multiple of the baud rate, commonly 8 or 16 times, so synchronization happens to within a fraction of the width of one bit.  Often the receiver samples the data on 6 of 8 or 14 of 16 of the multiplied clock edges and then takes a majority vote to reject noise.
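
A sketch of that voting scheme; the simulated bit cell below is logically a 1 with one noise spike, which the 14-of-16 vote simply outvotes:

Code:
#include <stdio.h>

/* Simulated 16x-oversampled bit cell: logically a 1, but with one noise
 * spike. On real hardware this would be the RX pin read once per tick. */
static int sample_rx_pin(void)
{
    static const int ticks[16] = {1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1};
    static int t;
    return ticks[t++ % 16];
}

/* Read one bit by voting over the middle 14 of its 16 oversample ticks,
 * so an isolated noise spike is simply outvoted. */
static int read_bit_majority(void)
{
    int ones = 0;
    for (int t = 0; t < 16; t++) {
        int level = sample_rx_pin();
        if (t >= 1 && t <= 14)      /* skip the edge ticks 0 and 15 */
            ones += level;
    }
    return ones >= 8;               /* majority of the 14 counted samples */
}

int main(void)
{
    printf("%d\n", read_bit_majority());  /* prints 1 despite the spike */
    return 0;
}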

Synchronous serial with an embedded clock phase-locks the receiver clock to the transmitted data.  Special precautions like Manchester encoding or 8b/10b are taken with the transmitted data to make sure that enough clock edges are present to allow this despite any possible data pattern.
 

