Author Topic: Serial protocol for filling an array from a PC: is there a standard protocol?  (Read 5724 times)


Offline cdwijsTopic starter

  • Regular Contributor
  • *
  • Posts: 57
Hi All,

I'm working on a project where I need a serial protocol. The idea is that the uC has a small FIFO buffer, and the PC has to keep that buffer filled. The uC program takes one value out of the FIFO at a time, about 100 times per second.

I have made a small protocol that works like this:
1) PC sends a message: start
2) uC sends a message: request first datablock
3) PC sends first datablock
4) uC checks if the buffer is empty enough to request the next datablock
5) PC sends the next datablock.
6) PC sends the stop command
7) uC stops.

The uC also sends data back:
a) if there is more than one block of data to send, it sends the data, then waits until there is again more than one block to send.
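In C, the check in step 4 might look something like this (the FIFO and block sizes are made-up numbers, and fifo_wants_block is just an illustrative name):

```c
#include <stdbool.h>
#include <stdint.h>

#define FIFO_SIZE  64   /* illustrative FIFO depth               */
#define BLOCK_SIZE 16   /* values per PC datablock, also made up */

/* Step 4: only request the next datablock when the FIFO has room
 * for a whole block, so a reply that arrives late can never
 * overflow the buffer. */
static bool fifo_wants_block(uint8_t fill)
{
    return (FIFO_SIZE - fill) >= BLOCK_SIZE;
}
```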

While this protocol works, I wonder if there's a standard protocol for this. I figure that if I follow a standard, somebody has thought a lot harder than I have about corner cases, for instance when serial packets are damaged. Also, other people may already have written PC programs that use the protocol, and then my uC would automatically be compatible.

Does anybody know of a serial protocol that does the above?

Kind regards,
Cedric
 


Offline eurofox

  • Supporter
  • ****
  • Posts: 873
  • Country: be
    • Music
You forgot something very important  :palm:

You should check the integrity of each message by adding a checksum that is verified on reception, answered with an ACK or NACK, and resent on failure, with a resend counter and error procedures in case of persistent problems.
eurofox
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Look into XON/XOFF handshaking. That is standard.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline cdwijsTopic starter

  • Regular Contributor
  • *
  • Posts: 57
You forgot something very important  :palm:

You should check the integrity of each message by adding a checksum that is verified on reception, answered with an ACK or NACK, and resent on failure, with a resend counter and error procedures in case of persistent problems.

I would like to use CRC-16. As far as I know this has the best chance of catching transmission errors, but I think it costs a lot of processing power. As an alternative I am thinking about two 1-byte checksums: one a XOR of all the previous bytes, and one an addition of all the bytes. This is very cheap in terms of processing power, but how good is it at catching transmission errors?
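For reference, a sketch of what I mean by the two 1-byte checksums (untested, names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* The two cheap 1-byte checks: a running XOR and a running 8-bit
 * sum over the message bytes. Both cost one operation per byte,
 * but neither detects two bytes swapping places, and the XOR
 * misses any bit position that flips an even number of times. */
static void cheap_checksums(const uint8_t *buf, size_t len,
                            uint8_t *xor_out, uint8_t *sum_out)
{
    uint8_t x = 0, s = 0;
    for (size_t i = 0; i < len; i++) {
        x ^= buf[i];
        s = (uint8_t)(s + buf[i]);   /* wraps modulo 256 */
    }
    *xor_out = x;
    *sum_out = s;
}
```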

Kind regards,
Cedric

 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
You forgot something very important  :palm:

You should check the integrity of each message by adding a checksum that is verified on reception, answered with an ACK or NACK, and resent on failure, with a resend counter and error procedures in case of persistent problems.

Depends on the application. A short RS232 link in a benign environment won't see a bit error in months. I once implemented a robust master to multi-slave protocol running over RS485. A PC application and a USB-to-RS485 converter were used to test and configure the nodes. I didn't bother implementing any error recovery in the PC application because in days of operation flat out at 115k baud I didn't see one error that needed recovering.

In passing, anything using 'NACK' isn't robust. A validated message can be acknowledged; you don't know what anything else is, so how or when do you decide not to acknowledge it?
« Last Edit: September 09, 2014, 03:35:21 pm by Rufus »
 

Offline eurofox

  • Supporter
  • ****
  • Posts: 873
  • Country: be
    • Music
You forgot something very important  :palm:

You should check the integrity of each message by adding a checksum that is verified on reception, answered with an ACK or NACK, and resent on failure, with a resend counter and error procedures in case of persistent problems.

Depends on the application. A short RS232 link in a benign environment won't see a bit error in months. I once implemented a robust master to multi-slave protocol running over RS485. A PC application and a USB-to-RS485 converter were used to test and configure the nodes. I didn't bother implementing any error recovery in the PC application because in days of operation flat out at 115k baud I didn't see one error that needed recovering.

In passing, anything using 'NACK' isn't robust. A validated message can be acknowledged; you don't know what anything else is, so how or when do you decide not to acknowledge it?


I have been writing communication protocols since the '80s. With a serial line you have absolutely no guarantee that the data arriving on the other side is correct; an "industrial grade" system always uses an integrity check by adding a one- or two-byte checksum.
Even with the most secure wiring and shielding, someone can simply disconnect the cable  :-DD  and the other side, no problem, will keep running with the data in its buffer  |O
« Last Edit: September 09, 2014, 04:12:52 pm by eurofox »
eurofox
 

Offline FrankBuss

  • Supporter
  • ****
  • Posts: 2365
  • Country: de
    • Frank Buss
As an alternative I am thinking about two 1-byte checksums: one a XOR of all the previous bytes, and one an addition of all the bytes. This is very cheap in terms of processing power, but how good is it at catching transmission errors?
It can miss problems, for example when the same bit position is wrong more than once. I would use CRC8, which is also faster, since it is just a table lookup:

http://www.maximintegrated.com/en/app-notes/index.mvp/id/27
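For reference, a bitwise version of that CRC8 (the Dallas/Maxim polynomial x^8 + x^5 + x^4 + 1 from the app note, in reflected form) is only a few lines; the table version in the link trades 256 bytes of flash for speed:

```c
#include <stdint.h>
#include <stddef.h>

/* CRC8 with the Dallas/Maxim polynomial x^8 + x^5 + x^4 + 1,
 * processed bit by bit in its reflected form (0x8C). The table
 * lookup in the app note computes the same value a byte at a time. */
static uint8_t crc8(const uint8_t *buf, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (uint8_t)((crc >> 1) ^ 0x8C)
                            : (uint8_t)(crc >> 1);
    }
    return crc;
}
```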
So Long, and Thanks for All the Fish
Electronics, hiking, retro-computing, electronic music etc.: https://www.youtube.com/c/FrankBussProgrammer
 

Offline cdwijsTopic starter

  • Regular Contributor
  • *
  • Posts: 57
As an alternative I am thinking about two 1-byte checksums: one a XOR of all the previous bytes, and one an addition of all the bytes. This is very cheap in terms of processing power, but how good is it at catching transmission errors?
It can miss problems, for example when the same bit position is wrong more than once. I would use CRC8, which is also faster, since it is just a table lookup:

http://www.maximintegrated.com/en/app-notes/index.mvp/id/27
Thank you for showing that XOR+ADD can miss problems. I'll check out the CRC8 technique; it looks promising.
Kind regards,
Cedric
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Quote
I wonder if there's a standard protocol for this.
There are LOTS of standard protocols that do this sort of thing.  How about LAPB over Async HDLC?
You should be able to find lots of existing code, because this is essentially the protocol used by amateur packet radio (which also adds an X.25 layer), and Async HDLC is the link layer for PPP over dialup.  You can even find (I think) serial controller chips that include hardware support for AHDLC (DMA, CRC generation and checking, byte stuffing, etc.)
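The byte stuffing part of Async HDLC is simple; here is a sketch of the minimum transmit-side case (the 0x7E flag and 0x7D escape from RFC 1662; the function name is made up, and a real PPP link may escape further control characters):

```c
#include <stdint.h>
#include <stddef.h>

#define HDLC_FLAG 0x7E   /* frame delimiter  */
#define HDLC_ESC  0x7D   /* escape character */

/* Async-HDLC byte stuffing as in RFC 1662: frames are delimited by
 * 0x7E, and any 0x7E or 0x7D inside the payload is sent as 0x7D
 * followed by the byte XORed with 0x20. Returns the number of bytes
 * written; 'out' must hold at least 2*len + 2 bytes (worst case). */
static size_t hdlc_stuff(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t n = 0;
    out[n++] = HDLC_FLAG;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == HDLC_FLAG || in[i] == HDLC_ESC) {
            out[n++] = HDLC_ESC;
            out[n++] = in[i] ^ 0x20;
        } else {
            out[n++] = in[i];
        }
    }
    out[n++] = HDLC_FLAG;
    return n;
}
```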
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21657
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Serial protocol for filling an array from a PC: is there a standard protocol?
« Reply #10 on: September 09, 2014, 10:52:17 pm »
Depends on the application. A short RS232 link in a benign environment won't see a bit error in months. I once implemented a robust master to multi-slave protocol running over RS485. A PC application and a USB-to-RS485 converter were used to test and configure the nodes. I didn't bother implementing any error recovery in the PC application because in days of operation flat out at 115k baud I didn't see one error that needed recovering.

You feelin' cozy enough with that to run it through EMC susceptibility?  ;)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
Re: Serial protocol for filling an array from a PC: is there a standard protocol?
« Reply #11 on: September 09, 2014, 11:37:08 pm »
It can miss problems, for example when the same bit position is wrong more than once. I would use CRC8, which is also faster, since it is just a table lookup:

And 8-bit CRCs can miss problems when different bits are wrong. Random garbage will produce a valid 8-bit CRC 1 in 256 times, the same as any simpler 8-bit checksum. There is nothing magic or strong about CRCs; they are just a bit better at detecting short bursts of errors, and have value if that is the kind of error you are most likely to get.

Depends on the application. A short RS232 link in a benign environment won't see a bit error in months. I once implemented a robust master to multi-slave protocol running over RS485. A PC application and a USB-to-RS485 converter were used to test and configure the nodes. I didn't bother implementing any error recovery in the PC application because in days of operation flat out at 115k baud I didn't see one error that needed recovering.

You feelin' cozy enough with that to run it through EMC susceptibility?  ;)

I didn't need to. The protocol is robust. I didn't bother implementing any error recovery in a PC application talking to one node over a short cable in a benign production test environment. An error would lock up the link and they would have to manually reconnect, so it wouldn't be a disaster, but it just doesn't happen. It is in production today and has been for more than 5 years. Like I said, it depends on the application.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3713
  • Country: us
Re: Serial protocol for filling an array from a PC: is there a standard protocol?
« Reply #12 on: September 10, 2014, 05:53:57 am »
A checksum is important if a garbled message could do damage, or if you have some way to request retransmission of data.  What is more important is to make your protocol self synchronizing.  If you are transmitting 16 bit words and don't use any framing characters you will have the problem that if you drop a byte or gain an extra (say when initially connecting) you will be forever misinterpreting your data -- until you reset everything.  The normal way to do this is to have a reserved beginning/end of message character, such as a newline in most text protocols.  When the decoder sees the special character it resets to a known state.  The downside is that if you are transmitting binary data you need to escape the end-of-message.
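A minimal sketch of such a self-synchronizing decoder, using a newline as the reserved end-of-message byte and leaving out the binary escaping (names and sizes are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define EOM     0x0A   /* reserved end-of-message byte (newline) */
#define MAX_MSG 64

struct decoder {
    uint8_t buf[MAX_MSG];
    size_t  len;
    bool    drop;      /* frame too long: discard until next EOM */
};

/* Feed one received byte. Returns true when a complete message sits
 * in dec->buf (*msg_len bytes). The EOM byte always resets the state,
 * so a dropped or inserted byte corrupts at most one frame. */
static bool decoder_feed(struct decoder *dec, uint8_t byte, size_t *msg_len)
{
    if (byte == EOM) {
        bool ok   = !dec->drop && dec->len > 0;
        *msg_len  = ok ? dec->len : 0;
        dec->len  = 0;
        dec->drop = false;   /* resynchronize on every EOM */
        return ok;
    }
    if (dec->len < MAX_MSG)
        dec->buf[dec->len++] = byte;
    else
        dec->drop = true;    /* overlong frame: drop it whole */
    return false;
}
```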
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Serial protocol for filling an array from a PC: is there a standard protocol?
« Reply #13 on: September 10, 2014, 06:47:32 am »
Have a look at the first 32 characters of ASCII. They are designed to do this.
http://www.ascii-code.com/
RTS/CTS or XON/XOFF flow control is designed to not overflow receive buffers, as in "flow" control. I would recommend enabling this, because an ATmega wouldn't be fast enough to process a 200-byte array at a reasonable baud rate* due to the missing hardware FIFO.
*A reasonable baud rate is one where sending data from the ATmega gives the right interval of TX-ready interrupts.

One problem with this... your data must be ASCII instead of binary, which reduces throughput.
Solution: hex-code your data, so 0xDEADBEEF will be sent as STX "deadbeef" ETX.
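A sketch of the transmit side of that scheme (STX and ETX are the standard ASCII control codes; the function name is made up):

```c
#include <stdint.h>
#include <stddef.h>

#define STX 0x02   /* ASCII start-of-text control code */
#define ETX 0x03   /* ASCII end-of-text control code   */

/* Frame one 32-bit word as STX, eight lowercase hex digits, ETX.
 * Doubles the payload size, but keeps every control code free for
 * framing and flow control. Writes 10 bytes into 'out'. */
static size_t frame_hex32(uint32_t value, uint8_t *out)
{
    static const char hex[] = "0123456789abcdef";
    size_t n = 0;
    out[n++] = STX;
    for (int shift = 28; shift >= 0; shift -= 4)
        out[n++] = (uint8_t)hex[(value >> shift) & 0xF];
    out[n++] = ETX;
    return n;
}
```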


« Last Edit: September 10, 2014, 06:59:19 am by Jeroen3 »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26891
  • Country: nl
    • NCT Developments
Re: Serial protocol for filling an array from a PC: is there a standard protocol?
« Reply #14 on: September 10, 2014, 03:19:09 pm »
Another idea: how about the XMODEM protocol? With some modification it can be used to transfer a continuous stream of data.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

