Author Topic: TCP Client/server best practice  (Read 1751 times)


Offline Retirednerd2020Topic starter

  • Regular Contributor
  • *
  • Posts: 58
  • Country: us
TCP Client/server best practice
« on: March 27, 2024, 07:28:05 pm »
I am programming a small PLC to control a number (~5-6) of devices on a wired Ethernet LAN.  The PLC will be the (only) client.  The LAN is specifically set up for this purpose and no other traffic is expected on this network.  The application is simple: once every 5 seconds, poll the connected equipment for a small amount of data.  The data is <1k for each poll per device, so the traffic will be light.  Occasionally, maybe once every few hours, a small amount of set-up data is written to the equipment -- probably less than 500 bytes or so.  Coordination between the connected equipment is not needed.  Timing is not very critical.  I can have 8 connections at once with this PLC.  The system will not be expanded to more devices.

I'm wondering which of these are best practice (and why).

A.  Connect to each device, transfer the data, disconnect, then repeat 5 seconds later... for several days for a production run.

B. Connect to all of them forever (or at least for a period of a few days).  Then every 5 seconds, check the health of the connection, reconnect only if needed, transfer the data, repeat.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: TCP Client/server best practice
« Reply #1 on: March 27, 2024, 07:45:31 pm »
I'd go for B. Setting up a connection can be prone to memory leaks / dynamic resource allocation bugs.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4957
  • Country: si
Re: TCP Client/server best practice
« Reply #2 on: March 27, 2024, 07:52:00 pm »
Both ways are perfectly valid.

Connecting each time has the benefit of starting with a clean slate in terms of state every time, so in general it is less bug-prone, but it means more work for the TCP/IP stack, extra latency for the handshake every time, etc.

Connecting once and keeping the connection open is more efficient, since when you need to send data you simply throw a packet out there; but if your protocol involves some negotiation and state, there is more of a possibility of getting into a bad state and not recovering. So it is a good idea to detect lockups and reconnect afresh when things go wrong, though that adds extra complexity, etc.

Most of the world wide web works on opening individual connections for each thing (Like every file required to show a website opens a new TCP connection). But things like chat programs typically hold on to an open connection. The differences start to matter more once you get to a bigger scale and need to service millions of clients; for just 8, anything works fine.
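
For illustration, here is a minimal, untested sketch of the keep-it-open-and-reconnect-on-trouble approach in C with plain BSD sockets; the address, port and "poll" command are made-up placeholders, not anything from the actual application:

/* Sketch of strategy B: one long-lived connection per device,
 * re-established only when a call fails. Untested; the address,
 * port and "poll\n" command are placeholders. */
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int dev_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* One poll cycle; returns -1 so the caller knows to reconnect. */
static int dev_poll(int fd, char *buf, size_t len)
{
    const char cmd[] = "poll\n";
    if (send(fd, cmd, sizeof cmd - 1, 0) != (ssize_t)(sizeof cmd - 1))
        return -1;
    ssize_t n = recv(fd, buf, len - 1, 0);
    if (n <= 0) return -1;          /* error or peer closed: force reconnect */
    buf[n] = '\0';
    return 0;
}

int main(void)
{
    int fd = -1;
    char reply[1024];

    for (;;) {
        if (fd < 0)
            fd = dev_connect("192.0.2.10", 5000);   /* reconnect only if needed */
        if (fd >= 0 && dev_poll(fd, reply, sizeof reply) < 0) {
            close(fd);                              /* bad state: start fresh next round */
            fd = -1;
        }
        /* ...parse reply here... */
        sleep(5);
    }
}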
 

Offline Retirednerd2020Topic starter

  • Regular Contributor
  • *
  • Posts: 58
  • Country: us
Re: TCP Client/server best practice
« Reply #3 on: March 27, 2024, 09:42:55 pm »
Thanks!  I'll factor in your comments and decide.
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7767
  • Country: de
  • A qualified hobbyist ;)
Re: TCP Client/server best practice
« Reply #4 on: March 27, 2024, 10:30:15 pm »
Most of the world wide web works on opening individual connections for each thing (Like every file required to show a website opens a new TCP connection).

HTTP/2 has supported request multiplexing over a single TCP connection since 2015. Unfortunately there are many late adopters, despite the setup being simple in most cases. For Apache it's just loading a module plus setting a list of protocols.
 

Offline zilp

  • Regular Contributor
  • *
  • Posts: 206
  • Country: de
Re: TCP Client/server best practice
« Reply #5 on: March 27, 2024, 10:37:47 pm »
I am programming a small PLC to control a number (~5-6) devices on a wired ethernet LAN.  The PLC will be the (only) client.  The LAN is specifically set up for this purpose and no other traffic is expected on this network.  The application is simple.  Once every 5 seconds, poll the connected equipment for a small amount of data.  The data is <1k for each poll per device.  So, the traffic will be light.  Occasionally, maybe once every few hours, a small amount of set-up data is written to the equipment.  Probably less than 500 bytes or so.

Assuming that you control both ends: Forget TCP, just use UDP. Or even raw Ethernet frames, if you don't need routing. < 1 kB fits easily in a single Ethernet frame (the standard MTU is 1500 bytes), so there really is no use for TCP with all the complexity of handling broken connections and stuff.
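
Roughly what the UDP variant of the client could look like (untested sketch; the address, port and payload are placeholders):

/* Untested sketch: one UDP request/response per poll, no connection state.
 * Address, port and payload are placeholders. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in dev = {0};
    dev.sin_family = AF_INET;
    dev.sin_port   = htons(5000);
    inet_pton(AF_INET, "192.0.2.10", &dev.sin_addr);

    const char req[] = "poll\n";
    char reply[1024];

    /* No handshake, no teardown: one datagram out, one back.
     * NB: if the request or reply is lost there is no retry here;
     * a real client would add a receive timeout and resend. */
    sendto(fd, req, sizeof req - 1, 0, (struct sockaddr *)&dev, sizeof dev);
    ssize_t n = recvfrom(fd, reply, sizeof reply - 1, 0, NULL, NULL);
    if (n > 0) {
        reply[n] = '\0';
        printf("got: %s", reply);
    }
    close(fd);
    return 0;
}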
 

Offline zilp

  • Regular Contributor
  • *
  • Posts: 206
  • Country: de
Re: TCP Client/server best practice
« Reply #6 on: March 27, 2024, 10:43:24 pm »
Most of the world wide web works on opening individual connections for each thing (Like every file required to show a website opens a new TCP connection). But things like chat programs typically hold on to an open connection.

That couldn't be further from the truth. Connection: keep-alive is used pretty much universally with HTTP/1.1, HTTP/2 does multiplexing of multiple concurrent requests through a single TCP connection and is used quite a bit, and HTTP/3 uses QUIC and thus no TCP at all anymore, but it also multiplexes concurrent requests through a single flow.
 

Offline tridac

  • Regular Contributor
  • *
  • Posts: 115
  • Country: gb
Re: TCP Client/server best practice
« Reply #7 on: March 27, 2024, 10:52:57 pm »
To keep it simple, avoid any concept of ongoing state. That is, for every transaction, open a socket, make the connection, transfer the data, then close the connection. The standard C library has all the functionality to do that. Even simpler might be to use a connectionless protocol like UDP, rather than TCP. NTP has worked that way for decades without issue, and on a very lightly loaded network it should be 100% reliable...
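
A complete transaction is only a handful of calls - something like this untested sketch (IP address, port and command string are placeholders):

/* Untested sketch of one complete transaction: connect, send the
 * request, read the reply, close. IP, port and command are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int transact(const char *ip, int port, const char *cmd,
             char *reply, size_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0 &&
        send(fd, cmd, strlen(cmd), 0) == (ssize_t)strlen(cmd)) {
        /* NB: a single recv() may return only part of the reply;
         * see the "full response" discussion further down the thread. */
        ssize_t n = recv(fd, reply, len - 1, 0);
        if (n > 0) {
            reply[n] = '\0';
            rc = 0;
        }
    }
    close(fd);          /* no state carried over to the next poll */
    return rc;
}

int main(void)
{
    char buf[1024];
    if (transact("192.0.2.10", 5000, "poll\n", buf, sizeof buf) == 0)
        printf("reply: %s", buf);
    return 0;
}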
Test gear restoration, hardware and software projects...
 

Offline Retirednerd2020Topic starter

  • Regular Contributor
  • *
  • Posts: 58
  • Country: us
Re: TCP Client/server best practice
« Reply #8 on: March 27, 2024, 11:27:36 pm »
Again, thanks.  UDP isn't an option on the server equipment.  The servers are specialized, closed systems, so it is TCP for me.  I think I will be opening/closing each time.
 

Online shapirus

  • Super Contributor
  • ***
  • Posts: 1364
  • Country: ua
Re: TCP Client/server best practice
« Reply #9 on: March 27, 2024, 11:38:54 pm »
Again, thanks.  UDP isn't an option on the server equipment.  The servers are specialized, closed systems, so it is TCP for me.  I think I will be opening/closing each time.
In that case definitely go for single-use connections. Maintaining a reusable long-running connection is no easy thing to implement, and at the same time you don't need that.

UDP in your scenario would be best, but the second best is TCP connect -> do business -> disconnect. Within TCP, HTTP-compatible communication will likely be the easiest to implement, in particular REST -- you'll find a lot of ready-made libraries for that.
 

Online tellurium

  • Regular Contributor
  • *
  • Posts: 231
  • Country: ua
Re: TCP Client/server best practice
« Reply #10 on: March 28, 2024, 12:23:46 am »
I second those gentlemen who suggest creating new connections on every iteration.

Reason: making the system as stateless as possible. It is less complex, therefore good.

TLS is not used, and plain TCP is cheap.

Is the intention to make the PLC a Modbus client that periodically makes Modbus TCP queries? Or is the protocol proprietary?
« Last Edit: March 28, 2024, 12:28:17 am by tellurium »
Open source embedded network library https://mongoose.ws
TCP/IP stack + TLS1.3 + HTTP/WebSocket/MQTT in a single file
 

Offline Retirednerd2020Topic starter

  • Regular Contributor
  • *
  • Posts: 58
  • Country: us
Re: TCP Client/server best practice
« Reply #11 on: March 28, 2024, 12:53:10 am »
The server devices are NOT Modbus TCP.  They expect simple strings via a TCP connection and respond with some data in the form of ASCII strings.  The PLC must parse the strings and do string-to-value conversions and the like.
 

Online tellurium

  • Regular Contributor
  • *
  • Posts: 231
  • Country: ua
Re: TCP Client/server best practice
« Reply #12 on: March 28, 2024, 01:12:28 am »
The server devices are NOT Modbus TCP.  They expect simple strings via a TCP connection and respond with some data in the form of ASCII strings.  The PLC must parse the strings and do string-to-value conversions and the like.

Understood, thank you.
And what are the MCU and Ethernet hardware on the PLC and on the devices, may I ask?
Open source embedded network library https://mongoose.ws
TCP/IP stack + TLS1.3 + HTTP/WebSocket/MQTT in a single file
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7767
  • Country: de
  • A qualified hobbyist ;)
Re: TCP Client/server best practice
« Reply #13 on: March 28, 2024, 11:57:15 am »
I wonder if SCTP wouldn't be the better choice for that type of application.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: TCP Client/server best practice
« Reply #14 on: March 28, 2024, 03:38:56 pm »
SCTP is not well supported and fairly complex, much more complex than TCP.

TCP will work perfectly fine.  I would make a separate connection for each transaction; it seems simpler than managing long-term connections.  My rule of thumb is 1 second: if I'm likely to send more data within 1 second, it probably makes sense to implement the added complexity of keeping a socket alive.  That would be for a LAN; on long-distance / high-latency links I would pick a bigger number -- maybe 10 seconds.  This is just a rule of thumb, and a pretty sloppy one at that; application-specific circumstances may change it in either direction.

UDP would be fine in this situation, assuming you are OK with, in principle, the possibility of losing messages.  On a dedicated LAN with low usage you should basically never have packet loss, but if you have to implement acknowledgement and retransmission you are almost certainly better off with TCP.  There also isn't much advantage over TCP assuming you have access to a good TCP stack for both sides.  On very small embedded platforms it's often hard to get a good TCP stack, and then UDP is a good choice.  There are several packets to set up and tear down connections, and data packets trigger acknowledgement packets.  But on a lightly loaded LAN this overhead is insignificant, while under heavy load TCP has received orders of magnitude more attention from network stack authors and hardware manufacturers, so it tends to be higher performance out of the box than UDP in the real world.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8177
  • Country: fi
Re: TCP Client/server best practice
« Reply #15 on: March 28, 2024, 04:04:45 pm »
As others have expressed, both strategies are valid, and if you use non-broken TCP/IP stacks on both ends, both will work just fine: TCP is guaranteed to never lose a byte, so sending requests and then parsing until you know you have the full response* will work indefinitely. On the other hand, opening and closing a TCP socket is not that expensive an operation at all, so no problem there either. If we were discussing Linux or BSD stacks, there would be no problem: they are basically bug-free.

*) if the protocol makes it difficult to detect the "full reply" condition, then strategy A would be simpler.
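
To illustrate what "parsing until you know you have the full response" means in practice, here is an untested sketch, assuming purely for illustration a newline-terminated ASCII reply:

/* Untested sketch: accumulate bytes until the terminator ('\n' assumed
 * here) shows up, because TCP gives you a byte stream, not messages. */
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Returns 0 when a complete, NUL-terminated reply is in buf, -1 on error. */
int read_reply(int fd, char *buf, size_t len)
{
    size_t have = 0;
    while (have < len - 1) {
        ssize_t n = recv(fd, buf + have, len - 1 - have, 0);
        if (n <= 0)
            return -1;                 /* error or connection closed */
        have += (size_t)n;
        buf[have] = '\0';
        if (memchr(buf, '\n', have))   /* terminator seen: reply complete */
            return 0;
    }
    return -1;                         /* reply too long for the buffer */
}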

Now if you need to use some broken code, possibly in some closed black-box system which you cannot fix, things get different: then it is quite possible that one of the proposed strategies is more likely to trigger a catastrophic bug than the other. Which one is then a matter of luck (it depends on which kind of bug is triggered exactly), so prepare to implement both strategies. Maybe the internal state gets corrupted every now and then; then a fresh connection each time is better. Or maybe closing a socket does not release all the resources, and memory (or some other resource) runs out after 1000 sockets; then a single-socket strategy would easily offer an uptime of years, while single-use sockets would run out in hours.

We are currently having a pretty bad time with a (Western, well-known) brand of hybrid solar inverter which implements MODBUS TCP, but they stop working and start timing out when trying to open a socket, randomly after a few days to a few weeks, so that end customers need to reboot them. There's not much we can do about it, except try to figure out whether some specific usage pattern of ours is more likely to trigger the bug, plus discuss with the manufacturer and hope they can fix their thing; but then again our customers complain to us and to our partners installing those inverters, so we can't just say "oh, you bought a faulty inverter" after they made a >10000 EUR investment.

This hurdle has taught us that stuff like MODBUS TCP has been a big mistake. Compared to e.g. RS485-based Modbus RTU, it's a huge security issue because in practice it will always end up connected to the internet; the fallacy of the "safe internal network" was not well understood in the 1990s. And then it adds a lot of extra complexity by requiring a TCP/IP stack, which would be easy-peasy if the devices were general-purpose computers with an auto-upgradeable general-purpose operating system (like Linux or BSD or even Windows CE or something like that), but they are not; instead they run some random TCP/IP stack quickly cobbled together and tested in lab conditions. What's worse, embedded device manufacturers who are not exactly specialized in IoT (like solar inverter manufacturers) have no incentive to really fix bugs and offer firmware updates.
« Last Edit: March 28, 2024, 04:37:10 pm by Siwastaja »
 
The following users thanked this post: tellurium

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: TCP Client/server best practice
« Reply #16 on: March 28, 2024, 04:19:35 pm »
Also true: I have used an embedded system that leaked file descriptors when you opened and closed sockets, and only had 256 elements in the file descriptor table.  We had to implement aggressive connection reuse, count the number of connections made, and schedule a reboot when it got too high.  Other systems, like those based on Wiznet chips, have a very limited number of simultaneously open sockets, so if you keep the socket open you block anyone else trying to use it.  In this case, if you let the sockets remain open and a connection drops without being shut down cleanly due to a firewall timeout, power loss, network disconnect, or other disruptive event, the system can become unresponsive.  Here it's better to close sockets immediately so they can be reused, and also to aggressively time out sockets that hang before completing their transaction.  These are generally problems with very limited embedded systems, especially when you don't have access to the software.

 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: TCP Client/server best practice
« Reply #17 on: March 28, 2024, 06:15:20 pm »
We are currently having a pretty bad time with a (Western, well-known) brand of hybrid solar inverter which implements MODBUS TCP, but they stop working and start timing out when trying to open a socket, randomly after a few days to a few weeks, so that end customers need to reboot them. There's not much we can do about it, except try to figure out whether some specific usage pattern of ours is more likely to trigger the bug, plus discuss with the manufacturer and hope they can fix their thing; but then again our customers complain to us and to our partners installing those inverters, so we can't just say "oh, you bought a faulty inverter" after they made a >10000 EUR investment.
There is a chance that the TCP/IP stack is tweaked a little bit on the PLC, which may make it somewhat incompatible with mainstream TCP/IP stacks. MODBUS is a PLC control protocol which exchanges state updates continuously, so a long TCP timeout due to missing packets or whatever causes the system to stop working, and the PLC may use tricks to prevent that from happening. A couple of years ago I interfaced lwIP to a Schneider PLC using Modbus TCP. The PLC was doing some funny business which lwIP didn't really like. It worked in the end (with some help from how the PLC is programmed). The bottom line is that UDP would be a much better choice for Modbus compared to TCP/IP.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tridac

  • Regular Contributor
  • *
  • Posts: 115
  • Country: gb
Re: TCP Client/server best practice
« Reply #18 on: March 28, 2024, 06:46:28 pm »
I have some code that might help get you started. I'm using the Prologix LAN-to-HPIB adapter (bought before they got expensive) for remote network access to HP and other test gear, such as voltmeters, counters, etc. You won't need the Prologix module layer, but the network layer module, network.c, handles all the low-level open, read, write, close, etc. For your application, the command-line tool netcmd.c writes a command, gets a reply and prints it to stdout. There's another, netfil.c, which is similar but logs the result string to a text file. The modules are self-contained, with a configurable timeout for the network read function. It just needs gcc and GNU make for the build, and has been built and working on FreeBSD, Linux and even Cygwin for Windows. Here's the project:

https://sourceforge.net/projects/network-instrument-workbench/
Test gear restoration, hardware and software projects...
 

Offline zilp

  • Regular Contributor
  • *
  • Posts: 206
  • Country: de
Re: TCP Client/server best practice
« Reply #19 on: March 28, 2024, 06:47:50 pm »
We are currently having a pretty bad time with a (Western, well-known) brand of hybrid solar inverter which implements MODBUS TCP, but they stop working and start timing out when trying to open a socket, randomly after a few days to a few weeks, so that end customers need to reboot them. There's not much we can do about it, except try to figure out whether some specific usage pattern of ours is more likely to trigger the bug, plus discuss with the manufacturer and hope they can fix their thing; but then again our customers complain to us and to our partners installing those inverters, so we can't just say "oh, you bought a faulty inverter" after they made a >10000 EUR investment.

Well, obviously, those inverters are defective ... so, obviously, you should ask the seller to repair it (who ultimately should ask the manufacturer to repair it)?! I mean, that's a legal right and not just something you have to hope for ...
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8177
  • Country: fi
Re: TCP Client/server best practice
« Reply #20 on: March 28, 2024, 06:53:15 pm »
The bottom line is that UDP would be a much better choice for Modbus compared to TCP/IP.

I agree, because UDP is a packetized protocol, and MODBUS deals with "packets", which are short enough for UDP, and, in good old Modbus RTU, there is the possibility of a missing packet anyway.

Though I can understand why they chose TCP - it superficially seems like a direct match for the octet-delimited serial stream. But on TCP you can't do stuff like using exact timing to separate requests and responses and to signify end-of-packet, like they did on RTU, so they had to add stuff like an extra packet-length header, which works on TCP only on the assumption of bug-free implementations. So the protocol isn't a direct match anymore, plus all the complexity of TCP needs to be handled somewhere, which is still a burden in embedded systems in the 2020s.

Yes, UDP would have been better, probably.
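
For reference, the framing Modbus TCP bolts onto each PDU (the MBAP header) is roughly the following, and the length field only does its job if both ends implement it correctly:

/* The 7-byte MBAP header Modbus TCP prepends to every PDU
 * (all fields big-endian on the wire; shown here just for illustration). */
#include <stdint.h>

struct mbap_header {
    uint16_t transaction_id;  /* echoed back by the server to match replies */
    uint16_t protocol_id;     /* always 0 for Modbus */
    uint16_t length;          /* number of bytes following this field (unit id + PDU) */
    uint8_t  unit_id;         /* slave address, carried over from serial Modbus */
};
/* A real implementation serializes field by field rather than relying
 * on struct layout/packing. */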
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8177
  • Country: fi
Re: TCP Client/server best practice
« Reply #21 on: March 28, 2024, 07:02:27 pm »
We are currently having a pretty bad time with a (Western, well-known) brand of hybrid solar inverter which implements MODBUS TCP, but they stop working and start timing out when trying to open a socket, randomly after a few days to a few weeks, so that end customers need to reboot them. There's not much we can do about it, except try to figure out whether some specific usage pattern of ours is more likely to trigger the bug, plus discuss with the manufacturer and hope they can fix their thing; but then again our customers complain to us and to our partners installing those inverters, so we can't just say "oh, you bought a faulty inverter" after they made a >10000 EUR investment.

Well, obviously, those inverters are defective ... so, obviously, you should ask the seller to repair it (who ultimately should ask the manufacturer to repair it)?! I mean, that's a legal right and not just something you have to hope for ...

It is easy to say this. In reality, it's a multi-billion corporation, with the related bureaucracy. When it comes to consumer protection laws, the one who sells it to the end customer is responsible. So the first one to take the hit is the sales company / installer, who is our direct customer because they install our products too, and generate a lot of revenue for us! So do we tell the end customers: "demand a replacement [from our partner]"? Then what - the inverters are otherwise working and are being bought by the hundreds - should our partner evaluate another brand, re-train installation staff to install those, and should we do the integration work for that new brand and hope it doesn't have another bug? Or should they work with the manufacturer, acting as a broken telephone?

So that is why we have to work simultaneously on two fronts, bypassing both the end user and the seller (our customer) and communicating directly with the manufacturer (whose product we are interfacing with), and at the same time trying to work out whether we can find any pattern in the failures. It could be a simple fix of "don't do X, do Y instead"*, but finding it is indeed hard. And getting through to the engineers in a large corporation isn't easy, either. Legal demands are guaranteed not to lead anywhere.

But whatever we choose to do, it will be a waste of time and money for us anyway, which is why I advise: when interfacing with black boxes outside of your control, keep away from stuff like MODBUS TCP if you can, and for any problem, use the simplest interface you can find.

*) it could be very relevant to this thread: we are doing strategy B, maybe we should do A
« Last Edit: March 28, 2024, 07:05:07 pm by Siwastaja »
 
The following users thanked this post: tellurium

Offline Retirednerd2020Topic starter

  • Regular Contributor
  • *
  • Posts: 58
  • Country: us
Re: TCP Client/server best practice
« Reply #22 on: March 28, 2024, 07:12:53 pm »
Thanks for all of the considered input so far.  Tridac, thank you for the link to the code you suggested.  However, I have somewhat closed systems on both ends.  The PLC is an ABB Micro800 (ladder diagrams, block diagrams and some script), and the servers are what they are.  So far, bench testing has been going well and I have no indication of incompatibilities.

So, I'll be testing out the chosen connect/transact/disconnect TCP method over the next few weeks.  I'll do a bit of a torture test: repeat the process with each server once a second, continuously, for a couple of weeks and see what happens.  If it can survive that, then once per 5 seconds over a couple of days should be relatively safe.
 

Offline tridac

  • Regular Contributor
  • *
  • Posts: 115
  • Country: gb
Re: TCP Client/server best practice
« Reply #23 on: March 28, 2024, 07:17:29 pm »
If it's randomly rebooting, then one way to fix that might be to work out how long the system takes to reboot, then factor that into your design with retries and specified timeouts on packet receive. The standard Unix network timeout can be minutes, but it's not difficult to apply timeouts in the code to get round that and make the system more resilient and deterministic. Sometimes you just have to deal with what you have :-)...
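
Applying a receive timeout is basically one setsockopt() call - an untested sketch (the 2-second value is arbitrary, pick one that suits the reboot time you measured):

/* Untested sketch: cap how long recv() may block, so a dead or
 * rebooting peer can't hang the poll loop for minutes. */
#include <sys/socket.h>
#include <sys/time.h>

int set_recv_timeout(int fd, int seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
}
/* After e.g. set_recv_timeout(fd, 2), recv() returns -1 with errno set to
 * EAGAIN/EWOULDBLOCK on timeout, and the caller can retry or reconnect
 * instead of waiting indefinitely. */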
Test gear restoration, hardware and software projects...
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: TCP Client/server best practice
« Reply #24 on: March 28, 2024, 07:24:00 pm »
We are currently having a pretty bad time with a (Western, well-known) brand of hybrid solar inverter which implements MODBUS TCP, but they stop working and start timing out when trying to open a socket, randomly after a few days to a few weeks, so that end customers need to reboot them. There's not much we can do about it, except try to figure out whether some specific usage pattern of ours is more likely to trigger the bug, plus discuss with the manufacturer and hope they can fix their thing; but then again our customers complain to us and to our partners installing those inverters, so we can't just say "oh, you bought a faulty inverter" after they made a >10000 EUR investment.

Well, obviously, those inverters are defective ... so, obviously, you should ask the seller to repair it (who ultimately should ask the manufacturer to repair it)?! I mean, that's a legal right and not just something you have to hope for ...
That is much easier said than done. The manufacturer will claim it works for them, and then it is up to you to prove the product isn't standards-compliant. Now try to find a standard for how a TCP/IP stack should behave, with all the crutches (like Nagle's algorithm) that have been added to TCP/IP over the decades.

A long time ago I interfaced with super-high-reliability, non-blocking PABX systems which would crash when my system sent an ISDN message they didn't expect. AFAIK that bug never got fixed.
« Last Edit: March 28, 2024, 07:30:28 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

