Author Topic: Required protocols  (Read 3712 times)

0 Members and 1 Guest are viewing this topic.

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Required protocols
« on: October 25, 2023, 12:10:19 pm »
I'm currently trying to understand which protocols are necessary for a server-client program in a given scenario; I'm asking mainly for educational purposes.

In this scenario, we have a PC, a router, a microcontroller, an Ethernet module, and a temperature sensor.

Our goal is to send temperature data to a server using HTTP. The PC will be running the server program, while the microcontroller will run the client program. The temperature sensor operates on the I2C protocol, and suppose I've already written a client program that reads the temperature using I2C.

Based on my understanding, I believe that in addition to the I2C protocol, we also need to implement the IP, TCP, and HTTP protocols for communication.

Please let me know if this setup makes sense or if there are other protocols I should consider.

Thank you
« Last Edit: October 25, 2023, 12:12:03 pm by Mtech1 »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #1 on: October 25, 2023, 12:58:03 pm »
Do yourself a favour and use a Wiznet chip for the Ethernet + TCP/IP communication part. HTTP sits on top of TCP/IP.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Mtech1

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #2 on: October 25, 2023, 01:14:56 pm »
Using raw UDP/IP instead of TCP/IP and HTTP between the server and the microcontroller for polling the sensor or sensors saves you both resources and implementation effort.

UDP/IP makes the most sense for polled sensors, because there is no connection per se, and each message (or datagram) is independent.  (The datagram header contains both the sender and recipient IP addresses and ports, and these are accessible both to the microcontroller and to the server-side program running under a fully-featured operating system.)

A UDP/IP stack is also much simpler to implement on the microcontroller than TCP/IP.
Each UDP datagram should carry at least one byte of data payload, because some implementations do not handle zero-data datagrams correctly.  In general, up to 548 bytes of UDP data (payload) can be supplied in each datagram without having to worry about IP fragmentation (the 576-byte minimum reassembly size, minus 20 bytes of IPv4 header and 8 bytes of UDP header).

The downside of UDP/IP compared to TCP/IP is that there are no reliability guarantees, and routers may simply drop UDP datagrams when they're overburdened.  This rarely occurs in local area networks, though, and is easily handled by the server re-sending the request if there is no response within a specified time limit, for example 15 seconds.  Again, for polled stateless sensors, UDP/IP just makes more sense.

The idea is that the datagram sent to the microcontroller is a query, perhaps identifying the types of sensors or units of measurement, and the datagram the microcontroller sends back to the sender of the query is the corresponding response.

The content and format of the query and response messages are worth designing carefully, to allow for expansion, and for example multiple sensors and sensor types on a single microcontroller.

The above would therefore use UDP/IP for communications between the server and the microcontroller, and I2C or Dallas 1-wire for the communications between the microcontroller and the temperature sensor.
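To make the polling scheme concrete, here is a rough Python sketch using loopback sockets to stand in for both ends (the "T?" query and "T=" response format is purely illustrative, not any standard):

```python
import socket

# "Microcontroller" side: a UDP socket that answers temperature queries.
mcu = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mcu.bind(("127.0.0.1", 0))       # an ephemeral port stands in for the device
mcu_addr = mcu.getsockname()

# Server side: poll with a timeout, re-sending the request on datagram loss.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.settimeout(1.0)

def poll(retries=3):
    """Send a query datagram; re-send if no response arrives in time."""
    for _ in range(retries):
        server.sendto(b"T?", mcu_addr)
        # Device loop, inlined here for the demo: receive, validate,
        # and reply to the source address taken from the datagram itself.
        data, src = mcu.recvfrom(64)
        if data == b"T?":
            mcu.sendto(b"T=23.5", src)   # sensor reading as ASCII payload
        try:
            reply, _ = server.recvfrom(64)
            return reply
        except socket.timeout:
            continue                     # datagram lost: re-send the query
    return None

print(poll())   # b'T=23.5'
```

Note how the device never needs to know the server's address in advance: it simply answers to the source address and port of each valid query it receives.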

For cryptographic security, external chips like the Microchip ATECC608B (less than USD 1 even in singles at vendors like Mouser and Digi-Key) can be used to implement proper encryption, with the key (for example, one half of a key pair) stored on the chip.

As to the server software running under a fully-featured OS, it would make more sense for it to poll a configured set of sensors, archive their readings, and expose the archived (and current) data to a FastCGI/WSGI/etc. application serving HTTP requests under an HTTP(S) server like Apache or Nginx.
A common option is to use an SQL database to store the sensor readings.  The sensor server and the HTTP application would then use SQL to talk to a database service, and the application would use FastCGI/WSGI/etc. to talk to the HTTP(S) server (a detail often handled well by libraries), with the HTTP(S) server configured to handle TLS, authentication, and access control.  Those are the protocols needed.  Additional formats and languages are needed by the application to present the data as graphs: HTML for the page itself, CSS for fine-tuning visuals, and JavaScript (if the graphs are generated on the client side, as recommended, since even phones have ample capability for this) or SVG (for generating precise, scalable vector graphs on the server side, or embedded in the HTML itself).  Such applications are easiest to write in Python, PHP, or Ruby.
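As a rough illustration of that split (the table layout and the JSON view are made up for the sketch), the polling service and the web application share nothing but the database; here both halves sit in one Python file, with SQLite standing in for the SQL service:

```python
import sqlite3, json

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (ts REAL, sensor TEXT, value REAL)")

def archive(ts, sensor, value):
    """Called by the polling service each time a sensor answers."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?)", (ts, sensor, value))
    db.commit()

def app(environ, start_response):
    """Minimal WSGI application: serves the latest reading as JSON."""
    row = db.execute(
        "SELECT ts, sensor, value FROM readings ORDER BY ts DESC LIMIT 1"
    ).fetchone()
    body = json.dumps(dict(zip(("ts", "sensor", "value"), row))).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

archive(1000.0, "t1", 23.5)
# Invoke the WSGI app directly, as a server like Gunicorn or mod_wsgi would.
result = app({}, lambda status, headers: None)
print(result[0])   # b'{"ts": 1000.0, "sensor": "t1", "value": 23.5}'
```

In a real deployment the two halves would be separate processes connected only through the database, and the WSGI callable would be mounted under Apache or Nginx.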
« Last Edit: October 25, 2023, 01:17:22 pm by Nominal Animal »
 
The following users thanked this post: Mtech1

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #3 on: October 25, 2023, 01:59:42 pm »
Sure, I will consider the chip that you suggested, but at this time, I'm just trying to understand the basics that are needed before diving into any coding
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #4 on: October 25, 2023, 02:02:12 pm »
Thank you for the detailed explanation. Just to confirm: if we have multiple clients, each with its own unique IP address, and they want to connect to a server, do we need only the IP protocol, or should we also include TCP? Is that the correct understanding?
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #5 on: October 25, 2023, 02:26:43 pm »
You need to understand what kind of server and what protocol is needed. Go talk to the people who are going to write the software that runs on the server. Which protocol is best depends on their skill set, in terms of engineering time and the abilities of the people involved.

Likely the easiest is to implement HTTP GET requests that convey the information to a webserver. If this information needs to go over the internet or an otherwise insecure network, then the least intrusive way is to use off-the-shelf VPN boxes that create an encrypted pipe for the communication.

But first: try to find overlap between the skill sets of the people who are going to work on this project versus the project's requirements (don't forget to ask about security requirements).

« Last Edit: October 25, 2023, 02:46:32 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Mtech1

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8173
  • Country: fi
Re: Required protocols
« Reply #6 on: October 25, 2023, 03:00:35 pm »
Yes, coordinate it well with the server people. Sensible options include raw UDP datagrams, raw TCP socket, or MQTT over TCP if you think you benefit from the existing MQTT tools or TLS authentication offered by the MQTT broker.
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #7 on: October 25, 2023, 03:08:46 pm »
You need to understand what kind of server and what protocol is needed. Go talk to the people who are going to write the software that runs on the server. Which protocol is best depends on their skill set, in terms of engineering time and the abilities of the people involved.

Let's set security aside for now, and assume the server supports TCP/IP, WebSocket, MQTT, and HTTP.

If you follow the bottom-to-top approach, and since I already have code to read the temperature using I2C, in which sequence would you implement the protocols? I think the sequence to implement to achieve the goal would be IP, then TCP, then HTTP.

If my understanding is correct, I can ask my next doubt.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #8 on: October 25, 2023, 03:19:32 pm »
If your end goal is to make HTTP requests to a server you'll need to have the following networking protocols implemented:

Base ethernet protocols:
- IPv4 and/or IPv6
- ARP

Then your transport & management protocols:
- TCP/IP
- UDP
- ICMP

Then high level protocols like:
- DNS
- DHCP (if you want to support dynamic IP addresses)
- HTTP client

However, do not try to add security as an afterthought. You can't add security afterwards, as good security will very likely affect the way you have to implement your entire system. Security is not just encrypting data but also having a means to limit access to a system and to detect that somebody is tampering with it.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #9 on: October 25, 2023, 03:59:59 pm »
if we have multiple clients, each with its own unique IP address, and they want to connect to a server, do we need only the IP protocol, or should we also include TCP?
With TCP/IP or UDP/IP, you can have multiple servers and multiple clients.

I do believe that HTTP is the wrong protocol to interface to the sensor microcontroller, however.  WebSocket I could accept, if the intent is to have any user interface directly to the sensor via their browser.

With raw UDP datagrams, the high-level loop in the microcontroller waits for a UDP datagram to arrive.  When one does, it checks the payload to see whether it is a valid request; if it is, it copies the source address and port from that datagram into a new, outgoing UDP datagram, fills its data with the sensor reading(s), and sends it.  With unsecured communications (on an assumed protected LAN), this is the simplest to implement.

When an external crypto support chip like ATECC608B is added –– I mention this, because you can find lots of examples of how to use one in the Arduino environment; some boards even include it (or the previous model, ATECC508) by default –– you'll want to add queues, so that the crypto chip can work while the microcontroller is doing something useful.  If you draw a state machine diagram, separating things like "request decryption", "request verification", "response construction", "response encryption", and "response sending", you'll see how you can keep both the MCU and the crypto IC working at the same time, whenever there is more than one UDP datagram pending.
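A simplified software simulation of that overlap (not real hardware concurrency; the two work queues below are only meant to show why separate states keep both units busy when more than one datagram is pending):

```python
from collections import deque

# Requests waiting for the MCU, and requests handed off to the crypto chip.
mcu_queue = deque(["req1", "req2", "req3"])
crypto_queue = deque()
sent = []
timeline = []   # which unit worked on what, per time step

while mcu_queue or crypto_queue:
    step = []
    if crypto_queue:                       # crypto chip processes a response
        step.append(("crypto", crypto_queue.popleft()))
    if mcu_queue:                          # MCU parses/builds in parallel
        req = mcu_queue.popleft()
        step.append(("mcu", req))
        crypto_queue.append(req)           # hand off to the crypto stage
    # responses leaving the crypto stage are ready to send
    sent += [r for unit, r in step if unit == "crypto"]
    timeline.append(step)

print(sent)           # ['req1', 'req2', 'req3']
print(len(timeline))  # 4 steps; fully serial processing would take 6
```

The point is only that, seen as a state machine, "MCU work on datagram N+1" and "crypto work on datagram N" are independent states that can run in the same time slot.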

The difference between TCP and UDP –– TCP/IP meaning a TCP connection over IP networking, and UDP/IP meaning UDP datagrams over an IP network –– is that UDP is a connectionless protocol, throwing around datagrams that may be dropped along the way during congestion, whereas TCP is a connection-oriented protocol that involves handshaking and transparent retransmission, so that it looks like a full-duplex serial connection to the programmer.
A TCP/IP stack requires a lot more resources than a UDP/IP stack does, but there are things like the Wiznet chips nctnico mentioned that implement almost all of that internally, so your microcontroller only handles the datagrams for UDP, or new connections, data, and closing connections for TCP.  Because cryptography really involves only the data, these Ethernet chips are "fully compatible" with crypto chips: the two operations are quite separate.
 
The following users thanked this post: Mtech1

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8173
  • Country: fi
Re: Required protocols
« Reply #10 on: October 25, 2023, 04:09:18 pm »
Two high-level design choices which affect everything else:

1) Need security or not? Decide it early on, instead of trying to add security as an afterthought, as nctnico says. Security is partly choosing, understanding, and configuring the correct libraries and ciphers, but it is also a full process that goes deep into manufacturing: how keys and certificates are managed, and so on.

2) Need direct connections to the device, or does everything always go through server? If the end users (and yourself, in admin mode etc.) are fine doing everything through server, then you can make the device-server interface really, REALLY simple (e.g., raw UDP packets or raw TCP socket with simple binary protocol, preferably packed C structs just to annoy nctnico ;)) and leave the API / UI / UX concerns completely on the server/client pair which is totally separate from the microcontroller and the server-MCU interface.
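As a sketch of how simple that device-to-server message can be, Python's struct module mirrors what a packed C struct on the MCU would put on the wire (the field layout here is invented for illustration):

```python
import struct

# Hypothetical wire format, little-endian: device id (u16),
# sequence number (u16), temperature in centi-degrees Celsius (i16).
FMT = "<HHh"

def pack_reading(device_id, seq, temp_c):
    """What the MCU would transmit: 6 bytes, no framing, no text."""
    return struct.pack(FMT, device_id, seq, round(temp_c * 100))

def unpack_reading(payload):
    """What the server does on receipt."""
    device_id, seq, centi = struct.unpack(FMT, payload)
    return device_id, seq, centi / 100

msg = pack_reading(7, 1, 23.45)
print(len(msg), unpack_reading(msg))   # 6 (7, 1, 23.45)
```

Six bytes per reading, versus a few hundred for an equivalent HTTP request; the trade-off is that both ends must agree on the layout out-of-band.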

On the other hand, if your users want to access the device "directly", then you have to listen to your users and implement more stuff on the microcontroller side.
 
The following users thanked this post: Mtech1

Online tellurium

  • Regular Contributor
  • *
  • Posts: 229
  • Country: ua
Re: Required protocols
« Reply #11 on: October 25, 2023, 04:18:59 pm »
OP,

Consider https://github.com/cesanta/mongoose/
It does what you want.

Wiznet example for SAMD21 (which you can repeat on pretty much any micro) https://mongoose.ws/documentation/tutorials/arduino/w5500-http/
Also, MQTT counterpart: https://github.com/cesanta/mongoose/tree/master/examples/arduino/w5500-mqtt

Disclaimer - I work for the company that develops it.
Open source embedded network library https://mongoose.ws
TCP/IP stack + TLS1.3 + HTTP/WebSocket/MQTT in a single file
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #12 on: October 25, 2023, 04:30:01 pm »
I see it's important to decide between TCP and UDP for any application

From a bottom-to-top approach, we should select TCP or UDP.

I have one use case in mind. Consider that we want to develop a system that controls home lights, a TV, and a fan from an Android app, as well as sends live video to a remote server.

What would be your preferred choice, TCP or UDP?
 
The following users thanked this post: gpr

Offline HwAoRrDk

  • Super Contributor
  • ***
  • Posts: 1478
  • Country: gb
Re: Required protocols
« Reply #13 on: October 25, 2023, 04:31:22 pm »
Likely the easiest is to implement HTTP get requests that convey the information to a webserver.

I would suggest not using GET requests, but instead use POST requests. That is the proper HTTP method for submitting data to the server.

Then your transport & management protocols:
- TCP/IP
- UDP
- ICMP

Then high level protocols like:
- DNS
- DHCP (if you want to support dynamic IP addresses)
- HTTP client

Some supplementary notes with regard to what nctnico said:

ICMP: While not strictly required for the scenario where you are simply sending data to a local server on the same LAN, support for this is advisable if you want to be able to handle exceptional conditions when communicating with other networks (i.e. through a router). For example, if a router doesn't support the MTU size you're sending, and the IP packet has the Don't Fragment (DF) flag set, it'll send you a Fragmentation Needed ICMP message. There are also other things like a router sending Destination Unreachable if it can't forward your packets. ICMP messages are also used for 'ping' and 'traceroute' diagnostics, which you may want to implement as troubleshooting features.

DNS: If you're only ever going to address the destination server by IP address, then this is obviously unnecessary. But if you plan to address the server by hostname (e.g. 'foo.bar.local'), then you need to implement DNS name resolution via a nameserver.

One thing I would suggest is to study and become familiar with the OSI network layer model. You'll find things like IP referred to as 'layer 3', TCP as 'layer 4', or HTTP as 'layer 7'; the layer model is what the numbers are referring to.
« Last Edit: October 25, 2023, 04:33:37 pm by HwAoRrDk »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #14 on: October 25, 2023, 04:53:51 pm »
Here is an example diagram of how one might use a relatively low-powered microcontroller with an external crypto chip like the ATECC608B and an external TCP/IP and/or UDP/IP stack (as implemented by e.g. some Wiznet chips), with readings collected into an SQL database on a server, where a server-side web application under an HTTP server like Apache or Nginx provides human users a view of the sensor readings via their browsers:
              Microcontroller                                        Server
     sensor ───A─── MCU ───C─── Wiznet ═══D═══ sensorservice     serverapp ───F─── HTTP server ═══G═══ Web clients
                     │                               │               │
                     B                               E               E
                     │                               │               │
                   Crypto                       SQL Database
  • A: Typically I²C or Dallas 1-wire to a temperature sensor; SPI bus can also be used.
  • B: ATECC608B uses I²C or a single-wire connection to a microcontroller.
  • C: Typically SPI is used with Wiznet chips, although I suppose UART and I²C variants also exist.
  • D: Typically TCP/IP or UDP/IP.
  • E: SQL connection, handled typically by an interface library to hide the details.
  • F: Typically FastCGI (or for Python, WSGI).  Details often handled by a standard library interface.
  • G: Typically TLS-encrypted HTTP over TCP/IP.  Encryption can be omitted, but that is not recommended, as current browsers will complain about plain HTTP.  Virtual private servers can use Let's Encrypt certificates; local and development environments can use self-signed certificates you import directly into the browser.
Note that there can be many servers, clients, and microcontrollers on the same network, with many concurrent D and G connections.

Similarly, a single MCU may have a number of different sensors; I²C and Dallas 1-wire buses typically support several sensors on the same chain (but for I²C, you do need sensors whose addresses you can modify so that each address on the same bus is unique).  There are also I²C multiplexers like the TCA9548A that allow you to split a single I²C port on an MCU into 8 separate buses (of which only one is active at a time), allowing you to connect up to 8 sensors having the same I²C address.  (Of course, TCA9548A and similar chips can also be chained, giving you more alternate buses.)

Others have suggested other alternatives; I find that good.  There is no single right way of doing this.  The one I've described is the minimal one I can think of without compromises, that's all.  Using TCP/IP and TLS, and even existing libraries like Mongoose et al. can be a better starting point in practical terms; I only ask you to remember that those are not the minimum requirement, just perhaps an easier or more robust starting point.
(There are too many people claiming you need X to do Y because Z all over, without really understanding the exact requirements, nor how they translate to hardware.  I'm sure somebody will barge in to this discussion and claim that ARM processors and SBCs are not sufficient for servers, since they cannot run Windows, or something.)

In particular, there are also cheap small SBCs with Ethernet and I²C buses on GPIOs supported in Linux that you could use, simply connecting the temperature sensor(s) to the SBC directly, with the SBC acting as the server.  A Rock Pi S, for example, has a nice Rockchip RK3308 SoC with three I²C and two SPI buses, so you could easily run a small Apache/Nginx installation with the server application polling the sensors directly.  You wouldn't need a microcontroller at all, then.
« Last Edit: October 25, 2023, 04:57:07 pm by Nominal Animal »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #15 on: October 25, 2023, 05:24:15 pm »
I do believe that HTTP is the wrong protocol to interface to the sensor microcontroller
The problem is that anything else will need some kind of dedicated protocol implementation. From my experience with these kinds of projects: web developers are a dime a dozen, whereas finding somebody able to implement a dedicated UDP or TCP/IP protocol will be much harder (in addition to defining the protocol). On the microcontroller side it doesn't add or remove much in terms of complexity whether you implement an HTTP GET request or a proprietary protocol.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8173
  • Country: fi
Re: Required protocols
« Reply #16 on: October 25, 2023, 05:32:59 pm »
I do believe that HTTP is the wrong protocol to interface to the sensor microcontroller
The problem is that anything else will need some kind of dedicated protocol implementation.

Wat? Last time I looked, HTTP did absolutely nothing to simplify or standardize sensor networking. You have to implement "some kind of dedicated protocol" on top of HTTP, plus do everything else required by HTTP.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #17 on: October 25, 2023, 06:05:04 pm »
I do believe that HTTP is the wrong protocol to interface to the sensor microcontroller
The problem is that anything else will need some kind of dedicated protocol implementation.

Wat? Last time I looked, HTTP did absolutely nothing to simplify or standardize sensor networking. You have to implement "some kind of dedicated protocol" on top of HTTP, plus do everything else required by HTTP.
With HTTP you can do a GET request like http://1.2.3.4/my_sensor.php?value=12336
In the PHP script the value is available in a variable and can be put into a database for retrieval. It doesn't come much more standardised or simpler.
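In Python terms (standing in for the MCU building the request and the PHP script reading $_GET), the whole exchange reduces to a query string; the address and value below are the ones from the example:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Device side: build the URL the MCU would request over TCP.
url = "http://1.2.3.4/my_sensor.php?" + urlencode({"value": 12336})

# Server side: the web server (or PHP's $_GET) parses the value back out.
params = parse_qs(urlsplit(url).query)
print(url)                 # http://1.2.3.4/my_sensor.php?value=12336
print(params["value"][0])  # 12336
```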
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8173
  • Country: fi
Re: Required protocols
« Reply #18 on: October 25, 2023, 06:09:28 pm »
I do believe that HTTP is the wrong protocol to interface to the sensor microcontroller
The problem is that anything else will need some kind of dedicated protocol implementation.

Wat? Last time I looked, HTTP did absolutely nothing to simplify or standardize sensor networking. You have to implement "some kind of dedicated protocol" on top of HTTP, plus do everything else required by HTTP.
With HTTP you can do a GET request like http://1.2.3.4/my_sensor.php?value=12336
In the PHP script the value is available in a variable and can be put into a database for retrieval. It doesn't come much more standardised or simpler.

Definitely useful. I thought we were discussing the interface between server and MCU, my bad. This is why, as I said, the OP needs to decide whether or not it is required/useful to communicate human<->MCU directly, or if it is always human<->server<->device. In the latter case, the device does not need to implement HTTP but can be much simpler: say, something as simple as transmitting the sensor value every five seconds to a predefined UDP port on the server, or keeping a raw TCP socket open and outputting comma-separated decimal numbers (this time I won't suggest a packed C struct).
« Last Edit: October 25, 2023, 06:11:26 pm by Siwastaja »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #19 on: October 25, 2023, 07:54:56 pm »
I do believe that HTTP is the wrong protocol to interface to the sensor microcontroller
The problem is that anything else will need some kind of dedicated protocol implementation.

Wat? Last time I looked, HTTP did absolutely nothing to simplify or standardize sensor networking. You have to implement "some kind of dedicated protocol" on top of HTTP, plus do everything else required by HTTP.
With HTTP you can do a GET request like http://1.2.3.4/my_sensor.php?value=12336
In the PHP script the value is available in a variable and can be put into a database for retrieval. It doesn't come much more standardised or simpler.

Definitely useful. I thought we were discussing the interface between server and MCU, my bad. This is why, as I said, the OP needs to decide whether or not it is required/useful to directly communicate human<->MCU, or if it is always human<->server<->device. In latter case, device does not need to implement HTTP but can be much simpler, say something as simple as just transmitting the sensor value every five seconds to a predefined UDP port of the server, or keep a raw TCP socket open and output comma-separated decimal numbers (this time I won't suggest packed C struct).
No, the device (MCU) makes HTTP requests to the server. Almost every TCP/IP stack for MCUs I've seen comes with an HTTP client example which shows how to implement this: https://docs.wiznet.io/Product/Open-Source-Hardware/http_client

Again, if you are going to create your own proprietary UDP (or whatever) protocol, you are adding unnecessary complexity for the people implementing the server side. Also think about tooling & debugging for that protocol. You can use any browser to test the server side by making GET requests manually (or scripted, if you like using command-line tools). The web server will log the requests nicely for you (if configured to do so), so you can see all the requests and look for problems (manually, or automated through scripting). If you choose POST requests, then the browser likes to see some kind of form to POST, and this form then needs to be updated whenever the fields change. Using a GET request avoids all this extra work.
« Last Edit: October 25, 2023, 07:58:18 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline abeyer

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: us
Re: Required protocols
« Reply #20 on: October 25, 2023, 08:19:52 pm »
With HTTP you can do a GET request like http://1.2.3.4/my_sensor.php?value=12336
In the PHP script the value is available in a variable and can be put into a database for retrieval. It doesn't come much more standardised or simpler.

This will almost certainly drop and/or duplicate data points under some circumstances. Use the POST, Luke.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #21 on: October 25, 2023, 08:43:21 pm »
With HTTP you can do a GET request like http://1.2.3.4/my_sensor.php?value=12336
In the PHP script the value is available in a variable and can be put into a database for retrieval. It doesn't come much more standardised or simpler.

This will almost certainly drop and/or duplicate data points under some circumstances. Use the POST, Luke.
A simple sequence number does wonders.
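A sketch of that trick on the server side (the function name is illustrative): the device includes a sequence number in each request, and the server ignores a number it has just stored, so a retransmitted GET cannot duplicate a data point:

```python
def make_dedup():
    """Return a filter that drops repeats of the last-seen sequence number."""
    last_seen = None
    def accept(seq, value):
        nonlocal last_seen
        if seq == last_seen:
            return False          # retransmitted request: ignore duplicate
        last_seen = seq
        return True               # new reading: store it
    return accept

accept = make_dedup()
print([accept(s, 23.5) for s in (1, 1, 2, 3, 3)])
# [True, False, True, True, False]
```

A gap in the sequence numbers also tells the server that a reading was dropped, which covers the "drop" half of the objection.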
« Last Edit: October 25, 2023, 08:45:18 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tellurium

  • Regular Contributor
  • *
  • Posts: 229
  • Country: ua
Re: Required protocols
« Reply #22 on: October 26, 2023, 10:20:13 am »
I just can't get rid of the feeling that this is just another AI-bot discussion
Open source embedded network library https://mongoose.ws
TCP/IP stack + TLS1.3 + HTTP/WebSocket/MQTT in a single file
 
The following users thanked this post: Siwastaja

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8173
  • Country: fi
Re: Required protocols
« Reply #23 on: October 26, 2023, 12:24:35 pm »
I would strongly recommend using XML over HTTP, with XML parser hand-written in assembly for the target microcontroller. Only this way you get acceptable performance. Incorporating the PPP protocol into the XML payload, using base64 encoding for endianness swap increases throughput and data security. Remember though that TLS1.3 is outdated; use SSL2.0 instead.
 
The following users thanked this post: tellurium

Offline krho

  • Regular Contributor
  • *
  • Posts: 223
  • Country: si
Re: Required protocols
« Reply #24 on: October 26, 2023, 12:52:47 pm »
I would strongly recommend using XML over HTTP, with XML parser hand-written in assembly for the target microcontroller. Only this way you get acceptable performance. Incorporating the PPP protocol into the XML payload, using base64 encoding for endianness swap increases throughput and data security. Remember though that TLS1.3 is outdated; use SSL2.0 instead.
:popcorn: :-DD

Another vote to use MQTT
« Last Edit: October 27, 2023, 03:26:12 am by krho »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #25 on: October 26, 2023, 01:12:38 pm »
Regardless of OP's intent, the discussion itself has merit.

There is a fundamental difference between suggested connectionless approaches: polling versus pushing.
(For connection-oriented approaches the same question is which side, sensor microcontroller or a server, initiates the connection.)

I prefer polling, because (even with UDP/IP) it allows multiple servers to maintain an archive of the readings of the same sensor.  If the microcontroller IP stack implements ICMP, UDP/IP, and DHCP (which is based on UDP/IP) client-side for IP address discovery, there is also no configuration necessary on the microcontroller side (assuming each IP stack has a unique MAC (=ethernet) address, as they should); the sensor is essentially controlled separately by each server.  Using dnsmasq or dnscache as a DHCP server suffices, and lets one name each sensor microcontroller locally with a stable, predefined domain name, based on the MAC address.

Yes, this does mean that you need a server between the sensors and human users.  I prefer that, because then I can use simple firewall rules on the LAN edge to completely drop access to the sensors outside the local LAN.  For such servers, I prefer tiny SBCs in the class of aforementioned Rock Pi S, Orange Pi Zero LTS, and so on, running a minimal Debian-based OS.  They draw very little power, and dedicating each for a small range of purposes ("IoT server/gateway") keeps their configuration simple.  This also opens up several security enhancing techniques, like remote logging to a machine not connectible from the LAN itself (other than for the logging stream); useful if you suspect tampering with the local machines.  For simple sensors, you can even omit encryption, or keep the responses in plaintext and only require a salted hash of a shared secret in the request, for simple access control.

I personally might use a dedicated SBC server to maintain an archive of the sensor readings, but also allow a "dashboard" view from an outwards facing server with access protected using a password for remote monitoring: consider e.g. a summer house you wish to keep an eye on during the winter, or the temperature and moisture of your tool shed.  With proper firewalling, viewing the archived readings from within the same or designated LAN(s), there would be no need to protect say temperature sensor data behind a password.

Setting something like this up with the 'push' approach (where the sensor microcontroller makes connections to log its readings, instead of responding to update requests), requires quite a lot of configuration on the microcontroller end.  I find that approach cumbersome to maintain, especially if you have more than one or two of such microcontrollers.  Keeping the sensor microcontroller identification on the DHCP service (using dnsmasq or dnscache or similar; no need to use bind or similar resource hogs), and the archiving configuration on the server doing the archiving, is easier to maintain effectively, and scales much better with increasing number of sensors and servers.
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #26 on: October 26, 2023, 02:59:30 pm »
I got a bit lost when you started discussing higher-level protocols like HTTP and MQTT. Before that, I had a few doubts because we're following a bottom-to-top approach. It's clear to me that we need to implement I2C first and then move on to TCP/IP or UDP/IP.

In the client code, how do you test whether your TCP/IP or UDP/IP code is working?

 Do you typically implement a ping communication, or are there other tests you perform to check its functionality?
 

Offline HwAoRrDk

  • Super Contributor
  • ***
  • Posts: 1478
  • Country: gb
Re: Required protocols
« Reply #27 on: October 26, 2023, 04:16:12 pm »
I would strongly recommend using XML over HTTP, with XML parser hand-written in assembly for the target microcontroller. Only this way you get acceptable performance. Incorporating the PPP protocol into the XML payload, using base64 encoding for endianness swap increases throughput and data security. Remember though that TLS1.3 is outdated; use SSL2.0 instead.

Also, don't forget to implement RFC1149! :-DD
 
The following users thanked this post: woofy, abeyer

Offline abeyer

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: us
Re: Required protocols
« Reply #28 on: October 26, 2023, 05:57:12 pm »
I would strongly recommend using XML over HTTP, with XML parser hand-written in assembly for the target microcontroller. Only this way you get acceptable performance. Incorporating the PPP protocol into the XML payload, using base64 encoding for endianness swap increases throughput and data security. Remember though that TLS1.3 is outdated; use SSL2.0 instead.

Hey, do you work for my ISP!?
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #29 on: October 26, 2023, 06:00:04 pm »
In the client code, how do you test whether your TCP/IP or UDP/IP code is working?
I do unit testing.  I implement variants of the software to isolate each aspect and test that separately, and won't start integration before those work separately.  That way, I can detect whether a bug is caused by the feature-implementing code, or by interaction with other code.  (Consider, for example, when sensor updates are non-atomic, and the response constructor copies the value mid-update, leading to a garbage response.)

If you start implementing the entire final firmware, you will create bugs that are hard to find, because you don't know where they might lurk.  Writing isolated test cases –– and of course using compiler warnings and such to their fullest extent! –– lets me exclude huge swathes of code, and lets me investigate a smaller, less complex system to find the cause of the bug.

Do you typically implement a ping communication, or are there other tests you perform to check its functionality?
Echo test (responding with the same or trivially modified datagram) is useful, yes.

Let's assume we start with a functioning IP stack.  I would first write and test a firmware that implements UDP sending and receiving, for example some variant of the echo test.  If not provided by the IP stack already, I'd then create and test a different firmware that implements a simple DHCP client.  (Many do, though.)  Next, I'd combine the two in a new firmware, and test that (both DHCP client functionality against a standard DHCP server like dnsmasq or dnscache, and UDP echoing using the obtained IP address): i.e., at bootup, it acquires an IP address via DHCP, and responds/echoes to requests.  (A trick at this point is using very short DHCP leases, say five minutes, to ensure DHCP leases are renewed correctly.  Even an IP address change can be forced at that point, which ensures the IP stack works correctly in a dynamic network environment.)
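The host side of such a UDP echo test can be sketched in a few lines of Python; here loopback sockets stand in for the real microcontroller and network:

```python
import socket

# Host-side sketch of the echo test: a stand-in "device" socket echoes one
# datagram back unchanged.  Loopback substitutes for the real network here.
device = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
device.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
port = device.getsockname()[1]

tester = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tester.sendto(b"temp=23.5", ("127.0.0.1", port))

data, addr = device.recvfrom(1024)       # device receives the datagram...
device.sendto(data, addr)                # ...and echoes it back unchanged

reply, _ = tester.recvfrom(1024)
assert reply == b"temp=23.5"
device.close()
tester.close()
```

Against real firmware, the same tester half works unchanged: point `sendto` at the microcontroller's IP address and port instead of loopback.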
Similarly, for the sensor data acquisition, I'd write a separate test firmware, before combining to the "main" development trunk.

In general, as I develop exclusively on Linux, it is easy to verify the communications are correct.  Tools like Wireshark (which works on all major OSes) allow you to examine the data transferred directly, but I more often use just tcpdump instead.  Tools like netcat (nc), originating from the BSDs, can be used to send and receive datagrams or to make and maintain bidirectional TCP connections from the command line without writing any code (well, other than Bash command-line commands).  Basically, I can use existing tools on the server side for development, and easily verify the communications work, before writing any of the server-side code, and concentrate on getting the client, the microcontroller side, correct.

I'm most comfortable on the server-side code myself; I've written various stuff there professionally, and feel at home there.

For the microcontroller side, I'm just a hobbyist, and prefer an approach of engineering stuff thoroughly, spending whatever time I need.  Thus, instead of needing debuggers and such, I tend to write and compare separate implementations either showing or not showing the same problem, and deconstruct it until I understand exactly what went wrong, and how to fix it, or how to work around the problem if it is not in something fully under my own control.
 
The following users thanked this post: 2N3055, SiliconWizard, Mtech1

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #30 on: October 27, 2023, 11:17:41 am »
Here is an example diagram of how one might use a relatively low-powered microcontroller with an external crypto chip like ATECC608B
I haven't specified the particular hardware like Raspberry Pi, Arduino, or ESP because my primary goal is to grasp the general process first. I've assumed we have a server program running on a PC, and someone has provided the program.

On the client side, there's a home automation system, which includes a microcontroller connected to a Bluetooth module, Wi-Fi module, Ethernet module, and a camera. The lights are controlled through the GPIO pins of the microcontroller via a relay driver board, and the home automation system communicates with the Wi-Fi router at my home. I believe this hardware setup is sufficient for understanding the process between server and client.

As a developer, we have multiple options, and we need to make a choice. For instance, I've opted for Wi-Fi connectivity among other options like Ethernet and Bluetooth. I've chosen TCP/IP over UDP/IP. Furthermore, I'm considering WebSockets, though I had many options like HTTP, MQTT, and others for communication.

I acknowledge that TCP and WebSocket might not be the most optimal choices for this requirement, but I've chosen them due to my personal interest in exploring these options. Following others' suggestions, I plan to add ARP and DHCP to the list.

Now, I've set up what I would call a 'Local Network,' where my router, mobile phone, and the home automation system are interconnected, allowing them to exchange data within this local network.

The server and the home automation system are on different networks. However, I'm currently trying to understand the protocols that establish the connection between the server and my home automation system. It seems that TCP/IP establishes this connection, with WebSocket likely used to transmit data, including commands to turn on lights and record video. I am not sure, so I need someone's confirmation.

In addition, I'm confused about the communication between the home automation system and the router, both of which support Wi-Fi. I'm uncertain about the specific protocols involved and whether they utilize TCP/IP for this purpose.

These are the two points where I'm looking for more clarification from you guys.

 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3374
  • Country: ua
Re: Required protocols
« Reply #31 on: October 27, 2023, 01:18:14 pm »
I think the easiest way is to use an Ethernet-to-serial TTL module.
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #32 on: October 27, 2023, 01:28:59 pm »
I think the easiest way is to use an Ethernet-to-serial TTL module.
Thank you
In the discussion, my primary focus is on understanding the protocols used between the server and clients. I'm more interested in gaining a clear understanding of how communication happens between server and clients.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #33 on: October 27, 2023, 02:51:04 pm »
Here is an example diagram of how one might use a relatively low-powered microcontroller with an external crypto chip like ATECC608B
I haven't specified the particular hardware like Raspberry Pi, Arduino, or ESP because my primary goal is to grasp the general process first.
Sure, that makes sense.  Note how my diagram was just an example –– I could draw probably half a dozen alternate ones depending on the hardware used ––, intended to help with the discussion, rather than a firm suggestion.

On the client side, there's a home automation system, which includes a microcontroller connected to a Bluetooth module, Wi-Fi module, Ethernet module, and a camera. The lights are controlled through the GPIO pins of the microcontroller via a relay driver board, and the home automation system communicates with the Wi-Fi router at my home. I believe this hardware setup is sufficient for understanding the process between server and client.
The camera puts additional constraints on your design choices, though, because of the large amount of data cameras produce.
As an example, consider Omnivision OV5640 five-megapixel camera modules.  There are variants suitable for some microcontrollers (that can provide say a still JPEG image to your microcontroller), but more commonly, using two-lane CSI for use with various single-board computers.  (The camera module itself is supported by the Linux kernel, so it's a matter of whether the SBC hardware supports two-lane CSI or not.)

(Many 32-bit microcontrollers do support, for example, PSRAM for extending their address space, so that one could handle an uncompressed 5MP image in RAM. For example, my Teensy 4.1 supports two cheap PSRAM chips, giving me an extra 16M (16,777,216 bytes) of directly addressable RAM.  The issue is that with e.g. OV5640, the alternative to CSI is a 10-bit parallel bus, which on Teensy 4.1 requires the use of FlexIO, limiting the pins one can use.  Simply put, it is NOT just a matter of "I shall use this MCU and this camera module", even if they seem to have sufficient pins and memory and capabilities.)

However, for the purposes of understanding the communications between the various pieces, let's assume the camera produces still images or a compressed video stream and you have enough RAM on the microcontroller to handle that.

I acknowledge that TCP and WebSocket might not be the most optimal choices for this requirement, but I've chosen them due to my personal interest in exploring these options. As other suggested , Following your suggestion, I plan to add ARP and DHCP to the list.
Sure, I make such choices all the time with my own projects (developed for my own needs, as opposed to for others): it is a good thing, and keeps your motivation high, too.

I'll add a quick recap of ARP, ICMP, and DHCP after the horizontal line below.

Now, I've set up what I would call a 'Local Network,' where my router, mobile phone, and the home automation system are interconnected, allowing them to exchange data within this local network.

I'm currently trying to understand the protocols that establish the connection between the server and my home automation system. It seems that TCP/IP establishes this connection, with WebSocket likely used to transmit data, including commands to turn on lights and record video. I am not sure, so I need someone's confirmation.
Yes, that's basically how it happens.

TCP over IP is your transport protocol for Ethernet and Wifi.  (The difference is the underlying transport below IP: for wired Ethernet it is IEEE 802.3 (also called 'ethernet'), and for WiFi it is one of the other IEEE 802 protocols.  The other IEEE 802 protocols are designed to work seamlessly with IEEE 802.3, so for us application/device/appliance developers, we don't actually need to care.)

Some Wiznet modules and most WiFi modules you can use with microcontrollers do implement a full IP stack, and can handle the IP over Ethernet protocol details (ARP, ICMP) or support for ARP/ICMP over WiFi, provide support for both TCP and UDP protocols, and even implement a simple DHCP client.  Thus, you normally only need to interface to their IP stack via the received and sent datagrams (UDP) or data streams (TCP) and possibly state changes (obtained IP address, lost network connectivity, failed to authenticate WiFi connection, et cetera), and don't need to worry what the underlying transport is.

Bluetooth uses its own set of protocols, with standard types for some use cases (like USB has USB Serial, USB Audio, and USB Video, that do not require device-specific drivers at all).  I haven't used Bluetooth much myself.

In any case, WebSocket, HTTP, and MQTT are protocols, ways to format the data so that the other end correctly interprets them.  They are almost always used for formatting data sent and received over TCP/IP, but are not intrinsically tied to TCP, just to the reliability and ordering guarantees TCP gives.

You do not need to use the same TCP/IP connection for everything.  TCP/IP connections are identified by four things: Source IP address, source port, target IP address, and target port.  Port is a number between 1 and 65535, inclusive, typically used to identify the service.  IANA assigns port numbers (see here), but basically you can freely let your users choose the port for the microcontroller.  Typically the source port number is assigned by the operating system or IP stack, so for TCP/IP, do not require any specific source port, only the microcontroller target port, to identify the service or connection type desired, when the server initiates the connection.
When the microcontroller initiates the connection, only the server address and target port really matter.
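The four-tuple and port assignment can be demonstrated with ordinary Python sockets; the loopback addresses and OS-picked ports below are stand-ins for real service ports like the hypothetical 15190/7045 examples above:

```python
import socket

# A TCP connection is identified by (source IP, source port, target IP, target
# port).  The listener's port identifies the service; the client's source port
# is assigned by the OS/IP stack.  Loopback and OS-chosen ports stand in for
# real addresses here.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))           # port 0 = let the OS choose
listener.listen(1)
service_port = listener.getsockname()[1]  # this would be your chosen service port

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", service_port))
conn, peer = listener.accept()

# The four-tuple as seen from the client side:
src_ip, src_port = client.getsockname()   # OS-assigned ephemeral source port
dst_ip, dst_port = client.getpeername()
assert (dst_ip, dst_port) == ("127.0.0.1", service_port)
assert peer == (src_ip, src_port)         # server sees the same source endpoint

conn.close()
client.close()
listener.close()
```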

WebSockets and HTTP typically use port 80 for unencrypted connections, and 443 for TLS-encrypted connections.  MQTT default unencrypted port is 1883, and TLS-encrypted port is 8883.  But these are just common conventions, and you can choose for yourself.  You could use say port 15190 for the camera, port 7045 for lighting control, and so on, regardless of which protocol each uses; it is up to you.

A serious design question is whether the server makes the connection to the microcontroller, or the microcontroller to the server.  Both have their benefits and downsides, and I already outlined some above; myself preferring the server-initiated approach because of the reasons I outlined.  The client initiating the connection may be more common and preferred by others: I can imagine several cases where that would be better, just not this particular one.  In the client-initiated connections case, the ports used on the microcontroller do not matter, of course, but then you need to configure where the microcontroller should connect –– and if you want to use hostnames instead of IP addresses and ports, you'll need to add DNS protocol (client) support to the list.

In addition, I'm confused about the communication between the home automation system and the  router, both of which support Wi-Fi. I'm uncertain about the specific protocols involved and whether they utilize TCP/IP for this purpose.?
While underlying WiFi transport is one of the IEEE 802 protocols (depending on the WiFi type), they are all compatible with Ethernet, so that from a programmer's perspective it is no different to wired Ethernet connections at all.

The one exception is when the microcontroller or device uses a WiFi module directly: then, the WiFi stack needs to know the name of the access point (the WiFi router to be used), and a password/passphrase or equivalent (depending on the WiFi security model used), and note when there is a WiFi connection established to said access point.  (Depending on the WiFi implementation, this can involve some commands to be sent to the WiFi stack at specific stages, or it can do all of it automatically.)  Other than that, communication over WiFi, including using DHCP to obtain an IP address, is done just like when using wired Ethernet.



When using Ethernet, WiFi, or Bluetooth, we deal with data in packets.  The hardware handles the on-wire format (possibly with some help from the IP stack for details).

For Ethernet (IEEE 802.3), the packet starts with 8 fixed bytes: 7 bytes of preamble, and a start frame delimiter byte.  This is followed by the Ethernet header, consisting of the destination MAC address (6 bytes), source MAC address (6 bytes), optionally a 4-byte VLAN tag (IEEE 802.1Q, identifying the virtual LAN the packet is part of), a 2-byte EtherType/length field (big-endian byte order), 46 to 1500 bytes forming an IP packet, and a 4-byte frame check sequence (a CRC-32 checksum).
Most IP stacks strip out the Ethernet frame, giving you just the IP packet payload; and take just the IP packet, constructing the Ethernet packet around it.
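That header layout is simple enough to build and parse by hand; here is a sketch for the untagged (no VLAN) 14-byte case, using made-up MAC addresses:

```python
import struct

# Parse the 14-byte untagged Ethernet header: dst MAC (6), src MAC (6),
# EtherType/length (2, big-endian).  MAC addresses below are made up.
def parse_ethernet_header(frame: bytes):
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst.hex(":"), src.hex(":"), ethertype

frame = (bytes.fromhex("ffffffffffff")      # destination: broadcast
         + bytes.fromhex("0a1b2c3d4e5f")    # source: hypothetical MAC
         + struct.pack("!H", 0x0800))       # EtherType 0x0800 = IPv4 payload

dst, src, etype = parse_ethernet_header(frame)
assert dst == "ff:ff:ff:ff:ff:ff"
assert src == "0a:1b:2c:3d:4e:5f"
assert etype == 0x0800
```

EtherType values of 1500 or less are instead a payload length (the original IEEE 802.3 framing); values of 0x0600 and above name the payload protocol, as here.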

For WiFi, the packet is similar, and for communications across the access point, also contain destination and source MAC addresses.  Again, most WiFi stacks strip out and construct this frame themselves, so typically we work with only the contained IP packets.  (There are additional packet types for negotiating the connection, of course.)

ARP is the protocol that is used to find out the MAC addresses (hh:hh:hh:hh:hh:hh) of machines in the same local network; the IP address alone does not suffice.  When the target IP address is not in the local network, the MAC address of the gateway (switch or router) is used instead.  Thus, each device has a limited size ARP cache, which maps IP addresses to 6-byte MAC addresses, and vice versa.  This should be internal to the IP stack you use, and not something you normally need to handle yourself.

The next step is obtaining an IP address, netmask (identifying the bits in IP addresses that must be the same for the address to be within the local network), and gateway IP address (router connected to internet).  These can be configured statically, obtained via DHCP protocol using UDP packets, or link-local address autoconfiguration may be used.  (The last one is simply picking a random address in the 169.254.0.0/16 IPv4 block or fe80::/10 IPv6 block, with no gateway so only local network comms are possible; and using ARP to verify nobody has picked that IP address yet, and retrying until an unused IP address within that block is obtained.)
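The random-pick step of IPv4 link-local autoconfiguration is trivial to sketch (the ARP probe-and-retry part is omitted here, as it needs raw sockets):

```python
import random

def random_link_local_ipv4() -> str:
    # IPv4 link-local autoconfiguration (RFC 3927) picks a pseudo-random
    # address in 169.254.1.0 .. 169.254.254.255; the first and last /24 of
    # 169.254.0.0/16 are reserved.  The device then ARP-probes the address
    # and retries with a new pick on conflict (not shown).
    return "169.254.%d.%d" % (random.randint(1, 254), random.randint(0, 255))

addr = random_link_local_ipv4()
octets = [int(x) for x in addr.split(".")]
assert octets[:2] == [169, 254]
assert 1 <= octets[2] <= 254
```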

The DHCP protocol consists of UDP packets (so no connection per se).  There are essentially four types of packets used.  First, the device sends a discover packet from source IPv4 address 0.0.0.0 port 68 to the broadcast address 255.255.255.255 port 67.  The server responds to the same MAC address with an offer, targeted to the offered IPv4 address port 68.  The client is then supposed to do an ARP request to find out if any device on the local network already uses the offered address, and not accept the offer if one does, but this normally only occurs when there is more than one DHCP server on the same local network.

When a suitable offer has been received, or the client remembers it has a DHCP lease that should still be valid, the client then sends a request packet, again from IPv4 source 0.0.0.0:68 to 255.255.255.255:67.  If the DHCP server grants this request, it responds with an acknowledgement packet, which identifies the IPv4 address to be used, the gateway (the netmask is not really needed, as ARP will tell which IP addresses are accessible on the local network, and which need to be directed to the gateway), the lease time in seconds (how long this address grant is valid), and the addresses of DNS servers the device can query for mapping host names into IP addresses.
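The discover packet itself is just a fixed BOOTP header plus a short options list, easy to build by hand; a sketch (the MAC address used in the demo call is made up):

```python
import os
import struct

def build_dhcp_discover(mac: bytes) -> bytes:
    # BOOTP/DHCP wire format (RFC 2131): fixed 236-byte header, then the
    # DHCP magic cookie and options.
    xid = os.urandom(4)                       # random transaction ID
    pkt  = struct.pack("!BBBB", 1, 1, 6, 0)   # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
    pkt += xid
    pkt += struct.pack("!HH", 0, 0x8000)      # secs=0, flags: broadcast bit set
    pkt += b"\x00" * 16                       # ciaddr, yiaddr, siaddr, giaddr all 0.0.0.0
    pkt += mac + b"\x00" * 10                 # chaddr, padded to 16 bytes
    pkt += b"\x00" * 192                      # sname (64) + file (128), unused
    pkt += b"\x63\x82\x53\x63"                # DHCP magic cookie
    pkt += b"\x35\x01\x01"                    # option 53 (message type) = 1 (DHCPDISCOVER)
    pkt += b"\xff"                            # end-of-options marker
    return pkt

pkt = build_dhcp_discover(b"\x0a\x1b\x2c\x3d\x4e\x5f")   # hypothetical MAC
assert len(pkt) == 244
assert pkt[236:240] == b"\x63\x82\x53\x63"               # cookie right after header
```

Sending it would mean a UDP socket bound to port 68 with `SO_BROADCAST` enabled, targeting 255.255.255.255:67, exactly as described above; parsing the offer/acknowledgement is the same header in reverse.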

DHCP over IPv6 is similar, but has some different options, and of course the addresses are 128 bits long (instead of 32 as in IPv4).

When host names and not IP addresses are used, the mapping is done by a DNS server, often with just simple UDP queries (DNS over UDP), although TCP and QUIC, and even TLS-encrypted UDP and DNS-over-HTTPS can be used with some servers.  Again, this is only needed if host names instead of IP addresses are used.  Also, if a query produces more than one matching result –– for example, a server may have more than one valid address –– you are supposed to connect to them in a round-robin manner, and not just always hammer the first one even if it does not answer.

If you have a DNS server or cache (like dnsmasq, dnscache, or even bind) for your local network under your own control, you can use the top-level domain .local for your local network.  Such name queries are reserved for the local network.  Thus, using names like master.bedroom.local or temperature.local or whatever.you.want.local are perfectly allowed within your local network; and, if your computers/tablets/etc. are configured to use that DNS server while connected to the local network (requiring only the DHCP server configuration to point to this DNS server), you can use those names in your browser, too, even when external internet is also available.

ICMP is a protocol used at the IP network level.  When you send a message outside the local network via the gateway, you may receive an ICMP message telling you the recipient is unavailable, for example.  The IP stack should implement this transparently for you in most cases.  The sender is not notified of dropped packets, but the unavailable notification (and other error conditions) is useful for early detection that a TCP connection cannot be made.  Otherwise, it would take the TCP response timeout to fire before it would be noticed, and that timeout can be quite long (minutes instead of seconds, typically, to deal with temporary connection hiccups).
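The early-failure behaviour is easy to observe from the application side. On loopback the rejection arrives as an immediate TCP reset rather than an ICMP message, but the effect the application sees is the same: the connect fails right away instead of waiting out a long timeout. A sketch:

```python
import socket

# Reserve a loopback port, then close it, so we know nothing listens there.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5.0)                 # upper bound; the refusal arrives much sooner
try:
    s.connect(("127.0.0.1", dead_port))
    refused = False
except (ConnectionRefusedError, socket.timeout):
    refused = True                # immediate RST (or ICMP unreachable remotely)
finally:
    s.close()

assert refused
```

Without that notification (e.g. packets silently dropped by a firewall), the same connect would instead block until the timeout fires.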
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #34 on: October 27, 2023, 03:38:42 pm »
At the application level, one's own code should monitor whether there is a link established (cable connected for Ethernet, configured access point connected and authorized for WiFi), and a valid IP address (not in use by anyone else per the ARP cache, and/or a valid IP lease from a DHCP server).
Trying to send or waiting to receive anything when either of those is missing is futile; plus, the human user would probably like to know.
(I like using small displays instead of LEDs for this, normally turned off, and only showing the state and statistics when a button is pressed, with an inactivity timeout that turns them off in a minute or two.  For small I²C-bus OLED displays, use the standard board and footprint, as they do age and dim, and may need to be replaced in a couple of years if heavily used.)

The microcontroller firmware may need to implement calls or commands to the IP stack to do the DHCP queries when using DHCP and so on; these depend on the exact module or IP stack used, but they should be well documented with plenty of examples.  (If not, use something else!)

This leaves the application protocols: your own protocol using raw UDP datagrams or a TCP connection, WebSocket, HTTP, MQTT; and whether encryption (TLS or something else) is used.  You can use one protocol and one IP port for all, or split the features provided by the microcontroller into separate ports.
Normally, these are dictated by the existing server software you want to use if that decision has already been made.

While we've used "client" for the microcontroller and "server" for the computer or SBC running a fully-featured OS like Linux or BSD or Mac or Windows, another design question is which one initiates the connections, or sends the UDP datagrams asking for updates on the status.
You could even use raw UDP, unencrypted, for the video stream from a camera, especially if you dedicate a specific port for it, with the same port sending the data on the microcontroller also receiving control messages (start, stop, image controls).  TCP for such a video stream may require significant buffers on the microcontroller, because each packet is kept in memory until reception has been acknowledged by the other end, and at video bandwidths, that can be quite a lot.

If the design is instead changed to use a Single-Board Computer for the camera, you essentially merge the microcontroller to the SBC.  Suitable ones are not expensive, and Raspberry Pis are not the only option (although camera selection varies); there are Rock Pi S, Orange Pi Zero LTS (and many other models), Banana Pi Zero (and many other models), NanoPi and other models, especially NanoPi Duo2: $34 with a 5MP camera module from FriendlyElec, 55mm by 25.4mm, WiFi + Bluetooth, runs ARM Debian/Ubuntu, USB2 and USB3 and Ethernet are available on pins, only requiring a connector (MagJack for Ethernet; $12 for the carrier board with both USBs and Ethernet), with a MicroSD card for OS and storage.  Despite their size (especially for NanoPi Duo2), they do run a full Linux installation, and work exactly like a Linux server (no display [well, composite output only!], console/UART and remote access).  (You'll also need an USB 5V 2A power supply, and a 3.3V UART to USB adapter, and a MicroSD card.)
Note: the above are only examples, intended to show you Raspberry Pis are not the obvious choice.  I personally don't use Raspberries, because their hardware USB issues and the Foundation's behaviour annoy me to no end.  Better spread the love around, so that we'll have plenty of choices available in the future!

With an SBC running Linux, instead of the server using IP-based protocols to talk to the microcontroller to access the cameras, the peripherals are accessed using Linux interfaces and libraries, so the project transforms to a Linux programming task.  Many Linux SBCs even include H.264 and H.265 compression hardware, so one can stream an efficiently compressed HD video stream from them without totally tying up the CPU cores.
(Do note that the above referred to OV5640 is limited to 15 frames per second, and 720p@30fps.  Linux SBCs can use most USB webcams, though, including all that use UVC aka USB Video Class.)

As you can see, the complexity level or type of approach can limit or open up the set of options you have.  Camera interfaces are a particularly tricky one.  Many 32-bit ARM microcontrollers do have USB host interfaces, but writing a UVC driver to obtain video from those can be a huge task; that is one of the things a Linux-based SBC gives you "built-in".  Many Bluetooth implementations support audio streams by default, so for audio stuff, one might prefer that over Ethernet/WiFi.  (Synchronizing audio streams over Ethernet/WiFi is quite tricky, but if it is just playing audio from some source, that is usually not required, and a smallish buffer to ensure no gaps in the played audio, using TCP acknowledgements to limit the rate of incoming audio data, or your own UDP protocol, is not that difficult.  Synchronizing playback of an incoming audio stream to a separate device playing corresponding video, however, can be annoying to the extreme; not something I'd like to do.)
 
The following users thanked this post: Mtech1

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #35 on: October 27, 2023, 05:09:06 pm »
the complexity level or type of approach can limit or open up the set of options you have.


Nominal Animal, your detailed explanation of networking and protocols is greatly appreciated. Now, I'm interested in understanding your approach to software development process.

Consider a hobby project: we have a garden 50 meters away from home, and manual plant watering has become burdensome. We aim to automate this process based on soil moisture, weather, and time, as well as allowing remote video access. We should be able to supply water to the plants and monitor from anywhere.

We're determined to use only available workshop parts: a laptop, mobile, Wi-Fi router, LAN cable, a Cortex-M series processor with Ethernet, a moisture sensor, a DS1307 RTC to keep track of real time and date, a temperature and humidity sensor, and a relay opto board.

Feel free to choose your preferred components, such as a Cortex M series processor, specific Ethernet module, camera, temperature and humidity sensor, and moisture sensor.

In my understanding, once you've chosen the processor, the next step would be to select software: an integrated development environment (IDE) that supports your particular processor. For instance, if you opt for a Cortex-M3, you would look for an IDE compatible with that processor.

Don't worry about the selected hardware. With the hardware ready, I'm interested in seeing your step-by-step process for software development. I'm not requesting code, just a clear overview of your approach. I am really sorry if the question is too vague.
« Last Edit: October 27, 2023, 10:12:08 pm by Mtech1 »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #36 on: October 27, 2023, 09:41:32 pm »
Apologies to everyone for the overly long wall of text that follows.  In my defense, I was asked.  No dissertation nor drafts were involved.

Consider a hobby project: we have a garden 50 meters away from home, and manual plant watering has become burdensome. We aim to automate this process based on soil moisture, weather, and time, as well as allowing remote video access. We should be able to supply water to the plants and monitor from anywhere.
Yup, very similar to my own hobby projects.

Note, however, that others will approach this differently, and my own approach is very highly biased by my own preferences.

It would be useful and informative if other members here would describe their own approach.
The comparison, and understanding the reasons for the differences, should be very useful and informative to anyone reading this thread and pondering on these same questions.

We're determined to use only available workshop parts: a laptop, mobile, Wi-Fi router, LAN cable, Cortex-M series processor with Ethernet, moisture sensor, DS1307 RTC to keep track of real time and date, temperature and humidity sensor, and a relay opto board.
The first step would be of course to look at exactly what I can use.  The video is the most annoying part, because most microcontroller dev boards just don't have the hardware to handle video (a CSI interface and enough memory), and it is better suited for SBCs.  (Which is why I personally have a few different cheap SBCs at hand for such projects.)

The second is power supply and antennas.  If we use Wi-Fi, we may need a directional antenna (which we can build ourselves from some connectors, copper wire, and an aluminium soda can!) and more or less line-of-sight (wood is mostly see-through, concrete is iffy, and anything metal or conductive will block), but we'll still need to provide power to the things near the garden.  Since the data bandwidth is low, we could use standard 100Base-T ethernet, which only uses two of the four twisted pairs of a standard LAN cable; the two others we can use for providing voltage.  This is often called "passive POE", but it isn't real Power Over Ethernet, it is just using the unused pairs for power delivery.  Basically, only two of the pairs and the shield are connected at either end to the Ethernet connector (specifically, pins 1 & 2 and 3 & 6), and the other two pairs are used for voltage supply (pins 4 & 5 connected to positive supply, 7 & 8 to negative/zero/supply return).  There will be some loss in the cable, so you'll need a regulator (best would be a switchmode converter with wide input) at the garden end.  (If you look at e.g. eBay, you can find these by searching for "passive poe injector splitter".  They include no electronics, just bring out the lines in an easier-to-use form.)

I do have a Teensy 4.1 (Cortex-M7; uses NXP i.MX RT1062 microcontroller).  I could use that, if I also had a POE-compatible MagJack connector, like Abracon ARJM11**-104 or ARMJ11**-114 (cost about 3€ in singles at Mouser, Digikey, LCSC); normal MagJacks may burn if you supply power via the unused pairs in 10/100.  However, I do not have any camera modules compatible with it, and buying an OV5640 camera module would cost almost as much as a SBC+camera module.

Some ESP32s do have camera support (and obviously WiFi), but they're not Cortex-M parts (they're based on 32-bit Xtensa or RISC-V cores).  None of the ESP32s I currently have has a camera, and I haven't used them with cameras, so I don't know whether one can stream video (or just snapshot images) with one.

Thus, the first major branching point for me is whether video (or frequently updated images) is required or not.

If not, then an additional option would be to have an Ethernet- or WiFi capable microcontroller (even ESP32) in the house, with an RS-485 driver board, and use RS-485 to the garden.  You'd still need to provide a couple of extra wires for (low) power, but you could also split the sensors under more than one microcontroller, each microcontroller with their own RS-485 port.

These choices, couched in non-technical terms, I would put to the Master Gardener (in my case, my elderly mom).  In fact, she'd be happy to drop the video, if she could stream audio and have notifications when her tablet or phone rings; she always has a radio blaring somewhere.  Often two, on different channels, on either side of the house. ::)

The way I would proceed forward from that, would depend highly on the choices made.  I might order some additional parts (I wouldn't spend more than $50 on this project!), and work on other parts of the system.  My preference is to tackle the parts I am the least sure about first, so that if my understanding of their requirements was incorrect, I can adjust the overall plan before I've wasted too much effort.  Thus, no, we cannot really drop the hardware side from consideration when considering how the software development side would proceed.

In my understanding, once you've chosen the processor, the next step would be to select an integrated development environment (IDE) that supports your particular processor. For instance, if you opt for a Cortex-M3, you would look for an IDE compatible with that processor.
Well, not really, because in any case I would be using either the ARM GCC or the LLVM/Clang toolchain –– they support all Cortex-M variants, are basically compatible with each other, and I'd only need CMSIS or an equivalent board support package.  For a Teensy, I'd be using Teensyduino under the Arduino environment, although it is possible to write code for it on "bare metal" (without supporting libraries: just the microcontroller-specific header files containing the macros and declarations needed to access its peripherals, plus the linker script specifying its address spaces, and some way to upload new firmware to it; Teensies have a dedicated bootloader chip, making them always programmable via a standard USB 1.1 or 2.0 port).

In general, I don't like IDEs, and prefer a simple text editor with syntax highlighting, plus make and Makefiles for project build instructions.  If I want, I can always run something like 'while inotifywait -e close_write source-file.c ; do make ; done' in another window to automagically re-build the project whenever I save the file, regardless of which editor I use.

I do like to organize my projects into directories/folders, with a dedicated directory for my test cases and utilities run on the host/development computer, and a dedicated directory for each of my test firmware forks.  I've done this so long I no longer need to think about it, it's automatic; but for anyone new to this, I would warmly recommend using git source control instead.  (If you do prefer IDEs, pick one that supports git integration.)
Moreover, git allows you to easily collaborate with others, and of course github and gitlab.

As I already mentioned, I use GNU make as my core build tool; see here for its manual.  It uses a plain-text Makefile to control how a project is compiled and linked.  Note, however, that recipe lines must be indented with a TAB, not spaces.  (I often just run sed -e 's|^[\t ][\t ]*|\t|' -i Makefile to convert any leading indentation to a single tab, but my text editor (currently Pluma) distinguishes tabs from spaces anyway: tabs show up as a special arrow as wide as the tab itself.)
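For anyone who hasn't used make before, a minimal sketch of what such a Makefile might look like for a Cortex-M project (the file names, MCU flags, linker script, and target names here are placeholders, not a fixed recipe):

```makefile
# Minimal sketch only; adjust CC, CFLAGS, and linker.ld for your part.
# Recipe lines below are indented with a real TAB character.
CC      := arm-none-eabi-gcc
CFLAGS  := -mcpu=cortex-m7 -mthumb -Os -Wall -Wextra
SRCS    := $(wildcard src/*.c)
OBJS    := $(SRCS:.c=.o)

firmware.elf: $(OBJS)
	$(CC) $(CFLAGS) -T linker.ld $^ -o $@

src/%.o: src/%.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f $(OBJS) firmware.elf
.PHONY: clean
```

Running plain `make` then rebuilds only the objects whose sources changed, which is what makes the inotifywait trick above practical.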

Simply put, I am a strong proponent of the Unix philosophy, and prefer to combine simple tools (like inotifywait from inotify-tools package, and make from the make package, both available in all Linux distributions) to perform tasks and solve problems; I find IDEs confining and restricted, because they impose a specific workflow on me.

Others disagree, of course, and point out how an IDE can help you cross-reference your sources and so on, showing the declaration of your function when you are focused on a place in the code where that function is used.  Such features are useful, but I just don't need them right now.
With an IDE, you do need to pay attention to how the project itself is built: does it use a Makefile behind the scenes, or do you need the IDE itself to build the project?  If you do need the IDE, do you need a specific version, or is any version okay?

Documentation is also extremely important.  Comments in the code should describe your intent: what the code is supposed to achieve and why, not what the code does.  We can read the code, so we can tell what it does; but without proper and up-to-date comments, we don't know whether what the code does is what the programmer intended!

For math and formal stuff, I like to use LibreOffice Writer and Math, and save both the original OpenOffice format files and PDF versions to the project documentation directory (original OpenOffice files in a dedicated subdirectory).  For diagrams, I prefer the SVG format.  I often have data generators and sequence generators written in Bash, awk, or Python; I put these usually in an utils/ subdirectory within the project.

(For example, when testing link bandwidth –– say, how much data I can send from a Teensy 4 using USB Serial to a program running on a Linux machine –– I often use the Xorshift64* pseudorandom number generator to generate "random" data.  It uses a single 64-bit nonzero unsigned integer as its seed and state, and is extremely fast.  For best randomness, I may use only the high 32 bits of each result.  As long as both ends agree on the seed value that starts the sequence, the receiver can simply generate a copy of the same sequence and compare whether each value matches.  FWIW, a Teensy 4 can do over 25 MB/s or 200 Mbit/s this way over USB Serial, with the Linux tty layer becoming the bottleneck.)
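The Xorshift64* generator is only a few lines.  A sketch in Python of the scheme described above, yielding the high 32 bits of each result (the seed value here is an arbitrary nonzero example):

```python
def xorshift64star(seed=0x0123456789ABCDEF):
    """Yield the high 32 bits of successive Xorshift64* results.

    The state is a single nonzero 64-bit unsigned integer; both ends of
    a link regenerate the same sequence from the same seed, so the
    receiver can compare each value to detect corruption or data loss."""
    mask = (1 << 64) - 1
    x = seed & mask
    if x == 0:
        raise ValueError("seed must be nonzero")
    while True:
        # The three xorshift steps, with explicit 64-bit wrap-around.
        x ^= x >> 12
        x ^= (x << 25) & mask
        x ^= x >> 27
        # The '*' step: multiply, keep 64 bits, take the high 32 bits.
        yield ((x * 0x2545F4914F6CDD1D) & mask) >> 32
```

On a microcontroller the same three shifts and one multiply would typically be done in C with a `uint64_t`, where the masking is implicit.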

For my design choices, I often write plain text files (saving them as README.txt) describing my reasoning.  I don't use shorthand or jargon here, because I won't remember what I meant in a year if I do.  For this project, one paragraph could be "Mom doesn't want video, but wants audio to garden.  Mono 16bit 44.1kHz suffices, we tested." or "Can't get POE-compatible MagJacks, but I found some old 4-wire twisted pair cable I can use for RS-485.  Red wire is +V from supply (19V for now), white is 0V, blue is noninverting signal and green inverting signal for RS-485."
(Those are just examples I invented, to show you what kind of things I'd write to record the reasons for design choices.  I often even keep open questions there!)

The very first thing I write is usually a variant of Blink, blinking some LED on the microcontroller board.  I use it to ensure my toolchain and basic build machinery are in working order.

From there on, I proceed one feature at a time, initially creating a separate firmware or fork for each feature, and implementing it with some way of testing that it works.  I usually postpone integration back into the main tree until I have the features sketched out –– that is, I always verify that a feature works correctly before any integration work, and I always include all error checks and security considerations, because I know from experience that if I don't, they will never get implemented at all ––, so that I have a better picture of the details and can see what I need to worry about when combining them into the root design.  Even then, I integrate one feature at a time, to limit the amount of code I need to examine when problems appear.  And they do appear: even the best programmers make many errors; they just learn from them and correct them before they release the code for testing.

I go so far as to write temporary firmwares to build ad-hoc testing devices.  In one case, a friend of mine wanted to add support for a certain vinyl cutter to their program running on Linux, so I had them record the USB data stream the Windows software used to communicate with it.  (It used HPGL for the cutting, and some binary-ish stuff for setting up the mode, sheet size, and so on.)  I used one of my Teensies to emulate the vinyl cutter, down to its USB Vendor:Device ID, with approximately realistic delays, only proceeding if the data it received matched the recorded patterns.  After the details were discovered, I wrote some example Python/awk code to test whether specific example jobs (like a rectangle one inch in from the borders) were correct.  It worked out, and afterwards I wiped that Teensy and returned it to my tool set.

I do have lingering issues from repeated burnout, similar to writer's block, which is why I recommend you ask other members here the same question.  Especially those who do production work, where time spent is an important metric. (It's irrelevant to me; I've always been "slow", or a "perfectionist", or "too meticulous"!)
 
The following users thanked this post: Mtech1

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Required protocols
« Reply #37 on: October 28, 2023, 12:35:47 pm »
Don't worry about the selected hardware. With the hardware ready, I'm interested to see your step-by-step process for software development. I'm not requesting code, just a clear overview of your approach. I am really sorry if the question is too vague.
You need to turn this question around: what are your abilities in terms of programming, how much time do you have and what do you want to learn? Start from there and choose the path that is closest to your goals. Your goal can be anything from trying to hack something together in 1 afternoon to a multi-year project that looks good on your resume.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #38 on: October 28, 2023, 02:32:14 pm »
Yup, very similar to my own hobby projects.

Have you developed a similar system in the past? Could you please share what you have developed? I've created a hypothetical scenario to better grasp the software development process between the server and client, particularly to understand the use of multiple protocols.

Thus, the first major branching point for me is whether video (or frequently updated images) is required or not.
I understand that adding a camera can increase the complexity of the project. Therefore, I'm considering supplying water to plants by monitoring their condition with moisture sensors and incorporating weather predictions, instead of relying on a camera.

I might order some additional parts (I wouldn't spend more than $50 on this project!) ... Thus, no, we cannot really drop the hardware side from consideration when considering how the software development side would proceed.
We won't be pursuing the development parts of this project in the real world, so I believe simply assuming the selected components won't cause any harm going further.

In general, I don't like IDEs, and prefer a simple text editor with syntax highlighting,
It's great to see that you're not using an IDE. This approach can provide a valuable advantage in understanding the overall development process, rather than being tied to a specific IDE.

I proceed with one feature at a time, creating a separate firmware or fork for each feature initially, implement it with some way of testing it works.
I want to bring your attention to why I'm not discussing any specific hardware board; my primary concern is the software. Let's assume we have a custom board equipped with a microcontroller capable of communicating with Wi-Fi, Bluetooth, and Ethernet devices. For the software development process, I'd start by selecting an ARM microcontroller, an Ethernet module, and sensors of your choice, which will help us understand the protocols they use for further development.

Initially, I can write a software program that monitors plant conditions and activates a water supply. The real challenge arises when we want to control this remotely from anywhere in the world. In our discussions, I've realized the need to establish a server-client connection. This would require writing software that adheres to the TCP/IP protocol.

Once the client can connect to the server and exchange data packets, we can proceed to more advanced tasks. This includes implementing a program that follows the WebSocket protocol, alongside the necessary security protocols. This is only a short overview; I might have missed many things in the development process between server and client.

It would be useful and informative if other members here would describe their own approach. I recommend you ask the same question from other members here

I had hoped for more responses from the members, but I understand it depends on their personal interest and time availability. It's possible that others lost interest because there are multiple questions in this thread. We can't force them to help, but we can ask. I believe the discussion in this thread will be beneficial for newcomers.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #39 on: October 28, 2023, 05:37:04 pm »
Yup, very similar to my own hobby projects.
Have you developed a similar system in the past
I like to interface my microcontrollers to all sorts of sensors (no moisture sensors yet) and display modules, whatever I happen to need for a moment, with the microcontroller connected to a SBC or a router via USB.  They're mostly just experiments, where I check if an idea actually works.  I did also participate in a project to build a fusor, and experimented with various types of Pt100 temperature sensor measurements and automation.

However, my elderly mom is an avid gardener in a very rural area, with a couple of veggie patches and a small greenhouse, and I have considered creating something similar for her.  No camera, but some sensors, and like I said, maybe an outdoor speaker for her to blast radio from, which would also alert her if her phone rings.  (She always forgets it indoors.)

Thus, consider my own projects experiments, not completed systems like the one you describe.

I understand that adding a camera can increase the complexity of the project. Therefore, I'm considering supplying water to plants by monitoring their condition using moisture sensors and incorporating weather predictions instead of relying on a camera
The next question is, how widely do the sensors need to be spread around.  If the sensors can be placed within a couple of meters of a central microcontroller in the garden, then they can be directly connected using I²C or Dallas 1-wire.  Otherwise, a more robust local "bus" for data and power to the sensors is needed.

If the sensors can be grouped within a couple of meters of the microcontroller, I would use a microcontroller with Ethernet, using "passive POE" (i.e. two pairs of standard Ethernet cable for 10/100 ethernet, the other two for power delivery).

If the sensors must be spread out, then I'd use several microcontrollers and RS-485, with an ethernet-capable microcontroller as the RS-485 to Ethernet bridge.  (My WiFi routers all run OpenWRT, so I could also use an USB RS-485 dongle on a router, with a simple Linux server program to expose the RS-485 bus on the network.)

This is because the outdoors, with rain, moisture, and atmospheric electricity, is a rather harsh environment for communications buses, and RS-485 is designed for it.  It would push the complexity up, since each sensor or group of sensors would need its own microcontroller.  Shielded cabling (with a braided or foil shield around the insulated conductors within the cable) allows somewhat longer I²C and 1-wire buses, though.

In general, I don't like IDEs, and prefer a simple text editor with syntax highlighting,
It's great to see that you're not using an IDE. This approach can provide a valuable advantage in understanding the overall development process, rather than being tied to a specific IDE.

Yes.  In general, I do prefer to use free/open source tools.  ARM themselves help support gcc and clang toolchains for Cortex-M development, so it is more a question of getting the peripheral programming information from the MCU vendor, and the basic connectivity information (pinouts, power sequencing) from the board vendor.

In addition to the toolchain and programming support files, one needs some way to upload the created firmware to the board.  I pick boards that work with free/open source tools here too.

When useful, one can even use dedicated virtual machines (under free hypervisors like VirtualBox, KVM, etc.) for development.

For the software development process, I'd start by selecting an ARM microcontroller, an Ethernet module, and sensors of your choice, which will help us understand the protocols they use for further development.
I approach this phase completely differently: I do not make any firm decision before I've explored the details.  I do individual tests –– yes, even test firmwares; most of my projects are these! –– to find out if a feature would work, before deciding the hardware collection.

I don't do "here are some components, do something with these".  I consider these as tools I can use to solve a problem or fulfill a need, and just cannot wrap my mind around "this is the tool I want to use; now, what could I use it for?"

In this hypothetical project, I would create a test firmware to read and measure the temperature and moisture of a potted plant.
I would probably go as far as get a pot of soil, and put it in the fridge or outside in the cold to see how the measurements change.
I would test the Ethernet connectivity and powering via the extra pairs, before I decide whether it is what I will use.

In other words, I write quite a bit of code in the design phase, regardless of whether I will use any of it in the actual project itself.
Conversely, my design phase takes a long time, because when I've arrived at a design I'm confident with, the actual project implementation phase is more about integration of the previously separately tested parts, and testing them.  I spend quite a lot of time testing unexpected inputs, lost connections, and so on; I just hate it when my tools simply crash.  (I hate it more when my tools silently break my data because an error occurred that the tool ignored, though.)

Initially, I can write software program that monitoring plant conditions and activating a water supply. The real challenge arises when we want to control this remotely from anywhere in the world. In our discussions, I've realized the need to establish a server-client connection. This would require writing software that adheres to the TCP/IP protocol.

Once the client can connect to the server and exchange data packets, we can proceed to more advanced tasks. This includes implementing a program that follows the WebSocket protocol, alongside necessary security protocols.
Right, except that if TLS is used, it is implemented on top of TCP/IP, and the WebSocket (or other) protocol then sits on top of that.
(We have a stack of layers: IEEE802.* (Ethernet/WiFi), then IP, then TCP, then TLS if used, and then the application protocol.)
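To make that layering concrete, here is a minimal sketch in Python of a client pushing one reading over plain TCP (the host, port, and message format are invented for illustration; the OS network stack provides the IP and TCP layers, and TLS or WebSocket framing would be wrapped around this same socket):

```python
import socket

def send_reading(host, port, line):
    """Open a TCP connection (the OS handles IP and TCP underneath),
    send one newline-terminated reading as the application-layer
    payload, and return the server's short text reply."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(line.encode() + b"\n")
        return s.recv(1024).decode()
```

With TLS added, the same socket would first be wrapped with `ssl.SSLContext.wrap_socket()`, and the application protocol would then run inside that encrypted channel.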

Let's say we have completed the project up to a house-facing Ethernet connection.  (We have a wall power supply powering the sensors, and by testing the microcontroller on the other end of that Ethernet connection we know we have the sensors correctly interfaced.)

The next step is exposing the microcontroller to the outside world: either just to the local network, via anyone's browser, or to anywhere in the world that has the necessary access.

(To do this, I prefer to use a server-side application –– a "server" or "service" –– to manage the sensors and archive their data; and another, running under HTTP server like Apache or Nginx, to expose that data as web pages.  However, this preference is just my personal one.)

Your choice of WebSocket means that the microcontroller itself responds to WebSocket queries.  You do need a HTML page with some JavaScript to do the WebSocket queries to the microcontroller on behalf of the human user, but those files can even be local files on their users' computers.  Of course, having them on a server accessible from both the local network and the internet, would make things easier.  The JavaScript code in the HTML page causes the browser to do the WebSocket queries providing the sensor data, and additional JavaScript draws and lists the received data in the browser.

There are a few different ways of protecting the WebSocket from attacks and unauthorized access.  Using TLS (i.e., encrypting the WebSocket connections) is one, with access limited by a salted hash of a passphrase or time-dependent secret in the request URI.  Another is to use a firewall to limit access to the WebSocket IP address to only machines within the local network, with access from internet using an encrypted tunnel (SSH, OpenVPN, etc.).
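For the salted-hash part, a sketch of what storing and checking a passphrase could look like in Python, using the standard library's PBKDF2 (the iteration count and the example passphrase are arbitrary; tune the count for your hardware):

```python
import hashlib
import hmac
import os

def hash_passphrase(passphrase, salt=None, iterations=200_000):
    """Return (salt, digest) for storage; a fresh random salt per entry
    means identical passphrases still produce different digests."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)
    return salt, digest

def verify_passphrase(passphrase, salt, expected):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_passphrase(passphrase, salt)
    return hmac.compare_digest(digest, expected)
```

Only the salt and digest ever need to be stored server-side; the passphrase itself is never written down.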

Routers running OpenWRT can provide such tunnels, and you can use DynDNS and similar services so you can use a stable name (even when your internet provider gives a different IP address to the router), and when LuCI (OpenWRT web interface) is used, it already has a suitable HTTP server for the HTML and Javascript.  So, using an OpenWRT-based router for the local network would also make things easier.

Without a suitable router, the smallest SBC with Ethernet that runs Linux can be used as the server (for both local network and external connections), and it consumes at most a few watts of power (powered from a USB wall wart).  However, when you do have such an SBC, it could be moved to the garden and serve everything from there, eliminating several microcontrollers and components...  See what I mean about not picking the hardware first, and instead testing (including programming and experimenting) and considering all the parts and then the entire system, before finally deciding which hardware to use?  ;)

I had hoped for more responses from the members
Yeah, multiple questions in the same thread is not the best option... and they might have perceived the question in reply #35 as directed only at me.

I try to avoid that kind of perception by replacing "your" with something like "your –– and other members' here as well! ––", i.e. "Now, I'm interested in understanding your –– and other members' here as well! –– approach to the software development process.", or perhaps "Now, I'm interested in understanding your (plural, y'all!) approach to the software development process."

Don't worry about it overmuch, though.  I fail English all the time.  It happens; we just try to learn and proceed.

In general, the question of "How do you develop software for a microcontroller-based system with a web interface?" would be too broad here.  A specific system like you outlined is a good start for understanding –– and you basically do need to re-ask the same question about many different systems to understand the variations and differences; just don't ask them all at once! ––, but as you can see, the choice of the hardware and software depends on each other.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #40 on: October 28, 2023, 07:00:36 pm »
Perhaps this helps:

If I were to create a garden monitoring system (with or without a camera), I'd use Ethernet and "passive POE" (10/100 ethernet using two twisted pairs, with the other two twisted pairs providing 20-50V DC voltage) to power a Linux SBC server located at the garden.

That SBC would be connected to at least one microcontroller via USB Serial or UART, with the sensors connected to the microcontrollers.
All of them would be powered at 5V or 3.3V by a switchmode DC-DC supply drawing its power from the "passive POE".
(If using UART, I would use ISO6721 digital isolator or TXU0102-Q1 level shifter, and some TVS diodes or ESD suppressor, to protect the SBC from static discharges and such.)

First, I'd write test firmwares for each sensor type, and test that I can read them correctly and that the sensors work.  The microcontroller would be connected to my development laptop (powered via USB, connecting over USB Serial, using a USB-UART dongle if the microcontroller does not have native USB).  It would be controlled over the serial connection from my laptop, so there is no network protocol per se.  I might even use Arduino for this part.

When I have the collection of sensors and microcontrollers I need, I'd know if I need stuff like an USB hub (because not enough USB ports) or UART multiplexing (because of not enough UART pins).  If I need the latter, I'd have to build and test it, of course: the simplest one is a microcontroller with three or four UARTs, and a hardware USB or USB serial interface.  If I expose each UART as a separate USB Serial endpoint, they'll appear as separate devices in Linux (and as I understand it, in every other OS too).

At this point, I know the hardware I will be using.  My efforts at this point would be divided into two separate, parallel "branches".  One is the SBC to microcontroller serial protocol or protocols and data archival, and the other is the software on the SBC to expose the sensors to the web.

I have done quite a few experiments on SBC to microcontroller communications, which led me to a simple query-response interface with some error checking (CRC), with each query and response associated with an integer key –– so that more than one query can be pending at the same time, and answered in any order.  I don't want to configure the sensors on the SBC, and instead have each microcontroller respond with what sensors it has connected, because to add or replace sensors, I'll have to program a microcontroller anyway.  If I keep the microcontroller and sensors together, say with labels, anyone can reconnect them correctly even after a storm: if the connector fits, it can go there, and it will work.
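As an illustration of that query-response idea, here is one possible frame layout in Python, with a 16-bit query identifier and a CRC-32 trailer.  This exact layout (field widths, byte order, CRC choice) is entirely hypothetical; the real format would be whatever the firmware and the archive service agree on:

```python
import struct
import zlib

def encode_frame(msg_id, payload):
    """Build one frame: 16-bit id, 16-bit payload length, payload,
    then a CRC-32 over everything preceding it (all little-endian)."""
    header = struct.pack("<HH", msg_id & 0xFFFF, len(payload))
    body = header + payload
    return body + struct.pack("<I", zlib.crc32(body))

def decode_frame(frame):
    """Return (msg_id, payload), raising ValueError on a damaged frame."""
    if len(frame) < 8:
        raise ValueError("frame too short")
    body = frame[:-4]
    (crc,) = struct.unpack("<I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("CRC mismatch")
    msg_id, length = struct.unpack("<HH", body[:4])
    if len(body) - 4 != length:
        raise ValueError("length mismatch")
    return msg_id, body[4:]
```

Because the identifier travels with both the query and its response, several requests can be "in flight" at once and answered in any order.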

(I would use an overly large aluminium enclosure for the SBC, with just the cables poking out.  It's easier to make rain-proof, especially if you mount it upside down, i.e. lid downwards.)

Because of Unix philosophy (and knowing that two simple codebases are easier to maintain than one complex one), I would implement the microcontroller firmware(s), and the following parts running on the SBC, at the same time or in random order:
  • the archive service
    This polls the microcontroller(s) for sensor readings using a schedule described in a configuration file.
    Since the archive service is the only one updating/modifying the data, I would likely use SQLite3 for the data archival.
     
  • the web data interface
    This could be a simple WebSocket service running on loopback address, so only accessible within the SBC.
    Queries can ask for the list of sensors, the range of dates the archive spans, and the readings of specific sensors (matching glob patterns) across specific dates and times.
    The web server, Apache or Nginx, would act as a WebSocket proxy for authorized connections.  The exposed WebSocket end would require TLS.
     
  • the web pages and JavaScript displaying the sensor data
    Most of this is HTML and JavaScript that is run on the client browser, constructing and displaying nice charts from the selected date/time combinations (whatever the Master Gardener, aka Mom, can use).

    The other part of this is cookie-based authentication, so that only IP addresses within the local network can access the TLS-secured sensor pages (the web pages, and the WebSocket proxy) without providing any password, and everyone else would have to supply an username and a password.  (Those would be stored in salted hash form.)

    If the local network is protected by a firewall that stops packets claiming to be from a local IP address from coming from outside the local network, and you don't mind anyone cracking your WiFi connection (or getting access to the SBC) from looking at the sensor data, this should suffice for security.
Those listed above are "normal" application development (and typical backend or server-side stuff), as opposed to embedded/microcontroller development.  For the archive service, I'd use C (GNU and POSIX variant) or Python (using raw termios instead of libserial or other serial libraries); for the web data interface/websocket I would use either Python or C; and the third is web service configuration, HTML, Javascript, and a smattering of CSS to make it look nicer.
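If Python were chosen, the storage side of the archive service could start out as small as this (the schema, table, and sensor names are invented for illustration; the real service would use a file-backed database and poll real microcontrollers, and SQLite's GLOB operator handles the glob-pattern sensor queries mentioned above):

```python
import sqlite3
import time

# In-memory database for the sketch only; the real one would be a file.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE IF NOT EXISTS readings (
                  sensor TEXT NOT NULL,
                  ts     REAL NOT NULL,
                  value  REAL NOT NULL)""")

def archive(sensor, value, ts=None):
    """Store one polled reading; the archive service is the only writer."""
    db.execute("INSERT INTO readings (sensor, ts, value) VALUES (?, ?, ?)",
               (sensor, time.time() if ts is None else ts, value))
    db.commit()

def query(pattern, since=0.0):
    """Readings for sensors matching a glob pattern, e.g. 'soil-*'."""
    return db.execute("SELECT sensor, ts, value FROM readings "
                      "WHERE sensor GLOB ? AND ts >= ? ORDER BY ts",
                      (pattern, since)).fetchall()
```

The WebSocket service would then be a read-only consumer of the same database, answering queries like `query("soil-*", since=...)`.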

Overall, only a general understanding of how TCP/IP and the underlying layers work is needed.  TLS will be provided by the web server, so understanding the principle of TLS suffices, though you do need to understand how certificates, server keys, and certificate authorities work.  In particular, I would probably use a self-signed certificate that family members must install into their browsers.

For the archive service, it is useful to understand how termios (the Linux tty layer) is controlled so that it does not mangle your newline characters or buffer entire lines.  Instead of doing silly serial-port scanning, I'd have my udev daemon create a device symlink for each microcontroller, so that the archive server uses those symlink names (defined in the configuration file, or scanned using a simple glob pattern, say /dev/sensor-mcu-*).  I would treat the serial communications as fully asynchronous – that's why each request needs an identifier, so that each response carries that same identifier and more than one request can be "in flight" at the same time – so the read and write sides are essentially separate.
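What "not mangling newlines or buffering entire lines" means in practice: a sketch of putting an already-open serial descriptor into raw mode with the standard termios/tty modules (the baud rate is an example; a real port would be opened via a udev symlink such as a hypothetical /dev/sensor-mcu-0):

```python
import termios
import tty

def configure_raw(fd, baud=termios.B115200):
    """Disable canonical (line) buffering, echo, and CR/NL translation on
    an open tty fd, so bytes pass through exactly as sent, then set an
    example baud rate."""
    tty.setraw(fd)                # clears ICANON, ECHO, ICRNL, etc.
    attrs = termios.tcgetattr(fd)
    attrs[4] = attrs[5] = baud    # input and output speed
    termios.tcsetattr(fd, termios.TCSANOW, attrs)
```

In raw mode, read() returns as soon as at least one byte is available instead of waiting for a newline, which suits the asynchronous request/response handling described above.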

Typical shenanigans I use to ease my development effort include generating a dummy SQLite3 database while writing the WebSocket and web page stuff.  I would likely do that, and the web page stuff, before starting on the WebSocket stuff, because I might find a particularly efficient and nice way of keying the SQL data.  If I started with the WebSocket stuff (which, in reality, generates that database), I'd have to consider whether rewriting that part is worth the effort, leading to unnecessary compromises just because I chose a bad order of deciding things.

While writing the WebSocket stuff, I can use either netcat to do requests and responses, or write a helper using a scripting language like Python, using some existing WebSocket client library.  For the web page stuff, I can do the opposite, using a scripting language with some existing WebSocket server library, that the HTML and JavaScript can query.

I would also have to integrate some kind of fail2ban on the SBC, so that there is at least hope of surviving a request flood.  (Based on log files, fail2ban can make the SBC completely block communications from offending IP addresses.  For example, three wrong passwords to a login form and you're out for 24 hours; a failed SSH connection and you're out for 36 hours, and so on.  All configurable, of course.)

Is this the most sensible approach?  Heck if I know! ;D  It is my approach, based on my personal preferences and experience.  I don't think many would do it like I do, and like I said, I'm not the most efficient developer time-wise.  The idea of putting a globally accessible web server at the center of a garden patch tickles my fancy, too, but I don't want it to be too easily broken or DDOS-hammered by people I don't know.
 
The following users thanked this post: abeyer

Offline abeyer

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: us
Re: Required protocols
« Reply #41 on: October 28, 2023, 08:27:17 pm »
If the local network is protected by a firewall that stops packets claiming to be from a local IP address from coming from outside the local network, and you don't mind anyone cracking your WiFi connection (or getting access to the SBC) from looking at the sensor data, this should suffice for security

I think this is exactly the outline OP needs, and makes a bunch of good assumptions and provides good general directions to pursue (while omitting some dubious ones like writing tcp/ip stacks from scratch.)

One thing I'd take issue with is the network-based authentication, though. Time has repeatedly shown it to be a source of security vulnerabilities. I guess it depends on your personal risk tolerance and need for convenience... but this doesn't seem like a good trade-off to me. Firewalling is the bare minimum, but it is difficult to use it to mitigate scenarios where a device already on your local network is abused to inject traffic that is trusted, and to relay it back out again. This type of attack has featured as a step in many real-world attacks... even something as simple as malicious JavaScript in a web page you browse from a local machine can be used to do this. (That specific vector has been mitigated, but it's just one example... it could just as easily be an abusive or buggy consumer IoT device, or literally any other software issue on a computer that isn't a full privilege compromise but gives the ability to access the network and escalate from there.)

If you're issuing a self-signed certificate anyway, and really want your internal users to be able to access it without a password, I might consider client certificates... they're difficult to set up en masse for large user bases because of the configuration needed in the browser, but you're going to have to do that anyway for your self-signed cert, so it seems like a nice solution for this particular case.
 
The following users thanked this post: Nominal Animal

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #42 on: October 28, 2023, 09:03:19 pm »
One thing I'd take issue with is the network based authentication, though.
True; good points to consider.

The balance here is between the hassle for users, and the sensitivity of the data being protected.  If I included video, I would not be satisfied with IP-address based access control, that's certain.  (In my own networks, I prefer to keep local cameras in a separate network, and not even cross WiFi.)

Firewalling is the bare minimum, but is difficult to use to mitigate scenarios where a device already on your local network can be abused to inject traffic that is trusted and relay it back out again.
It is very true.  My choice in this case was based on the fact that any device already on your network can provide the same data.
(I might have to reconsider the local access (IP-based access control), too, because of the archive of sensor readings.  And a camera changes my security needs anyway.)

Client certificates are a good option, but can be challenging for nontechnical people to use, depending on the device/browser.
(Basically all browsers ask whether you want to trust a self-signed server certificate when first opening the site, and are silent afterwards, which makes server certificates relatively easy to use.  When a server requests a client certificate, most browsers prompt the user to select one, and this is rare enough that nontechnical users may get confused.)

Fortunately, the various user-facing authentication/access control options can easily be tried out together with the users involved.  From the users' point of view it doesn't matter much what is being protected, and finding out what works for them before deciding is a good idea.  For example, giving nontechnical users secure passwords in this kind of scenario will likely lead to the username and password being posted on a note somewhere easily visible to any visitor –– or they'll keep calling you and asking for it, because they forgot it and lost the note.

A key point is that the security scheme should be carefully designed in at the planning stage, and tested on end users.  Even though this approach gives a lot of leeway –– access being controlled by the web server on the SBC, so relatively independent of the other parts –– it is not generally something that can be bolted on top later on.
« Last Edit: October 28, 2023, 09:09:35 pm by Nominal Animal »
 
The following users thanked this post: abeyer

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #43 on: October 29, 2023, 03:37:28 pm »
Perhaps this helps:

Overall, only a general understanding of how TCP/IP and the underlying layers work, is needed

I believe it's the right time to transition from theory to practical experience. I have an ESP32 and plan to use it for experiments. Initially, I'm keeping things simple by working with LEDs and the ESP32. My idea is to run the server code on my PC and the client code on the ESP32 to control the LEDs via a local web interface. While I'm not an expert in Python, I intend to use Python, HTML, and CSS for the server, and C programming for the ESP32. I can write simple code to control the LEDs on the ESP32. I understand that this project might not be easy, but I'm committed to investing my time in it.

Do you have any experience with the ESP32? I'd appreciate your suggestions on the primary steps and approach you'd recommend for this project.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #44 on: October 29, 2023, 05:26:42 pm »
I believe it's the right time to transition from theory to practical experience. I have an ESP32 and plan to use it for experiments. Initially, I'm keeping things simple by working with LEDs and the ESP32. My idea is to run the server code on my PC and the client code on the ESP32 to control the LEDs via a local web interface. While I'm not an expert in Python, I intend to use Python, HTML, and CSS for the server, and C programming for the ESP32. I can write simple code to control the LEDs on the ESP32. I understand that this project might not be easy, but I'm committed to investing my time in it.
That sounds good.

Remember that GPIO pins cannot supply much current, only a few milliamps, and that LED current shoots up rapidly once the forward voltage is exceeded.  Because ESP32s use 3.3V logic levels, this means you'll want to use red LEDs with a 1.6V to 2.2V forward voltage, and a current-limiting resistor in series (between the pin and the LED).  I would use a 1kΩ, 680Ω, or 470Ω resistor here, testing with a separate 3.3V supply to see which one is bright enough.
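The series resistor value follows directly from Ohm's law; a quick sanity check in Python (the example forward voltage and target current are assumptions, measure your own LED):

```python
def series_resistor(v_supply, v_forward, i_led):
    """Ohm's law: the resistor drops (Vsupply - Vf) at the chosen LED current."""
    return (v_supply - v_forward) / i_led

# Red LED (Vf ~ 1.8 V) from a 3.3 V GPIO pin at a gentle 2 mA:
r = series_resistor(3.3, 1.8, 0.002)   # -> 750.0 ohms; nearest standard: 680 or 1k
```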

To drive more than one LED, and to minimise the current drawn from the GPIO pin (instead powering the LED or LEDs from a separate supply), you can use BJTs or MOSFETs.  The WS2812, APA102, and other addressable RGB LEDs use fancier communications protocols, so it's better to start with ordinary LEDs connected directly to GPIO pins through a suitable current-limiting resistor.

For the Python side, you have two basic approaches:
  • Use a Python HTTP server library –– a basic, non-secure http.server module is built in; definitely good enough for experiments, but not for actual deployment on the public internet ––, so that your Python code (via the library) directly communicates with the browsers
  • Use a real HTTP(S) server like Apache or Nginx, which for specific browser requests (URLs) runs your Python code as a WSGI or FastCGI script
The first one is simpler, because configuring Apache/Nginx correctly takes some effort.  Also, in the first case your Python code runs all the time, while in the second it runs only for the duration needed to serve a specific result or web page –– for example, for a GET or POST request to toggle the LED.  I'd start with Python 3 and http.server, and only when ready to move to public-facing stuff would I tackle Apache/Nginx on top.
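A minimal sketch of the first approach with the built-in http.server module (the URL paths are invented for illustration; a real version would forward the state to the ESP32 instead of just remembering it):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

led_state = {"on": False}

def handle_path(path, state):
    """Pure request logic: update state and return (status, body)."""
    if path == "/led/on":
        state["on"] = True
        return 200, "LED on"
    if path == "/led/off":
        state["on"] = False
        return 200, "LED off"
    if path == "/led":
        return 200, "on" if state["on"] else "off"
    return 404, "not found"

class LedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = handle_path(self.path, led_state)
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

# To run:  HTTPServer(("", 8000), LedHandler).serve_forever()
# then open http://localhost:8000/led/on in a browser.
```

Keeping the request logic in a plain function (`handle_path`) makes it easy to test without starting a server at all.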

Edit to add:

It is also possible to run a simple HTTP server directly on most ESP32's, so you don't need the Python/server-side stuff at all: you use your browser to directly connect to the ESP32, which then provides both the HTML pages, and acts on HTTP requests to change the LED states.  That might be the best option to start with, I believe.

Then, you could write some Python code running on a "real" server, connecting to the ESP32 using raw TCP/IP and/or UDP/IP, to get a feel on that too.
Practical experiments and experience trump theoretical knowledge!
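Polling over raw UDP from Python is short enough to sketch here; a thread stands in for the ESP32, and the `temp?`/reply datagram format is invented for the example:

```python
import socket
import threading

def fake_esp32(sock):
    """Stand-in for the ESP32: answers one datagram with a fixed reading."""
    data, addr = sock.recvfrom(64)
    if data == b"temp?":
        sock.sendto(b"23.5", addr)

def poll_sensor(addr, timeout=1.0):
    """Send one request datagram and wait for the single-datagram reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)       # UDP gives no delivery guarantee:
        s.sendto(b"temp?", addr)    # a real client would retry on timeout
        data, _ = s.recvfrom(64)
        return float(data.decode())

# Demo against the stand-in "device" on the loopback interface:
dev = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dev.bind(("127.0.0.1", 0))                       # the OS picks a free port
threading.Thread(target=fake_esp32, args=(dev,), daemon=True).start()
reading = poll_sensor(dev.getsockname())         # -> 23.5
dev.close()
```

Each request/response pair is one independent datagram, which is exactly why UDP fits polled sensors so well.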

Do you have any experience with the ESP32?
I have some ESP32 modules and a couple of boards (like WEMOS/Lolin ESP32 with OLED display), but most still unopened  :P
I've done some experiments, but nothing I'd suggest for starting with.

First, make sure you have Python 3 installed – do read this link, because it lists the various ways of installing Python 3 on various operating systems.  The latest version of Python, 3.12.0, was released on October 2, 2023, but any active Python 3 version is okay.

For the ESP32, I cannot decide whether to suggest Arduino or PlatformIO.  I'm tempted to suggest starting with Arduino and moving to PlatformIO as soon as you feel ready.  However, if you are already comfortable with freestanding C and/or C++ (where most of the standard library is unavailable, as are features like exceptions, so it really is a subset of the C and/or C++ used in hosted applications, i.e. those running on a fully featured OS; "freestanding" and "hosted" are the terms used in the C standards), then I'd say start with PlatformIO and skip Arduino.

Arduino sketches (.ino files) are almost, but not exactly, freestanding C/C++ on top of a minimal C library and Arduino's own libraries.  Arduino runs its own preprocessing pass that adds function declarations and such, and has its own build machinery based on hidden Makefiles, but the amount of tutorials and documentation alone makes it a reasonable place to start.  In addition to the Arduino development environment itself, you'd also need to install the support package for your ESP32 board.
That is, the learning curve is gentle: you'll get visible results faster and with less frustration early on, so your motivation will stay high.

At the core, PlatformIO is a way to set up the toolchain and basic board support files needed to program your board using freestanding C/C++ (GCC and/or Clang), as long as your board is supported in PlatformIO.  You can write and build Arduino sketches in PlatformIO too.
You can use the powerful IDE, or you can use the command-line core with whatever editors you like.  For building and uploading the firmware, you'll use the platformio command-line tool, which will then run (and display) the actual commands it uses.  This means that you can, if you want, move your project "out" from under PlatformIO and turn it into a plain old Makefile-based "bare metal" project, and not be tied to the PlatformIO framework.
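A PlatformIO project is described by a small platformio.ini file; for a generic ESP32 devkit with the Arduino framework it might look like this (the board name depends on your actual board):

```ini
[env:esp32dev]
platform  = espressif32
board     = esp32dev
framework = arduino
monitor_speed = 115200
```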

I personally still use Arduino (and Teensyduino add-on, for Teensy boards) for many quick temporary tools and experiments, because I am pragmatic: tools are just tools.  I don't like being overly constrained by a tool, and I do need the ability to move my code/project out from the framework, and I can do that from and between Arduino and PlatformIO and "bare metal" code.  For my ATmega32u4 boards, I do both "bare metal" (using just avr-gcc, avr-libc, and avrdude) and Arduino stuff.

While Espressif (the manufacturer of the ESP32 modules) supports both Arduino and PlatformIO, not all board vendors do, so if you know exactly which ESP32 board you're going to use, I might be able to give a bit more detail.



As an example of board support in the Arduino environment: one of the 8-bit microcontrollers I really like is the ATmega32u4, because it has a native USB 1.1 port – limited to 12 Mbit/s, or about one megabyte per second, via USB – and there are several bootloaders that make it USB-programmable, so only the board and a USB cable are needed.  It is very commonly used in e.g. custom-built USB keyboards and other Human Interface Devices.

Years ago, SparkFun created the Pro Micro board as Open Source Hardware.  Soon, clones became available cheaply on eBay and elsewhere.  The chip shortage of the last two or three years hit the ATmega32u4 too: prices went up significantly, and for a while the chip was unavailable.  (There are also many sellers on eBay who are confused about which microcontroller they actually have on the board, so one needs to check the provided images for the text on the microcontroller chip itself; the name the board is sold under is not reliable enough.)
While these clones were based on the SparkFun Pro Micro schematics and had the SparkFun Pro Micro pinout, they used the bootloader from Arduino Leonardo, a much larger board but also with ATmega32u4 microcontroller on it.

Thus, in the Arduino editor, you must select 'Arduino Leonardo' as the board, but in your code, use the SparkFun Pro Micro pinout.

There are a few different ESP32 modules, and a lot of different boards using one of those ESP32 modules as their microcontroller.  The differences are mostly in what pins are exposed, in what order, and what peripherals (like LEDs, RGB LEDs, an OLED display module, voltage level translators for interfacing to 5V logic, etc.) are hard-wired in.
The ESP32 "cores" themselves tend to be fully supported (by EspressIf themselves partnering with PlatformIO and others, too), but not all boards have an Arduino/PlatformIO description/support, leading to similar inconsistencies as in Arduino with "Pro Micro" clones, and even uploading/programming problems.
(You can imagine how annoying it is when the GPIO pin numbers silkscreened on the board do not match the definitions in your code, so UART/SPI/I²C etc. does not work, for example.)
« Last Edit: October 29, 2023, 05:32:15 pm by Nominal Animal »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3374
  • Country: ua
Re: Required protocols
« Reply #45 on: October 29, 2023, 05:29:15 pm »
In the discussion, my primary focus is on understanding the protocols used between the server and clients. I'm more interested  to gain a clear understanding of how communication happens between server and clients

The protocol stack for HTTP is pretty simple: HTTP->TCP->IP->Ethernet->PHY

But the TCP protocol is quite heavy for an MCU; UDP is often a better fit because it is more lightweight and doesn't require packet retransmission.

Most MCU and FPGA systems use the UDP stack: UDP->IP->Ethernet->PHY
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: Required protocols
« Reply #46 on: October 30, 2023, 04:50:09 pm »

For the Python side, you have two basic approaches:
Use a Python HTTP server library –– a basic, non-secure http.server module is built-in; definitely good enough for experiments, not good enough for actual deployment on public internet ––, so that your Python code (via the library) directly communicates with the browsers
Use a real HTTP(S) server like Apache or Nginx, that for specific browser requests (URLs) runs your Python code as WSGI or FastCGI script

I've come across two key components, Flask and the Apache server, which are used in this field. I'd like to better understand their differences

From what I understand so far:

Flask is a Python web framework designed for building web applications. It uses its own server.

Apache, on the other hand, is a web server software used to handle incoming HTTP requests.

I'm interested to know more about why and which one fits into our requirements.
 

Offline abeyer

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: us
Re: Required protocols
« Reply #47 on: October 30, 2023, 05:43:30 pm »
Flask is a Python web framework designed for building web applications. It uses its own server.

Flask does have some built-in server support, but like the stdlib http module it really is meant as a development and debugging tool, not a production server. For actual production scenarios, you would run your Flask application inside a WSGI host to serve the application content (e.g. gunicorn, uWSGI, etc.). You can think of WSGI as essentially the interface that translates from HTTP requests to Python function calls.

Apache, on the other hand, is a web server software used to handle incoming HTTP requests.
And it can handle them in several ways. The obvious is the traditional approach of just serving files from disk. However it can also act as a proxy in front of any other http server... like your WSGI host. This has some advantages, as WSGI hosts may not be optimized for serving high latency requests over the internet, and are not efficient for serving static data (images, css, js, etc...) that go along with your application.

So typically you'd set up something like this, where apache (or similar) is your public internet facing server that users connect to, and it serves both static file content directly, and then proxies dynamic requests to your wsgi container, which calls your python code:
Code: [Select]
┌────────┐    ┌──────────┐    ┌─────┐
│ apache ├─┬──► gunicorn ├────►flask│
└────────┘ │  └──────────┘    └─────┘
           │
           │  ┌────────────┐
           └──► filesystem │
              └────────────┘
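The WSGI interface mentioned above is small enough to show directly; this is roughly the kind of callable gunicorn would host, written against plain PEP 3333 with no Flask (the `/temperature` path is invented for illustration):

```python
def app(environ, start_response):
    """A complete WSGI application: a single callable, per PEP 3333."""
    path = environ.get("PATH_INFO", "/")
    if path == "/temperature":
        status, body = "200 OK", b"23.5"
    else:
        status, body = "404 Not Found", b"not found"
    start_response(status, [("Content-Type", "text/plain"),
                            ("Content-Length", str(len(body)))])
    return [body]          # an iterable of byte strings

# gunicorn would serve this with e.g.:  gunicorn module_name:app
```

A framework like Flask builds routing, templating, and request parsing on top of exactly this calling convention.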
 
The following users thanked this post: Nominal Animal, Mtech1

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Required protocols
« Reply #48 on: October 30, 2023, 08:01:34 pm »
I concur with Abeyer above.

Both Apache and Nginx provide http/https services.  The reason to use them as opposed to Flask or http.server, is that they are designed to face the open internet.  This forum, for example, runs under Nginx.  Both are also easy to configure to provide TLS security, so that the server-side scripts and sub-services don't need to handle that at all.

If we use "httpd" for either Apache or Nginx (as in http daemon), then the basic scheme is this:
                             ┌── files
    Browser ═══ Apache/Nginx ┼── FastCGI/WSGI scripts
                             └── proxying
The double-lined part is normally protected with TLS, but since it is done by Apache/Nginx, we do not need to know its implementation details, only the principles of how transport layer security (and certificates) work, and how to configure Apache/Nginx correctly for them.

Files includes HTML files, JavaScript files run on the client browser, images, and so on.  In Linux, Apache/Nginx is configured to run as a specific user and group, so that normal filesystem access controls are easily used to limit what files can be accessed via the server.

FastCGI/WSGI scripts use an agent, a bit misleadingly called a FastCGI/WSGI "server".  In Linux, they also run as a specific user and group, and can be started either separately (run as a separate service), or by Apache/Nginx as needed.  The FastCGI/WSGI "server" communicates with Apache/Nginx using a socket connection, and forks (and executes if a separate script is to be run) to provide the response for each query.

Proxying here refers specifically to reverse proxying, where Apache/Nginx forwards the request for something else to handle.  The contents will still be under Apache/Nginx control (it is even possible to edit HTML content on the fly using pattern matches), and the actual protocol used for the result can be anything the Apache/Nginx proxy module in use supports; there are a lot of choices.  In OP's case, WebSocket proxying is of particular interest.

(I have used reverse proxying on real servers, for example so I could run an "untrusted" Java-based http server as a normal user, with Apache reverse-proxying it with TLS support and fixing up any links in the Java-produced HTML code.  This meant that while it was accessible from anywhere on the internet, I could trust the Apache-based TLS implementation and access control, and could treat the Java server as if it was a human user I had limited trust in.  It is not a "hack" when correctly used.  Nginx in particular tries hard to make reverse proxying efficient, too.)

If you trust the network between the machine running Apache/Nginx and the machine running the WebSocket service, you can use plain TCP/IP, saving resources and effort by not implementing TLS.  If they run on the same machine, one can use an IPv4 loopback address (127.x.y.z), which is not accessible at all from outside that machine, or better yet, a Unix domain stream socket.  (It is easy to verify the privileges of the other side when connected via a Unix domain stream socket, but it is limited to within the same machine.)
However, ESP32 does have TLS support, too.
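Verifying the peer's privileges on a Unix domain socket uses the Linux-specific SO_PEERCRED option; here it is demonstrated on a socketpair, where both ends belong to the same process:

```python
import os
import socket
import struct

def peer_credentials(conn):
    """Return (pid, uid, gid) of the process on the other end (Linux only)."""
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    return struct.unpack("3i", creds)

# Demo: both ends of a socketpair belong to this very process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = peer_credentials(a)
# A server would compare uid/gid against an allow-list before trusting the peer.
a.close(); b.close()
```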

In practice, if your Apache/Nginx responds to request for hostname yourserver.yourdomain, and you have TLS configured, you might use
    https://yourserver.yourdomain/garden/sensors/anything
for a websocket that gets forwarded from Apache/Nginx directly to the ESP32 module (including the anything part), the response generated there, and flown back via Apache/Nginx to the user browser.  If the network is untrusted, TLS and both server and client certificates can be used to stop unauthorized access to the WebSocket provided by the ESP32 module.  At the same time,
    https://yourserver.yourdomain/no-slashes-name
    https://yourserver.yourdomain/js/anything
    https://yourserver.yourdomain/css/anything
could serve static files (from e.g. /var/www/yourserver.yourdomain/static/, /var/www/yourserver.yourdomain/static/js/, and /var/www/yourserver.yourdomain/static/css/, respectively), and
    https://yourserver.yourdomain/control/anything
be redirected through your FastCGI "server" to an executable script at /var/www/yourserver.yourdomain/fastcgi/anything.

With ESP32, as long as the ESP32 IP address is known and visible to the user browser, then
    Browser ═══ ESP32
is also possible.  The problem there is that if too many browsers try to connect to ESP32 (or any microcontroller) at the same time, it tends to become unresponsive; doing this intentionally is called a Denial of Service Attack.  (If you use multiple machines to connect to the same target, it is Distributed Denial of Service Attack, or DDOS.)
Both Apache and Nginx can be configured to limit the number of requests they serve, and the number of connections they proxy, at any given time; and client and server certificates (or even shared secrets or challenge-response pairs in request headers) can be used on the ESP32 end to reject all other accesses except authorized ones.

To simplify the microcontroller implementation, a FastCGI/WSGI script can also make a raw TCP/IP or UDP/IP connection to the ESP32 module.



In reply #40, there is an additional service, written in C or Python, that communicates with the ESP32 or other microcontroller module (using UDP/IP) to poll the sensors, saving the readings in an archive (an SQLite database on the server); it runs regardless of whether Apache/Nginx is even running.

A WebSocket service, also written in C or Python, running on the same machine as Apache/Nginx and only accepting connections from Apache/Nginx (if using an Unix domain socket between the two), responds to sensor data requests by reading those SQLite databases, with the data in a JavaScript-friendly format.
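The data-formatting half of such a WebSocket service is just a query plus JSON encoding; the table layout below is an assumption for illustration:

```python
import json
import sqlite3

def readings_as_json(db, sensor, since):
    """Fetch readings for one sensor and encode them JavaScript-friendly."""
    rows = db.execute(
        "SELECT timestamp, value FROM reading"
        " WHERE sensor = ? AND timestamp >= ? ORDER BY timestamp",
        (sensor, since)).fetchall()
    return json.dumps({"sensor": sensor,
                       "points": [{"t": t, "v": v} for t, v in rows]})

# Demo with an in-memory database standing in for the archive:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reading (sensor TEXT, timestamp REAL, value REAL)")
db.executemany("INSERT INTO reading VALUES (?, ?, ?)",
               [("temp0", 1.0, 20.5), ("temp0", 2.0, 21.0), ("temp1", 1.5, 19.0)])
payload = readings_as_json(db, "temp0", 0.0)
```

The browser-side JavaScript can feed such a payload straight into a charting routine, which is what makes the format "JavaScript-friendly".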

Client-side certificate verification would be done by Apache/Nginx.  There are many different ways username+password authentication can be done, but typical cookie-based ones involve an authorization agent that compares the username+password pair to a database of username+salt+hash-of-salted-password triplets (with unique usernames); often implemented via FastCGI or WSGI.
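The username+salt+hash-of-salted-password scheme can be sketched with the standard library alone (the iteration count here is kept low for the example; tune it upward for production):

```python
import hashlib
import hmac
import os

def make_record(username, password, iterations=100_000):
    """Store only the salt and the salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return (username, salt, digest)

def verify(record, username, password, iterations=100_000):
    stored_user, salt, digest = record
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # compare_digest avoids leaking information through timing differences
    return stored_user == username and hmac.compare_digest(attempt, digest)

rec = make_record("alice", "correct horse")
```

Because each record carries its own random salt, identical passwords produce different hashes, and a stolen database cannot be attacked with a single precomputed table.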

Static HTML+CSS+JavaScript pages would provide the human users ways to select which sensors and what date/time ranges to view, and so on.
JavaScript code would dynamically pull data from the WebSocket service; including real-time pushing of new data points by the WebSocket service.

If we replaced the WebSocket service with a FastCGI/WSGI script, then the page itself would need to reload to update the shown data.

Both approaches are valid.

I chose the WebSocket one, even though it adds a bit of complexity, because the WebSocket approach allows the WebSocket service to continually push updated sensor readings for existing connections –– the browser can poll, or can simply wait for the service to provide updates ––, and typical human users nowadays expect continuous updates (as opposed to the page automatically reloading every now and then to show hopefully updated data).

Furthermore, if one rents a virtual private server that connects to the WebSocket service running on an SBC or an ESP32, the whole setup is easier to defend.  Blocking general internet access to the service except from the virtual private server provider (either the particular server's IP address, or the provider's range of IP addresses) limits any DDOS'ing to those virtual private servers; then, TLS + certificates or challenge-response authentication can ensure that only the Apache/Nginx under your control, proxying the WebSocket connections, can access the actual WebSocket service.  You could keep the sensor archives on the SBC or ESP32 then, so you'd have the somewhat simpler
    Browser ═══ VPS:Apache/Nginx ┬── WebSocket proxying ━━━ ESP32 (camera and sensors)
                                 ├── files
                                 └── FastCGI/WSGI scripts
where the double line denotes TLS-secured TCP/IP (HTTP or WebSocket) connection, thin single lines denote stuff internal to the Virtual Private Server, and the thick single line denotes either a TLS-secured TCP/IP (WebSocket) connection, or a custom secured UDP/IP or QUIC/IP connection.
(VPS providers often take backups of the server files, have some uptime guarantees, and usually have some experience dealing with DOS/DDOS, so depending on the price, it can be a better deal than managing your own server at your home, especially if you have just a single internet connection which could be easily overwhelmed by even accidental Slashdotting.)
The downside of this is that when viewing the ESP32 data from the local network, the data still travels to the VPS and back.

When you have a small SBC on the local network acting as a server, then the overall scheme with a VPS can be
    LanBrowser ═══ SBC:A/N ┬── files
                           ├── FastCGI/WSGI scripts
                           └── WebSocket proxy ┐
                                               ╠══ SBC:SensorWebSocket ━━━ ESP32
    ExtBrowser ═══ VPS:A/N ┬── WebSocket proxy ╝
                           ├── files
                           └── FastCGI/WSGI scripts
where the sensor web socket service on the SBC is the only one directly connected to the ESP32, and internally also archives the sensor data.  The Virtual Private Server WebSocket proxying can go through the SBC Apache/Nginx (being proxied twice), or the web socket service can (also) be connected via TLS-secured TCP/IP from the VPS.
This scheme allows completely separate access control and authentication on the local network (configured in the Apache/Nginx on the SBC), but means you need to maintain two different instances of Apache or Nginx, each with a different configuration.
 
The following users thanked this post: abeyer, Mtech1

