It seems to me that anyone who sets up a device which has a "server" on it, and is on an open port, is going to see it trashed eventually.
No, unless by "anyone" you mean "anyone who does not know how to do it securely".
Of course, you cannot bolt security on top of an insecure thing and somehow make it secure: if the code you use is insecure, it stays insecure. You can cage an insecure thing, so that if it is compromised, only the contents of that cage get trashed, but that's it.
The first step in securing
anything is to make sure it runs with the minimum privileges necessary. You don't need to run a web server as root just because you want it to be able to bind to ports 80 (http) and 443 (https). The (second) worst mistake, and the reason why WordPress, phpBB etc. are inherently insecure, is giving the server write access to itself. (The worst mistake is doing something like
chmod 0777 --recursive /var/html, which is Linux/POSIX-speak for "feel free to write anything here, everybody! I'll publish it on the net for you.") Essentially, you can make sure that even if the server is compromised, the attacker cannot do anything unless they already have a privilege escalation. In Linux, this involves understanding
capabilities,
prctl(), and limiting resources available to a single process via
setrlimit().
The second step is to use a firewall, and apply some sort of fail-to-ban scheme.
fail2ban is the easiest option on POSIXy machines with enough RAM, but there are other options for embedded systems/SBCs with a limited amount of RAM. Essentially, you use repeated failed connection attempts from the same source to tell your firewall to completely drop any connection attempts from that address for a specified period (depending on the attempt); I use 24 hours. I actually use fail2ban on all my machines, triggering at minimum on failed SSH connection attempts. This takes care of brute-force attacks. Additional tools like
tripwire can be used to detect a successful intrusion at the point where the attacker manages to modify the configuration files or system binaries.
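For concreteness, this is the kind of jail I mean, as an /etc/fail2ban/jail.local fragment. The 24-hour ban time matches what I use; the findtime and maxretry values are just reasonable defaults you should tune yourself:

```ini
[sshd]
enabled  = true
# after 3 failed attempts within 10 minutes...
maxretry = 3
findtime = 10m
# ...drop all traffic from that address for 24 hours
bantime  = 24h
```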
(I've been tinkering with a really lightweight daemon that listens on specific ports, and blocks (adds temporary firewall rules for) any IP addresses that attempt to connect to those ports. It is more severe than fail2ban, but also requires very little memory and few kernel resources to run; very suitable for routers and such.)
The third step, assuming a Linux-based embedded system, is to enhance the server code with an irrevocable seccomp filter. Essentially, you limit the syscalls the server process can perform, using a kernel-based filter. If your server simply reads some files, maybe collates them somehow, and serves them to clients (a dashboard of some kind), you can limit it to an extremely small set of syscalls. (This isn't difficult once you discover the set of syscalls you need, and you can use
strace to check at run time; see
this example "my friend" Blabbo the Verbose has posted at SO.) This seccomp filter minimises the attack surface in case the server itself is compromised, making privilege escalation via a kernel bug even less likely.
The fourth step is to use a separate SBC to collect the logs from the internet-facing machines; the log collector itself doesn't have any ports open to the internet and is only accessible via your LAN. This lets you check the status of your various machines in a centralized location. In case one of the server machines is compromised, it is unlikely the attacker will also manage to break into your logging machine (because you use different usernames and passwords to access the logging machine!), so they cannot clean up after themselves.
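As a concrete sketch, if the internet-facing machines happen to run rsyslog, shipping everything to the collector is a one-line drop-in fragment (the collector address here is purely illustrative):

```
# /etc/rsyslog.d/forward.conf on each internet-facing machine:
# send a copy of every log line to the central collector over TCP ("@@");
# a single "@" would mean UDP instead.
*.* @@192.168.1.10:514
```

The key property is that log lines leave the exposed machine immediately, so an intruder scrubbing the local logs after the fact changes nothing on the collector.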
So, as you can see, although risk mitigation isn't "trivial", there are many lengths you can go to. The very first step is to limit the attack surface in case an attacker manages to exploit an existing security hole. Then you can choose additional steps to reliably detect if that happens, and to strengthen the security of the existing server code.
The embedded devices I have exposed to the internet right now are all router-type devices, all of them running some variant of
OpenWRT. It is not perfect, but most of the related
security advisories have to do with the management interface and not the router capabilities. That is why I configure mine with one dedicated physical admin port, so that the admin interfaces are not accessible at all via WLAN or from the internet port. I prefer to have a separate outer-edge firewall with an SSH service on a nonstandard port and an instant ban on attempts to the standard SSH port (22), and my wireless routers just bridge between WLAN and (untrusted) LAN. This means that you cannot actually access my WLAN/LAN routers from the internet (even if you get through my firewall) nor via a wireless connection, since they don't have an exposed IP address.
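That instant-ban rule can be expressed directly in nftables, without any userspace daemon at all, using a set with timed entries. The table and set names and the 24-hour timeout below are illustrative:

```
# Sketch: anyone poking TCP port 22 gets all their traffic dropped for 24 h.
table inet instaban {
    set banned {
        type ipv4_addr
        flags timeout
    }
    chain input {
        type filter hook input priority filter; policy accept;
        ip saddr @banned drop
        tcp dport 22 add @banned { ip saddr timeout 24h } drop
    }
}
```

This is roughly the mechanism my lightweight daemon automates for arbitrary trap ports; the kernel expires the set entries on its own, so nothing needs to run in userspace.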
(There would have to be a lower-level, Ethernet-layer security hole in the Linux kernel to get through. Ethernet is such a simple and old protocol that I highly doubt such a hole exists, even for state-level actors.)
The discussion on how to write service-type code that
is secure from external attacks is somewhat related, and a topic woefully full of garbage and misinformation on the net.
DJB's
qmail is a good example of how to
design security into code (although I cannot really decide whether the integer overflow bug, the only serious one ever found, ought to have earned the $500/$1000 bounty), and yet, a LOT of people really, really dislike DJB's code.