Products > Embedded Computing

Running a web server on an embedded system - risky?


It seems to me that anyone who sets up a device which has a "server" on it, listening on an open port, is going to see it trashed eventually. There are thousands of Chinese and Russian port scanners running constantly, sweeping IP ranges, so even if you don't publish a DNS entry it will be discovered within hours and hit with a string of attacks.

The server code will be something which came with Cube IDE or whatever, i.e. poor-quality code, with whatever patches were applied as issues turned up on Stack Exchange and other forums. It won't be kept up to date in the sense that a normal web server, or PHP or whatever language runs on it, is updated as new attacks come to light. Embedded systems are frozen, with no regular updates like e.g. Windows gets.

If there is no hard disk or other filesystem storage, the damage probably won't be permanent, in the sense that a power cycle will get it back running again.

I know this is the standard "IoT security" debate, but most people address it by requiring all sorts of authentication/encryption, so you end up running HTTPS, which is a massively bloated pile of code with its own quantity of bugs, and it needs a lot of memory (two 16k buffers, for a standard TLS implementation) which is a struggle to find on smaller CPUs. But that isn't the point; the back doors won't be found by cracking TLS or AES-256. They will be right where they always were, e.g. you will be able to crash the server by sending it some malformed packet which overflows a buffer. Hackers get plenty of enjoyment just by crashing some device; it is not necessary to steal 1000000 credit card details :)
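The malformed-packet bug class is worth making concrete. A minimal sketch (the packet format and function names here are hypothetical, invented for illustration): a handler that trusts an attacker-supplied length field is exactly the crash vector described above, and the fix is a bounds check before the copy.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical embedded packet format: [1-byte payload length][payload].
 * A naive handler doing memcpy(out, pkt + 1, pkt[0]) with no bounds
 * check lets a malformed packet overflow 'out' and crash the device.
 * The checked version rejects the packet instead. */

#define PAYLOAD_MAX 32

/* Returns 0 on success, -1 if the packet is malformed. */
int handle_packet(const uint8_t *pkt, size_t pkt_len, uint8_t *out)
{
    if (pkt_len < 1)
        return -1;
    uint8_t claimed = pkt[0];
    /* Reject a length that exceeds the buffer or the bytes we received. */
    if (claimed > PAYLOAD_MAX || (size_t)claimed > pkt_len - 1)
        return -1;
    memcpy(out, pkt + 1, claimed);
    return 0;
}
```

The point is that no amount of TLS fixes this; the check has to be in the parser itself.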

When one looks at how much work, over so many years, has gone into patching up the various server operating systems, the embedded scene has no chance. And once a back door is found, in most cases nobody will be deploying patches remotely. Anyone doubting this only needs to look at the world of consumer routers. They are full of bugs, mostly not fixed, and any fix, if made available, needs to be downloaded by the user, in the form of a firmware upgrade.

Most patches for embedded platforms are never published because they were developed in a company's paid time so the dev won't be uploading them anywhere.

Nominal Animal:

--- Quote from: peter-h on November 12, 2021, 06:56:00 am ---It seems to me that anyone who sets up a device which has a "server" on it, and is on an open port, is going to see it trashed eventually.
--- End quote ---
No, unless by "anyone" you mean "anyone who does not know how to do it securely".
Of course, you cannot add security on top, making an insecure thing somehow secure, so if the code you use is insecure, it is insecure.  You can cage an insecure thing, so that if compromised, only the contents of that cage get trashed, but that's it.

The first step in securing anything is to make sure it runs with the minimum privileges necessary.  You don't need to run a web server as root, just because you want it to be able to bind to ports 80 (http) and 443 (https).  The (second) worst mistake, and the reason why WordPress, phpBB etc. are inherently insecure, is giving the server write access to itself.  (The worst mistake is doing something like chmod 0777 --recursive /var/html, which is Linux/POSIX-speak for "feel free to write anything here, everybody!  I'll publish it on the net for you.")  Essentially, you can make sure that even if the server is compromised, the attacker cannot do anything, unless they already have a privilege escalation.  In Linux, this involves understanding capabilities, prctl(), and limiting resources available to a single process via setrlimit().
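The resource-limiting part of this step reduces to a few syscalls. A minimal Linux sketch (the function name is illustrative; a real server would additionally drop its uid/gid to an unprivileged account and set prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) before serving requests):

```c
#include <assert.h>
#include <sys/resource.h>

/* Cap the number of open file descriptors for this process, so a
 * compromised or leaking server cannot exhaust the system's fd table.
 * setrlimit() applies to this process and everything it spawns. */
int cap_open_files(rlim_t limit)
{
    struct rlimit rl = { .rlim_cur = limit, .rlim_max = limit };
    return setrlimit(RLIMIT_NOFILE, &rl);
}
```

The same pattern applies to RLIMIT_AS (address space), RLIMIT_NPROC, and so on; the idea is that the damage a compromised process can do is bounded before it ever accepts a connection.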

The second step is to use a firewall, and apply some sort of fail-to-ban policy.  fail2ban is easiest to use on POSIXy machines with enough RAM, but there are other options for embedded/SBCs with a limited amount of RAM.  Essentially, you use failed connection attempts from the same source to tell your firewall to completely drop any connection attempts from that address for a specified period (depending on the attempt); I use 24 hours.  I use fail2ban on all my machines, actually, triggering on at least failed SSH connection attempts.  This takes care of brute-force attacks.  Additional tools like tripwire can be used to detect a successful intrusion at the point where the attacker manages to modify configuration files or system binaries.
(I've been tinkering with a really lightweight daemon that listens on specific ports, and blocks (adds temporary firewall rules) for any IP addresses that attempt to connect to those ports.  It is more severe than fail2ban, but also requires very little memory and kernel resources to run; very suitable for routers and such.)
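The core bookkeeping of such a lightweight fail-to-ban daemon is tiny. A sketch of the counting logic only, in a fixed-size table suitable for a RAM-constrained device (names and thresholds are illustrative; a real daemon would also timestamp entries so bans expire, e.g. after 24 hours, and would push the ban decision into actual firewall rules):

```c
#include <assert.h>
#include <stdint.h>

/* Track failed attempts per IPv4 address in a fixed table; report when
 * an address crosses the ban threshold.  Uses no dynamic allocation. */

#define MAX_TRACKED 16
#define BAN_AFTER    3   /* failures before we ban */

struct entry { uint32_t ip; unsigned failures; };
static struct entry table[MAX_TRACKED];

/* Record one failure from 'ip'; returns 1 if it should now be banned. */
int record_failure(uint32_t ip)
{
    struct entry *free_slot = 0;
    for (int i = 0; i < MAX_TRACKED; i++) {
        if (table[i].failures > 0 && table[i].ip == ip)
            return ++table[i].failures >= BAN_AFTER;
        if (table[i].failures == 0 && !free_slot)
            free_slot = &table[i];
    }
    if (free_slot) {
        free_slot->ip = ip;
        free_slot->failures = 1;
        return 1 >= BAN_AFTER;
    }
    return 0; /* table full: a real daemon would evict the oldest entry */
}
```

The whole state fits in 128 bytes here, which is the sort of footprint that works on a small router or MCU where fail2ban itself would never fit.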

The third step, assuming Linux-based embedded system, is to enhance the server code with an unrevokable seccomp filter.  Essentially, you limit the syscalls the server process can perform, using a kernel-based filter.  If your server simply reads some files and maybe collates them somehow, and serves them to clients (a dashboard of some kind), you can limit it to an extremely small set of syscalls.  (This isn't difficult after you discover the set of syscalls you need –– and you can use strace to check at run time ––; see this example "my friend" Blabbo the Verbose has posted at SO.)  This seccomp filter minimises the attack surface in case the server itself is compromised, so that privilege escalation via a kernel bug is even less likely to happen.
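The allow-list filter itself is a short classic BPF program. A sketch (assuming x86-64/Linux; a production filter must also check seccomp_data.arch against the expected AUDIT_ARCH_* value first, so a process cannot evade the filter by switching syscall ABIs):

```c
#include <assert.h>
#include <stddef.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/syscall.h>

/* Permit only read, write and exit_group; kill the process on anything
 * else.  A real server installs this with:
 *   prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
 *   prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
 * We deliberately do not install it here, since it would kill this demo
 * on its first syscall outside the list. */

#define ALLOW(nr) \
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, (nr), 0, 1), \
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW)

static struct sock_filter filter[] = {
    /* Load the syscall number into the accumulator. */
    BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
             offsetof(struct seccomp_data, nr)),
    ALLOW(__NR_read),
    ALLOW(__NR_write),
    ALLOW(__NR_exit_group),
    /* Anything else: kill the process. */
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
};

static struct sock_fprog prog = {
    .len    = sizeof(filter) / sizeof(filter[0]),
    .filter = filter,
};
```

Once the filter is installed it cannot be removed or widened by the process, which is what makes it useful against a compromised server: even with arbitrary code execution, the attacker only has those three syscalls to work with.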

The fourth step is to use a separate SBC to collect the logs from the internet-facing machines; the log collector itself has no ports open to the internet and is only accessible via your LAN.  This lets you check the status of your various machines in a centralized location.  In case one of the server machines is compromised, it is unlikely the attackers also manage to break into your logging machine (because you use different usernames and passwords to access the logging machine!), so they cannot clean up after themselves.

So, as you can see, although risk mitigation isn't "trivial", there are many lengths to which you can go.  The very first step is to limit the attack surface in the case where an attacker manages to exploit an existing security hole.  Then you can choose additional steps to reliably detect if that happens, and to strengthen the security of existing server code.

The embedded devices I have exposed to the internet right now are all router-type devices, all of them running some variant of OpenWRT.  It is not perfect, but most of the related security advisories are to do with the management interface and not the router capabilities.  Which is why I configure mine with one dedicated physical admin port, so that the admin interfaces are not accessible at all via WLAN or from the internet port.  I prefer to have a separate outer-edge firewall with an SSH service on a nonstandard port, and an instant ban on attempts to the standard SSH port (22), and my wireless routers just bridge between WLAN and (untrusted) LAN.  This means that you cannot actually access my WLAN/LAN routers from the internet (even if you get through my firewall) nor via a wireless connection, since they don't have an IP address exposed.
(There would have to be a lower, Ethernet-level security hole in the Linux kernel to get through.  It's such a simple and old protocol that I highly doubt such a hole exists, even for state-level actors.)

The discussion on how to write service-type code that is secure from external attacks is somewhat related, and a topic woefully full of garbage and misinformation on the net.  DJB's qmail is a good example of how to design security into code (although I cannot really decide whether the integer overflow bug, the only serious one ever found, ought to have earned the $500/$1000 bounty), and yet, a LOT of people really, really dislike DJB's code.

DiTBho:
From my limited personal experience with a web server exposed to the internet 24/7, even if you choose a "hardened" GNU/Linux distribution, the problem is not the HTTP server itself (1) but rather the language support and its requirements.

For example, PHP suffers from some problems that can be exploited to damage a MySQL database. Python is a better option there and Ruby on Rails is the best; unfortunately many web applications are written in PHP, so you face a practical choice:

* either you "trust" a piece of PHP code that should have been "sanitized" (cleaned up)
* or you have to study and learn Python or Ruby to rewrite the web application yourself.
(1) { Lighttpd, NGINX, Apache2, ... }

Nominal Animal:
(I don't know everything either, obviously.  I have done web server development and maintenance and web master stuff since '97, sometimes as paid work, sometimes as a volunteer, occasionally just for fun; but always with an emphasis on the security.)

--- Quote from: DiTBho on November 12, 2021, 10:39:14 am ---From my limited personal experience with a web server exposed to the internet 24/7, even if you choose a "hardened" GNU/Linux distribution, the problem is not the HTTP server itself (1) but rather the language support and its requirements.
--- End quote ---
True, but this affects all web services, not just embedded ones.

One really should run any scripts using a dedicated, less-privileged account, with the script files owned by yet another account, so that the scripts cannot modify themselves.  With PHP and Python, this is easiest to implement using FastCGI right now.  The annoying thing is, none of the web hotels (cheap web site hosting services) support multiple OS accounts, so one needs to configure and maintain their own (virtual) server to do this right.  >:(

When one does go that route, the very first thing to do is utterly counterintuitive: you'll want to change the location of both the runtime configuration of your server (Apache, Nginx, Lighttpd, etc.), and their document roots.  The reason is that you need to be in control of the configuration, not the package manager, because only you can tell what configuration is optimal for your use case.  Keeping the original configuration directories means you can occasionally easily check if the default configuration changes anything important (thus far, basically only when switching server major versions), and "import" any useful changes to your own config.

I used to have a web page showing an example of such configuration.  I've been thinking of switching to a virtual server (I'm using a web host now), but the cost difference has thus far been bigger than how much I care, because the web is full of related garbage advice, and I don't know any way to prove (except for my own experiences in various environments, including universities) that it works better.  Just being another voice among a shouting throng isn't my scene, really.

The above responses - while I agree 100%, to the extent of my "proper server" involvement - are based on devices which have a "proper OS" (i.e. a target), a hard disk (a target), scripting languages (a nice target), open source code (a fantastically easy target), and firewalls (often badly configured). But some embedded device won't have a fail2ban feature, because that needs storage, and potentially a lot of it. And fail2ban works only if the first attempt was unsuccessful ;)

I guess the Q is: what could you trash on a typical embedded HTTP/HTTPS server, say a central heating controller, which has the usual bug-ridden and partly patched code from STM and bits found posted online? You could presumably crash it with malformed packets, so it needs a power cycle to run again. That is bad enough, because the attack can be done remotely, and - because the code can't be patched - you can just keep doing it, every night at 3am :)
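The usual mitigation for the remote-crash-at-3am scenario is an independent hardware watchdog, so the device power-cycles itself instead of waiting for a human. The bookkeeping reduces to a feed/expire check; a hardware-independent sketch (function names are illustrative; on a real MCU the "expired" case would be the watchdog peripheral pulling the reset line, e.g. the STM32 IWDG):

```c
#include <assert.h>
#include <stdint.h>

/* The main loop "feeds" the watchdog each time it completes a cycle;
 * a timer tick checks whether the deadline has passed.  Unsigned
 * subtraction makes the comparison safe across counter wraparound. */

#define WDT_TIMEOUT_TICKS 100

static uint32_t last_feed;

void watchdog_feed(uint32_t now)    { last_feed = now; }

int watchdog_expired(uint32_t now)
{
    return (uint32_t)(now - last_feed) > WDT_TIMEOUT_TICKS;
}
```

This doesn't fix the underlying bug, of course; it just turns "bricked until someone drives out to the site" into "a reboot blip in the logs", which for an unattended device is a big difference.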

In fact the vulnerability will probably never be identified, because the customer has no access to the firmware, the vendor is not interested in supporting a two-year-old product, and in any case, unless you set up a packet logger on the connection, you will never find out what did it.

