
Embedded IoT device check


Siwastaja:
Hi,

I'm quite new to security. We have an IoT device under development, and I feel like I'm still doing some things in a cargo-cult way, which I don't like. So I would greatly appreciate a sanity check of the fundamental ideas (and the smaller details, too). If anything is weird or incorrect, please comment. I hope this is helpful to others too, since the internet has a lot of information about configuring server infrastructure for things like HTTPS, but little about IoT client devices. (There are, however, many small startups selling complete secure-MQTT-over-TLS library solutions; for now, we want to gain understanding and do it ourselves using widely used libraries, instead of placing trust in small unknown companies.)


* The device actually needs bidirectional exchange of data (information / commands) between the server ("cloud") and device, this is not an "IoT for fun" coffeemaker
* The device is complex enough that remote firmware update is essential
* Payloads are from a few bytes to a few hundred bytes per second. Latency of messages or opening the socket is irrelevant.
* Primary threat model is someone taking control of a device, either by feeding it unwanted commands directly or by replacing the firmware with a malicious version
* Forward secrecy is good to have, but being able to decipher collected data years or decades later is not catastrophic (the old information is not that sensitive).

* Device only implements TLS client, and never acts as server
* Device only connects via a single TCP socket to an MQTT broker managed by us
* MQTT is used for user access control on per-topic basis (e.g., only client ABCD1234 can subscribe and publish to private/ABCD1234/#)
* Device firmware update is performed using MQTT messages, subject to the same access control
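
For the broker side, assuming a Mosquitto broker configured so that the TLS client certificate's CN becomes the username, this per-topic restriction can be expressed with a single pattern rule (file paths here are made up):

```
# mosquitto.conf fragment: client certs are mandatory and the cert CN is the username
listener 8883
cafile   /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/broker.crt
keyfile  /etc/mosquitto/certs/broker.key
require_certificate true
use_identity_as_username true
acl_file /etc/mosquitto/acl

# /etc/mosquitto/acl
# %u expands per client to its username (= cert CN, e.g. ABCD1234),
# so each device can only subscribe/publish under private/<its own ID>/#
pattern readwrite private/%u/#
```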

* We generate our own CA certificate. (The domain names belong in the broker's server certificate rather than in the CA cert; should the broker cert cover multiple broker domain names, or should there be one server cert per domain?)
* During production flashing, each device is provisioned with a unique 32-bit device_id and a unique private key
* Private key is not stored anywhere else except device flash (production flashing/testing PC can pipe the key to the SWD programmer software in order to not store it on disk, or even airgapping this production PC is possible).
* Getting to the private key of the device is possible with physical access to the debug pins, but with physical access, one could replace the whole device with another legitimate device under their control anyway, so leaking the private key only sacrifices the forward secrecy of that single device
* During flashing, CSR and public key are generated from the private key.
* CSR will stay unchanged for the whole product lifetime, right?
* In case new CA certs need to be installed, new client certificates can be issued from the stored CSRs, and the client certs updated without ever touching the private key, right?
* Common Name of client certificate = unique 32-bit device ID (which also appears in mqtt topic name)
* Server which collects data and controls all the devices, has its own public and private key. The safety of this server and its key management is possibly the most important part of the whole shebang, right?
* Another threat model to be taken seriously is firmware bug allowing extraction of private key, which resides in memory. (MCU in question has no MMU, so all the code needs to be written carefully.)
* Although one would need to gain some kind of access to the devices to exploit such a bug. With DNS poisoning, one could get the devices to connect to a false broker, but without the correct broker's private key, server authentication would fail, and the attack surface exposed to the firmware would be greatly limited, no?
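
To make the above concrete, this is roughly how the cert chain can be produced with stock OpenSSL (all file names and subjects here are made up; in production the device key would be generated on, or piped straight to, the programmer rather than written to disk):

```shell
# One-off: our own EC CA (prime256v1 is OpenSSL's name for secp256r1 / P-256)
openssl ecparam -genkey -name prime256v1 -noout -out ca.key
openssl req -new -x509 -key ca.key -subj "/CN=Example IoT CA" -days 3650 -out ca.crt

# Per device at flashing time: private key + CSR with CN = 32-bit device ID
openssl ecparam -genkey -name prime256v1 -noout -out device.key
openssl req -new -key device.key -subj "/CN=ABCD1234" -out device.csr

# Issue the client certificate; the CSR is archived, the key goes only to the device
openssl x509 -req -in device.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 3650 -out device.crt

# Later, if the CA is rotated: a fresh cert from the SAME stored CSR,
# without ever touching the device's private key
openssl ecparam -genkey -name prime256v1 -noout -out newca.key
openssl req -new -x509 -key newca.key -subj "/CN=Example IoT CA 2" -days 3650 -out newca.crt
openssl x509 -req -in device.csr -CA newca.crt -CAkey newca.key \
    -CAcreateserial -days 3650 -out device2.crt
```

`openssl verify -CAfile ca.crt device.crt` should then report OK for the first cert, and the same check against newca.crt for the re-issued one.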

* Protocol is TLS1.3 only, getting rid of all obsolete/unsafe cipher suites.
* The client will support only the single mandatory ("MUST") cipher suite of the TLS 1.3 specification, namely TLS_AES_128_GCM_SHA256
* I assume TLS_AES_128_GCM_SHA256 is considered safe. I assume it offers some resource savings over TLS_AES_256_GCM_SHA384 given CPU and memory constraints. (Have not tested yet, though.)
* Interwebz says TLS_AES_128_GCM_SHA256 has a 128-bit key (source: IBM). Internetz also says TLS1.3 has minimum key size of 256 bits (source: IBM). These are in contradiction. I have no idea what components there are in a key and how the "key size" is defined. Binary key files are always significantly longer but they contain some metadata etc. Maybe I should not care and just call the functions with the correct arguments, but I'm just wondering.
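
A quick host-side check that a TLS 1.3 endpoint really negotiates this suite, using a throwaway local server (port and file names are arbitrary):

```shell
# Throwaway self-signed EC server cert for a local test endpoint
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
    -keyout srv.key -out srv.crt -subj "/CN=localhost" -days 1
openssl s_server -accept 18883 -key srv.key -cert srv.crt -tls1_3 -quiet &
SRV=$!; sleep 1
# Offer only the mandatory suite; the handshake summary should report
# a TLSv1.3 connection with cipher TLS_AES_128_GCM_SHA256
openssl s_client -connect localhost:18883 -tls1_3 \
    -ciphersuites TLS_AES_128_GCM_SHA256 </dev/null | grep "Cipher"
kill $SRV
```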

* MCU is Cortex-M4 at 64MHz
* For encryption, available ROM is ~120k and RAM is ~50k
* mBEDTLS is being used
* It seems I can quite easily fit the ROM and RAM targets so far.

* RFC8446 says: "A TLS-compliant application MUST support digital signatures with rsa_pkcs1_sha256 (for certificates), rsa_pss_rsae_sha256 (for CertificateVerify and certificates), and ecdsa_secp256r1_sha256."
* Does this imply I have to enable RSA support in mBEDTLS anyway (MBEDTLS_RSA_C)?
* As all certificates are managed by us, they should be EC, for memory savings if nothing else, right?
* In such case, would it be OK not to support RSA certificates in the client code? (I don't care if the device does not adhere to TLS RFC, as long as it practically interoperates correctly with the server and all-EC certs managed by us).
* EC certificates can be generated with gazillion of elliptic curves (openssl ecparam -list_curves), most of which are not available on the client (MBEDTLS_ECP_XXXX_ENABLED: Enables specific curves within the Elliptic Curve module.)
* Do we need to generate the certs (CA, broker, client) with such curve that is enabled in MBEDTLS_ECP_XXXX, and if yes, what curve is deemed sufficient / commonly used / functional / safe?
* RFC8446 says: "A TLS-compliant application MUST support digital signatures with ... ecdsa_secp256r1_sha256". This is also available in mBEDTLS. However, secp256r1 does not appear in the output of "openssl ecparam -list_curves" (OpenSSL 1.1.1f  31 Mar 2020). Does this mean OpenSSL is not TLS1.3 compliant, or that I don't understand what I am reading, and any other curve does fine for authentication, too?
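
(Partially answering myself here: the curve is there; OpenSSL just lists it under its ANSI X9.62 name prime256v1, which is the same curve as secp256r1 / NIST P-256.)

```shell
# secp256r1 shows up in OpenSSL under the alias "prime256v1"
openssl ecparam -list_curves | grep prime256
# and a key generated with it reports the expected curve OID:
openssl ecparam -genkey -name prime256v1 -noout -out t.key
openssl ec -in t.key -noout -text | grep "prime256v1"
```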

* What else?

Siwastaja:
So far, I have everything running solidly, so a few comments.

* Handshake is just around 2 seconds on Cortex-M4 @ 64MHz, not a big deal
* I can now make sense of the seemingly contradictory IBM claims: in TLS_AES_128_GCM_SHA256, the per-direction AES key really is 128 bits, but it is derived via HKDF from a 256-bit traffic secret (the SHA-256 output length), which is presumably where the "256-bit key" figure comes from.
* I now know that a typical private EC key file contains the private key, the public key, and an object identifier naming the EC curve used, encoded using the ASN.1 DER standard (basically a simple tag-length-value binary encoding). The rest is headers.
* I modified mBEDTLS code to accept a raw private key, because the nRF52 only has 128 bytes of non-volatile user registers in a separate flash region (called "fuses" on some other MCUs). A 51-byte key file is just too much when the actual key is just 32 bytes, and since the client supports only one curve, there is no need to store which curve is used.
* mBEDTLS generates the public key out of the private key if it's missing from the file; eyeballing it, this takes just a few hundred milliseconds.
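
The layout is easy to inspect with OpenSSL's ASN.1 dump (file name made up):

```shell
# A SEC1 EC key in PEM/DER: version INTEGER, the private key as a 32-byte
# OCTET STRING, the named-curve OID, and the public key as a BIT STRING
openssl ecparam -genkey -name prime256v1 -noout -out dev.key
openssl asn1parse -in dev.key
# The raw 32 private-key bytes are what actually has to fit into the UICR registers
openssl asn1parse -in dev.key | grep "OCTET STRING"
```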

bitwelder:
My 2 eurocent comment: Ensure that mbedTLS has a robust implementation of the security protocols.
E.g. check that it is not one of the versions that has an open CVE case: https://www.cvedetails.com/google-search-results.php?q=mbedtls

Siwastaja:

--- Quote from: bitwelder on January 20, 2023, 02:11:39 pm ---My 2 eurocent comment: Ensure that mbedTLS has a robust implementation of the security protocols.
E.g. check that it is not one of the versions that has an open CVE case: https://www.cvedetails.com/google-search-results.php?q=mbedtls

--- End quote ---

That's a good point, and it actually follows that one needs to monitor such security reports regularly. It's not a fire-and-forget thing. I see no way around this.

5U4GB:
That's... a lotta questions.  Some general notes:


* Run a MITM on yourself and see what happens, e.g. mitmproxy.  You should be hardcoding ("pinning") your CA certs and not trusting any others, also make sure that the client fully verifies the certs and the FQDN in them.  The majority of Android apps, which is what we have the most data for, don't do any verification so they'll connect to anything with a cert at any location and declare it's secure.

* Don't get too caught up with algorithms and key sizes and everything else.  I could run a TLS 1.0 connection from 30-odd years ago and it'd be more secure than about 80% of the stuff out there because it never checks which certs it's getting or which FQDN it's connecting to vs. the one in the cert and a bunch of other stuff.  Get your basic checks right and you'll be doing better than most of the stuff out there.

* If you're finding TLS 1.3 confusing, consider dropping back to TLS 1.2, which is no more or less secure than 1.3 if you're using a correctly-implemented version.  TLS 1.3 has the dual problems that it was optimised to make things easier for large content providers like Google so it offloads a lot of work onto the client, and it has every idea that every person in the standards group ever had added to it, making it incredibly complex to work with.

* Again because this is important, don't get caught up with algorithms and key sizes and other distractions.  An attacker couldn't care less what algorithm you use, they'll take advantage of the fact that you aren't checking a certificate or something similar.  Crypto is bypassed, not attacked, so make sure they can't bypass your crypto.
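
A cheap bench version of the first check above: stand up a server whose cert was NOT issued by your pinned CA and confirm the client refuses it. Simulated host-side with OpenSSL (all names made up), a properly verifying client must abort the handshake:

```shell
# The CA the client pins, and a rogue cert that CA never issued
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
    -keyout ca.key -out ca.crt -subj "/CN=Pinned CA" -days 1
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
    -keyout rogue.key -out rogue.crt -subj "/CN=broker.example.com" -days 1
openssl s_server -accept 18885 -key rogue.key -cert rogue.crt -quiet &
SRV=$!; sleep 1
# -verify_return_error makes the client abort on a failed chain check,
# which is exactly what the device must do
openssl s_client -connect localhost:18885 -CAfile ca.crt \
    -verify_return_error </dev/null || echo "rejected, as it should be"
kill $SRV
```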
